
What is: Asymmetrical Bi-RNN?

Source: Asymmetrical Bi-RNN for pedestrian trajectory encoding
Year: 2021
Data Source: CC BY-SA - https://paperswithcode.com

An aspect of Bi-RNNs that could be undesirable is the architecture's symmetry in both time directions.

Bi-RNNs are often used in natural language processing, where the order of the words is almost exclusively determined by grammatical rules and not by temporal sequentiality. However, in some cases, the data has a preferred direction in time: the forward direction.

Another potential drawback of Bi-RNNs is that their output is simply the concatenation of two naive readings of the input in both directions. In consequence, Bi-RNNs never actually read an input knowing what happens in the future. Conversely, the idea behind U-RNN is to first do a backward pass, and then use information about the future during the forward pass.

We accumulate information while already knowing which parts of it will be useful in the future, which should be advantageous when the forward direction is the preferred direction of the data.

The backward and forward hidden states $h^b_t$ and $h^f_t$ are obtained according to these equations:

\begin{equation}
\begin{aligned}
h_{t-1}^{b} &= \mathrm{RNN}\left(h_{t}^{b}, e_{t}, W_{b}\right) \\
h_{t+1}^{f} &= \mathrm{RNN}\left(h_{t}^{f}, \left[e_{t}, h_{t}^{b}\right], W_{f}\right)
\end{aligned}
\end{equation}

where $W_b$ and $W_f$ are learnable weights that are shared among pedestrians, and $[\cdot, \cdot]$ denotes concatenation. The last hidden state $h^f_{T_{obs}}$ is then used as the encoding of the sequence.
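
As a concrete illustration, here is a minimal PyTorch sketch of this two-pass encoder. It is not the authors' implementation: the class name `URNNEncoder`, the choice of GRU cells for the two `RNN` blocks, and the embedding/hidden dimensions are assumptions made for the example.

```python
import torch
import torch.nn as nn


class URNNEncoder(nn.Module):
    """Sketch of a U-RNN trajectory encoder: a backward pass over the
    embedded inputs e_t, then a forward pass that also reads the
    backward hidden states h_t^b (hypothetical names and sizes)."""

    def __init__(self, embed_dim: int = 32, hidden_dim: int = 64):
        super().__init__()
        self.hidden_dim = hidden_dim
        # Backward cell: implements h_{t-1}^b = RNN(h_t^b, e_t, W_b).
        self.backward_cell = nn.GRUCell(embed_dim, hidden_dim)
        # Forward cell: implements h_{t+1}^f = RNN(h_t^f, [e_t, h_t^b], W_f).
        self.forward_cell = nn.GRUCell(embed_dim + hidden_dim, hidden_dim)

    def forward(self, e: torch.Tensor) -> torch.Tensor:
        # e: (T_obs, batch, embed_dim), the embedded observations.
        T, B, _ = e.shape
        h_b = e.new_zeros(B, self.hidden_dim)
        bwd = [None] * T
        # Backward pass: bwd[t] stores h_t^b, which summarizes e_{t+1}..e_T.
        for t in reversed(range(T)):
            bwd[t] = h_b
            h_b = self.backward_cell(e[t], h_b)
        # Forward pass: each step reads e_t concatenated with h_t^b.
        h_f = e.new_zeros(B, self.hidden_dim)
        for t in range(T):
            h_f = self.forward_cell(torch.cat([e[t], bwd[t]], dim=-1), h_f)
        # The last forward hidden state h_{T_obs}^f encodes the sequence.
        return h_f


# Example: encode a batch of 8 trajectories of 20 embedded steps.
encoder = URNNEncoder()
encoding = encoder(torch.randn(20, 8, 32))  # -> shape (8, 64)
```

Feeding $h^b_t$ into the forward cell's input is what lets the forward pass read each step while "knowing the future": the backward state at time $t$ summarizes the steps after $t$, so the final state $h^f_{T_{obs}}$ is an informed reading of the whole sequence rather than a concatenation of two naive ones.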