ICLR 2018


Workshop

Learning Longer-term Dependencies in RNNs with Auxiliary Losses

Trieu Trinh · Andrew Dai · Thang Luong · Quoc V Le

East Meeting Level 8 + 15 #6

We present a simple method for improving the learning of long-term dependencies in recurrent neural networks (RNNs) by introducing unsupervised auxiliary losses. These auxiliary losses force the RNN to either remember the distant past or predict the future, enabling truncated backpropagation through time (BPTT) to work on very long sequences. We experiment on sequences up to 16,000 tokens long and report faster, more resource-efficient training and better test performance than full-BPTT baselines such as Long Short-Term Memory (LSTM) networks and the Transformer.
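To illustrate how such an auxiliary loss can be attached to a standard RNN, here is a minimal sketch in PyTorch. All names, the MSE reconstruction objective, and the single-anchor setup are illustrative assumptions, not the authors' implementation: a small decoder is seeded with the encoder state at a randomly chosen anchor position and asked to reconstruct the preceding inputs, and this unsupervised term is added to the main classification loss.

```python
# Minimal sketch (assumptions, not the authors' code): an LSTM classifier with an
# unsupervised "reconstruct the distant past" auxiliary loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxLossLSTM(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_classes, aux_len=32):
        super().__init__()
        self.aux_len = aux_len                              # length of past segment to reconstruct
        self.encoder = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.recon_head = nn.Linear(hidden_dim, input_dim)  # reconstructs raw inputs
        self.cls_head = nn.Linear(hidden_dim, num_classes)  # main task head

    def forward(self, x, labels):
        # x: (batch, seq_len, input_dim), labels: (batch,)
        enc_out, _ = self.encoder(x)

        # Main supervised loss, computed from the final encoder state.
        logits = self.cls_head(enc_out[:, -1])
        main_loss = F.cross_entropy(logits, labels)

        # Auxiliary loss: pick a random anchor position and ask the decoder,
        # seeded with the encoder state at that anchor, to reconstruct the
        # preceding aux_len inputs (simplified here to an MSE objective).
        seq_len = x.size(1)
        anchor = int(torch.randint(self.aux_len, seq_len, (1,)))
        segment = x[:, anchor - self.aux_len:anchor]
        h0 = enc_out[:, anchor].unsqueeze(0).contiguous()   # (1, batch, hidden_dim)
        dec_out, _ = self.decoder(segment, (h0, torch.zeros_like(h0)))
        aux_loss = F.mse_loss(self.recon_head(dec_out), segment)

        # In practice the gradients would be truncated to short windows around
        # the anchor; here the two terms are simply summed for clarity.
        return main_loss + aux_loss, logits

# Toy usage on a batch of long sequences.
model = AuxLossLSTM(input_dim=8, hidden_dim=64, num_classes=10)
x = torch.randn(4, 1000, 8)
y = torch.randint(0, 10, (4,))
loss, logits = model(x, y)
loss.backward()
```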
