Learning to Skim Text (ACL 2017)


As our models become more accurate, runtime and efficiency grow in importance. For reading comprehension, it is still difficult for an RNN to read a book or a very long document and answer questions about it. Yu et al. use reinforcement learning to train a model that, after reading a few words of the input text, decides how far to jump ahead. With this skimming behavior, the model is up to 6x faster than a standard LSTM at the same accuracy on four different tasks.
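The read-then-jump loop can be sketched in a few lines. This is a minimal illustration, not the paper's LSTM-Jump implementation: the `policy` stub stands in for the RL-trained agent, which in the actual model samples a jump size from a softmax over its recurrent hidden state and is trained with REINFORCE.

```python
import random

def skim(tokens, read_size=3, max_jump=5, policy=None, seed=0):
    """Skim-read a token sequence: after reading `read_size` tokens,
    jump ahead by an amount chosen by the policy. Returns the indices
    of the tokens actually read."""
    rng = random.Random(seed)
    if policy is None:
        # placeholder policy: a trained model would condition the
        # jump distribution on its hidden state, not jump randomly
        policy = lambda window: rng.randint(1, max_jump)
    i, read = 0, []
    while i < len(tokens):
        # read a small window of tokens (these would feed the RNN)
        read.extend(range(i, min(i + read_size, len(tokens))))
        window = tokens[i:i + read_size]
        i += read_size
        # decide how many tokens to skip before the next read
        i += policy(window)
    return read

# With a fixed jump of 2, the model reads 2 tokens, skips 2, repeats:
skim(list(range(10)), read_size=2, policy=lambda w: 2)
# → [0, 1, 4, 5, 8, 9]
```

The speedup comes from never feeding the skipped tokens through the recurrent cell, so the cost scales with the number of tokens read rather than the document length.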