Improving Language Understanding by Generative Pre-Training


This paper by OpenAI follows the same line of work as recent approaches such as ELMo and ULMFiT. Compared to those, the proposed approach uses a more expressive encoder (a Transformer) together with simple task-specific input transformations (e.g. concatenating the premise and hypothesis for entailment). It achieves state-of-the-art results across a diverse range of tasks. A cool aspect is that the model can even perform a form of zero-shot learning using heuristics.
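As a concrete illustration of those task-specific input transformations, here is a minimal sketch of the entailment case. The token strings (`<s>`, `$`, `<e>`) are illustrative assumptions standing in for the learned start, delimiter, and end tokens; the point is that the two text segments are joined into one ordered sequence the pre-trained Transformer can consume without architectural changes.

```python
# Minimal sketch of the task-specific input transformation for entailment.
# The start/delimiter/end strings below are hypothetical placeholders for
# the special tokens the model would learn embeddings for.

def entailment_input(premise: str, hypothesis: str,
                     start: str = "<s>", delim: str = "$",
                     end: str = "<e>") -> str:
    """Concatenate premise and hypothesis into a single token sequence."""
    return f"{start} {premise} {delim} {hypothesis} {end}"

print(entailment_input("A man is playing a guitar.",
                       "A person is making music."))
# -> <s> A man is playing a guitar. $ A person is making music. <e>
```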
