[1706.06551] Grounded Language Learning in a Simulated 3D World

arxiv.org

Researchers from DeepMind propose an agent that learns to follow natural language commands (think: "pick the red object/hat/zebra next to the green object") in a simulated 3D environment. The key to learning is a pair of unsupervised auxiliary objectives: frame prediction and language prediction. Related: a gated-attention model from CMU; other related research from OpenAI, Lazaridou et al. (ICLR, 2017), and others starts instead from multi-agent dialog and shows that natural-language-like communication may or may not develop on its own.
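For intuition, here is a minimal sketch (in PyTorch, not the authors' code) of how auxiliary frame- and language-prediction losses can be added to an agent's main RL objective. All module names, dimensions, and loss weights below are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class AuxiliaryHeads(nn.Module):
    """Two auxiliary heads on top of the agent's shared features:
    next-frame reconstruction and instruction-word prediction.
    Dimensions are placeholders."""
    def __init__(self, feat_dim=256, frame_dim=3 * 84 * 84, vocab_size=1000):
        super().__init__()
        self.frame_head = nn.Linear(feat_dim, frame_dim)  # reconstruct the next frame
        self.word_head = nn.Linear(feat_dim, vocab_size)  # predict a word from the instruction

    def forward(self, features):
        return self.frame_head(features), self.word_head(features)

def total_loss(policy_loss, features, next_frame, word_targets, heads,
               frame_weight=1.0, lang_weight=1.0):
    """Combine the RL loss with the two unsupervised auxiliary losses.
    The weights are hypothetical hyperparameters."""
    frame_pred, word_logits = heads(features)
    frame_loss = nn.functional.mse_loss(frame_pred, next_frame.flatten(1))
    lang_loss = nn.functional.cross_entropy(word_logits, word_targets)
    return policy_loss + frame_weight * frame_loss + lang_weight * lang_loss
```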
