DeepMind Finds Way to Overcome AI’s Forgetfulness Problem

www.bloomberg.com

Jeremy Kahn:

Neural networks, software loosely based on the structure of synapses in the human brain, are considered the best machine learning technique for language translation, image classification and image generation. But these networks suffer from a major flaw that scientists call “catastrophic forgetting.” They exist in a kind of perpetual present: every time the network is given new data, it overwrites what it has previously learned.
In human brains, neuroscientists believe that one way in which memory works is that connections between neurons that seem important for a particular skill become less likely to be rewired. The DeepMind researchers drew on this theory, known as synaptic consolidation, to create a way to allow neural networks to remember. 
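The consolidation idea described above can be sketched as a regularization term: weights that were important for an old task are anchored near their old values, so learning a new task is less likely to overwrite them. Here is a minimal illustrative sketch in NumPy, assuming a per-weight importance array (in DeepMind's paper this is estimated from the Fisher information); all names and values here are hypothetical, not the paper's actual code:

```python
import numpy as np

def consolidation_penalty(theta, theta_old, importance, lam=1000.0):
    """Quadratic penalty added to the new task's loss.

    theta      -- current weights being trained on the new task
    theta_old  -- weights after finishing the old task
    importance -- per-weight importance for the old task (higher = more protected)
    lam        -- hypothetical strength constant for the penalty
    """
    return 0.5 * lam * np.sum(importance * (theta - theta_old) ** 2)

# Toy example: two weights, the first one important for the old task.
theta_old = np.array([1.0, 1.0])
importance = np.array([10.0, 0.1])

# Moving the important weight is penalized far more than moving the other,
# so gradient descent on (new_loss + penalty) tends to leave it alone.
moved_important = consolidation_penalty(np.array([2.0, 1.0]), theta_old, importance)
moved_unimportant = consolidation_penalty(np.array([1.0, 2.0]), theta_old, importance)
```

The effect is that plasticity is reduced selectively: unimportant weights remain free to change for the new task, which is the rough analogue of synapses for a learned skill becoming less likely to be rewired.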

Amazing -- even more amazing:

The researchers tested the algorithm on ten classic Atari games, which the neural network had to learn to play from scratch. DeepMind had previously created an AI agent able to play these games as well as or better than any human player. But that earlier AI could only learn one game at a time. If it was later shown one of the first games it had learned, it had to start learning it all over again.

