Neural networks: Keeping catastrophic forgetting at bay

Categories: News, Research

Published: 1 April 2019

To respond effectively to situations they have never encountered before, tomorrow’s neural networks will have to keep a phenomenon known as “catastrophic forgetting” at bay. Catastrophic forgetting occurs when training on a new set of data “overwrites” previously learned data instead of adding to it.
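The effect is easy to reproduce. The sketch below is purely illustrative (not the researchers’ code): a toy logistic-regression classifier is trained on one synthetic task and then on a conflicting one, and the second round of gradient updates overwrites the weights that encoded the first task.

```python
# Illustrative only: a toy classifier trained sequentially on two
# conflicting synthetic tasks forgets the first one.
import numpy as np

rng = np.random.default_rng(0)

def make_task(c0, c1, n=200):
    # Synthetic 2-D task: class 0 clustered around c0, class 1 around c1.
    X = np.vstack([rng.normal(c0, 0.5, (n, 2)), rng.normal(c1, 0.5, (n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

def train(w, b, X, y, epochs=500, lr=0.1):
    # Logistic regression fitted by plain gradient descent.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

# Task B reverses task A's label geometry, so its gradient updates
# "overwrite" the weights learned for task A.
XA, yA = make_task([-2, 0], [2, 0])
XB, yB = make_task([2, 0], [-2, 0])

w, b = np.zeros(2), 0.0
w, b = train(w, b, XA, yA)
print(f"after task A: acc(A) = {accuracy(w, b, XA, yA):.2f}")  # ~1.00

w, b = train(w, b, XB, yB)
print(f"after task B: acc(A) = {accuracy(w, b, XA, yA):.2f}")  # collapses: forgetting
```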

Researchers from Leti, List, and LPNC* recently developed a novel solution that links knowledge in natural intelligence with knowledge in artificial intelligence. They created a model made up of two neural networks: the first learns a given number of events and then sends a sampling of the knowledge it has acquired to the second, which combines that sampling with the new events and sends the result back to the first. The outcome is a kind of incremental learning very close to the way human memory works.
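The article does not publish the researchers’ implementation, but the loop it describes resembles classic pseudo-rehearsal (Robins, 1995). The sketch below is a minimal, hypothetical rendering of that loop, assuming the “sampling of knowledge” is made of random probe inputs labelled by the first network’s own predictions; the architectures, class counts, and hyperparameters are all illustrative assumptions, not the project’s.

```python
# A hedged sketch of the dual-network loop described above — NOT the
# Leti/List/LPNC implementation. "Sampling of knowledge" is modelled as
# pseudo-rehearsal: probe inputs labelled by the first network itself.
import numpy as np

rng = np.random.default_rng(1)

CENTERS = np.array([[-2, -2], [2, -2], [-2, 2], [2, 2]])  # one blob per class

def make_events(classes, n=200):
    # Synthetic "events": 2-D Gaussian blobs, one per class label.
    X = np.vstack([rng.normal(CENTERS[c], 0.5, (n, 2)) for c in classes])
    return X, np.repeat(classes, n)

class Net:
    """Tiny softmax classifier standing in for either network."""
    def __init__(self, dim=2, n_classes=4):
        self.W = np.zeros((dim, n_classes))
        self.b = np.zeros(n_classes)

    def predict(self, X):
        return (X @ self.W + self.b).argmax(axis=1)

    def fit(self, X, y, epochs=300, lr=0.5):
        Y = np.eye(self.b.size)[y]                      # one-hot targets
        for _ in range(epochs):
            Z = X @ self.W + self.b
            P = np.exp(Z - Z.max(axis=1, keepdims=True))
            P /= P.sum(axis=1, keepdims=True)           # softmax probabilities
            G = (P - Y) / len(y)                        # cross-entropy gradient
            self.W -= lr * X.T @ G
            self.b -= lr * G.sum(axis=0)

    def sample_knowledge(self, n=400, scale=3.0):
        # The "sampling of acquired knowledge": random probes labelled by
        # the network's own predictions, instead of stored training data.
        X = rng.uniform(-scale, scale, (n, 2))
        return X, self.predict(X)

def accuracy(net, X, y):
    return np.mean(net.predict(X) == y)

# The first network learns the initial events (classes 0 and 1).
net1 = Net()
X_old, y_old = make_events([0, 1])
net1.fit(X_old, y_old)

# New events arrive (classes 2 and 3).
X_new, y_new = make_events([2, 3])

# Naive baseline: retraining on the new events alone overwrites what the
# network knew — catastrophic forgetting.
naive = Net(); naive.W, naive.b = net1.W.copy(), net1.b.copy()
naive.fit(X_new, y_new)
print(f"naive: acc on old events = {accuracy(naive, X_old, y_old):.2f}")  # collapses

# Dual-network loop: net1 sends a sampling of its knowledge to net2,
# net2 combines it with the new events, and the result is sent back.
# (Initialising net2 from net1's weights is one possible hand-off.)
net2 = Net(); net2.W, net2.b = net1.W.copy(), net1.b.copy()
X_s, y_s = net1.sample_knowledge()
net2.fit(np.vstack([X_s, X_new]), np.concatenate([y_s, y_new]))
net1.W, net1.b = net2.W.copy(), net2.b.copy()
print(f"dual:  acc on old events = {accuracy(net1, X_old, y_old):.2f}")  # stays high
```

Under these assumptions, the naive network ends up predicting only the new classes on the old events, while the dual-network loop retains them: the sampling of net1’s knowledge stands in for the old data that was never stored.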

A combined research project (an Instituts Carnot project and a PhD dissertation) kicked off in October 2018 to evaluate this dual network’s capabilities.

*Psychology and Neurocognition Laboratory (CNRS, UGA, UdS)


Contact: marina.reyboz@cea.fr
