Overcoming catastrophic inference in neural networks through accurate overlapping representations
Published: 8 October 2019
Catastrophic forgetting is the phenomenon whereby a neural network trained on a first set of items forgets them when it subsequently learns a second set, which rules out incremental learning. This is becoming extremely limiting if we want to develop autonomous systems capable of handling situations that could not have been anticipated during the initial training phase, and it is the next major obstacle for machine learning. To address this question we have chosen a model from the cognitive psychology of human memory developed by B. Ans and S. Rousset because, unlike all other models in the literature, it is the only one that preserves the plasticity of the network.
This model has already been implemented as a formal neural network with TensorFlow for a handwritten digit recognition application. We would like to explore how the random noise can be improved so that it properly characterizes the function learned by the network. We have already found that the gain resulting from a good choice of the starting noise accounts for more than 90% of the total performance of the system, which leads us to believe that the impact of the noise on performance deserves a dedicated study.
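The core mechanism of the Ans and Rousset model is pseudo-rehearsal: random noise is fed through the already-trained network, and the resulting input/output pairs (pseudo-items) are replayed while the second set is learned, so the old function is not overwritten. Below is a minimal NumPy sketch of the pseudo-item generation step; the two-layer stand-in network, its weights, and the noise generator are placeholders for illustration, not the actual model or its TensorFlow implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "trained" network: a fixed random two-layer net standing in
# for the model learned on the first set of items (W1, W2 are placeholders).
W1 = rng.normal(size=(64, 32))
W2 = rng.normal(size=(32, 10))

def forward(x):
    """Forward pass of the stand-in trained network."""
    h = np.tanh(x @ W1)
    return np.tanh(h @ W2)

def make_pseudo_items(n_items, noise_fn):
    """Generate pseudo-items: feed random noise through the trained
    network and keep the (input, output) pairs as rehearsal data."""
    pseudo_x = noise_fn(n_items)
    pseudo_y = forward(pseudo_x)
    return pseudo_x, pseudo_y

# Starting noise drawn from a uniform distribution, one of the
# alternatives whose choice the internship would study.
uniform_noise = lambda n: rng.uniform(-1.0, 1.0, size=(n, 64))

px, py = make_pseudo_items(200, uniform_noise)
print(px.shape, py.shape)  # (200, 64) (200, 10)
```

When the second set of items is learned, these pseudo-pairs would simply be mixed into the new training batches, which is what preserves the network's plasticity compared with freezing weights.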
The internship will take place in three phases:
– State of the art of random noise generation and of a priori information on the distribution of the different classes.
– Analysis of the signal-to-noise ratio of the pseudo-data.
– Proposal and selection of the best alternatives for generating the starting noise.
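To make the three phases concrete, the sketch below compares hypothetical starting-noise generators, including one that exploits a priori information on the class distribution, using a crude signal-to-noise measure. Everything here is an assumption for illustration: the class means are random placeholders for real class statistics, and the SNR definition (reference signal power over mean squared distance to the nearest class mean) is one possible choice, not the internship's prescribed metric:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 64

# Hypothetical a priori information: per-class mean inputs (random
# placeholders standing in for statistics of the real classes).
class_means = rng.normal(size=(10, dim))

def uniform_noise(n):
    return rng.uniform(-1.0, 1.0, size=(n, dim))

def gaussian_noise(n):
    return rng.normal(0.0, 1.0, size=(n, dim))

def class_informed_noise(n, spread=0.3):
    """Noise centred on the a priori class means: pick a class at random,
    then perturb its mean, so pseudo-inputs start closer to the regions
    the network actually learned."""
    labels = rng.integers(0, len(class_means), size=n)
    return class_means[labels] + rng.normal(0.0, spread, size=(n, dim))

def snr_db(samples, reference):
    """Crude signal-to-noise ratio (dB): mean signal power of the
    reference points over the mean squared distance from each sample
    to its nearest reference point."""
    d2 = ((samples[:, None, :] - reference[None, :, :]) ** 2).sum(-1)
    noise_power = d2.min(axis=1).mean()
    signal_power = (reference ** 2).sum(-1).mean()
    return 10.0 * np.log10(signal_power / noise_power)

for name, gen in [("uniform", uniform_noise),
                  ("gaussian", gaussian_noise),
                  ("class-informed", class_informed_noise)]:
    print(f"{name:>14}: {snr_db(gen(500), class_means):+.1f} dB")
```

Under this toy measure, noise built from a priori class information scores markedly higher than distribution-agnostic noise, which is the kind of effect the internship would quantify on the real pseudo-data.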
If you are interested in the internship, please send your CV and motivation letter to email@example.com