Bio-inspired approach for adversarial machine learning
Published: 1 October 2019
The goal of this subject is to analyze a bio-inspired approach based on the so-called Catastrophic Forgetting paradigm, in order to better understand the inherent mechanisms of adversarial attacks and to propose new defense schemes against such integrity flaws in classical Machine Learning models (here, deep neural networks). The topic of this post-doctoral position thus brings together two major critical issues in the field of Machine Learning, and more particularly for deep neural networks:
– Catastrophic Forgetting (or Catastrophic Interference) is a phenomenon referring to the predisposition of a model to forget previously learned information when trained on new information. More and more research efforts are focused on overcoming this critical behavior. Previous work led in the DCOS department in Grenoble has demonstrated the relevance and efficiency of re-injection techniques for tackling the Catastrophic Forgetting issue in deep neural networks.
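The re-injection idea can be illustrated with a minimal sketch (not the DCOS method itself, whose details are not given here): a tiny logistic-regression "network" is trained on a first task, then fine-tuned on a second, conflicting task either alone (the old task is forgotten) or with the old task's samples re-injected into the fine-tuning batch (the old task is retained). All names, datasets, and hyperparameters below are illustrative assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(w, data, steps, lr=0.5):
    # Full-batch gradient descent on the logistic (cross-entropy) loss.
    for _ in range(steps):
        grad = [0.0, 0.0]
        for x, y in data:
            p = sigmoid(w[0] * x[0] + w[1] * x[1])
            for i in range(2):
                grad[i] += (p - y) * x[i]
        for i in range(2):
            w[i] -= lr * grad[i]
    return w

def accuracy(w, data):
    hits = sum((w[0] * x[0] + w[1] * x[1] > 0) == (y == 1) for x, y in data)
    return hits / len(data)

# Toy tasks chosen so that task B conflicts with task A on the first feature:
task_a = [((1.0, 0.0), 1), ((-1.0, 0.0), 0)]   # label = sign of feature 0
task_b = [((-1.0, 1.0), 1), ((1.0, -1.0), 0)]  # pushes the weight on feature 0 down

w = train([0.0, 0.0], task_a, steps=100)                      # learn task A
w_naive = train(list(w), task_b, steps=500)                   # fine-tune on B alone
w_rehearsal = train(list(w), task_b + task_a, steps=500)      # re-inject A samples

print(accuracy(w_naive, task_a))      # task A forgotten after naive fine-tuning
print(accuracy(w_rehearsal, task_a))  # task A retained with re-injection
```

Here re-injection simply means mixing old-task samples back into the new-task training data; richer schemes (e.g. re-injecting pseudo-samples generated by the network itself) follow the same principle.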
– Adversarial Examples refer to an integrity attack in which an adversary tries to tamper with inputs at inference time to fool the decision of a model. This issue is now a popular ML topic with a very dynamic community, but major questions remain open and robust defense strategies are critically lacking.
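The attack can be sketched in a few lines with a gradient-sign perturbation (in the spirit of the fast gradient sign method; the weights, input, and budget below are illustrative assumptions, not from the project): a small perturbation of the input, bounded by a budget eps, is enough to flip the decision of a fixed linear classifier.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(v):
    # Fixed "model": linear score followed by a 0/1 threshold.
    return 1 if sum(wi * vi for wi, vi in zip(w, v)) > 0 else 0

w = (2.0, -1.0, 0.5)   # frozen model weights (illustrative)
x = (0.5, -0.5, 1.0)   # clean input, correctly classified as class 1

# Gradient of the cross-entropy loss w.r.t. the input, for true label 1,
# is (p - 1) * w for a linear-sigmoid model.
p = sigmoid(sum(wi * vi for wi, vi in zip(w, x)))
grad = [(p - 1.0) * wi for wi in w]

# Gradient-sign attack: step each input feature by eps in the sign of
# the loss gradient, so the loss increases as fast as possible.
eps = 0.7
x_adv = [xi + eps * (1.0 if g > 0 else -1.0) for xi, g in zip(x, grad)]

print(predict(x), predict(x_adv))  # the decision flips under a bounded perturbation
```

The perturbation stays within eps per feature, yet the decision flips, which is exactly the integrity flaw described above.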
The innovative idea of the project associated with this post-doctoral position is to use research from Neuroscience focused on "Catastrophic Forgetting" to design and evaluate new defense strategies against adversarial examples. The main goal of the post-doctoral work will be to investigate the use of specific networks associated with re-injection processes, as developed in a human memory model, and to explore how the re-injection procedure used to avoid the Catastrophic Forgetting issue can reduce the number of misclassifications produced by adversarial attacks.