Continual learning for multimodal datasets

Published: 8 February 2020

Like any embedded system, edge AI (eAI) connects with its environment via sensors and possibly actuators. It therefore has to handle a variety of sensor inputs in a multimodal environment. Although several artificial neural networks (ANNs) already exist, each handling one specific modality, building an ANN for multimodality remains a major challenge. In the international state of the art, a spiking neural network classifying images and sound (MNIST dataset + sound) demonstrated a better recognition rate and better robustness. The challenge is thus to find a generic approach able to take state-of-the-art modality-specific ANNs and integrate them into a single multimodal ANN.
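
For illustration only, here is a minimal late-fusion sketch in PyTorch (not the spiking-network approach cited above): two pretrained unimodal encoders are combined through a shared classification head. All names, dimensions, and the toy encoders are assumptions introduced for this example, not part of the thesis subject.

```python
import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    """Wrap two modality-specific encoders and fuse their feature
    vectors for joint classification (simple late fusion)."""
    def __init__(self, image_encoder, audio_encoder, img_dim, aud_dim, n_classes):
        super().__init__()
        self.image_encoder = image_encoder  # pretrained image network, minus its classifier
        self.audio_encoder = audio_encoder  # pretrained audio network, minus its classifier
        self.head = nn.Linear(img_dim + aud_dim, n_classes)

    def forward(self, image, audio):
        f_img = self.image_encoder(image)         # (batch, img_dim)
        f_aud = self.audio_encoder(audio)         # (batch, aud_dim)
        fused = torch.cat([f_img, f_aud], dim=1)  # concatenate modality features
        return self.head(fused)

# Toy stand-ins for pretrained unimodal encoders (hypothetical dimensions:
# 28x28 MNIST images, 40-dimensional audio features).
image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())
audio_encoder = nn.Sequential(nn.Linear(40, 32), nn.ReLU())
model = LateFusionNet(image_encoder, audio_encoder, img_dim=64, aud_dim=32, n_classes=10)

logits = model(torch.randn(8, 1, 28, 28), torch.randn(8, 40))  # shape (8, 10)
```

Concatenation is only the simplest fusion choice; the generic approach the thesis targets would have to decide where and how modality-specific networks are joined.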

Another challenge for eAI is the capability to adapt to a new situation, e.g., a given user or a specific environment. An AI algorithm, even if it has been trained on a large global database, has to adapt. We named this property customisation. The challenge is therefore: how can an ANN trained on a global database be fine-tuned for a specific use-case (e.g., a given user, a specific environment)? Starting from a unimodal bio-inspired model of incremental learning, the second part of the thesis will focus on coupling the multimodal and customisation aspects.
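
As a sketch of one possible customisation scheme (not the bio-inspired incremental-learning model the thesis refers to), assuming the LateFusionNet above and a hypothetical user_loader yielding (image, audio, label) batches: the globally trained encoders are frozen and only the fusion head is fine-tuned on user-specific data.

```python
import torch

def customise(model, user_loader, lr=1e-3, epochs=5):
    """Adapt a globally trained model to one user's data by freezing
    the encoders and fine-tuning only the fusion head."""
    for encoder in (model.image_encoder, model.audio_encoder):
        for p in encoder.parameters():
            p.requires_grad = False  # keep the globally learned features intact
    optimiser = torch.optim.Adam(model.head.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for image, audio, label in user_loader:  # user-specific batches
            optimiser.zero_grad()
            loss = loss_fn(model(image, audio), label)
            loss.backward()
            optimiser.step()
    return model
```

Freezing the encoders keeps the knowledge acquired on the global database while adapting the decision layer, which is one simple way to avoid forgetting during customisation.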
