Optimized coding techniques for the design of deep neural network hardware accelerators

Published: 8 February 2020

Approaches based on artificial neural networks have significantly improved performance in many areas, such as classification and segmentation. The effectiveness of this approach is well established, and the number of prospective applications keeps growing.

However, due to their computational complexity and memory requirements, these networks are difficult to embed on low-power platforms.

When porting these networks to embedded platforms, a large variety of hardware constraints has to be taken into account. To overcome these difficulties, several research works have produced techniques that reduce the memory and computation footprint of artificial neural networks: reduction of the number of parameters, low-precision quantization, etc. This thesis aims to push the optimization further by working on the data coding.
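To make the footprint-reduction idea concrete, here is a minimal sketch of symmetric 8-bit weight quantization, one of the standard techniques alluded to above. The function names and the per-tensor scaling scheme are illustrative assumptions, not a description of any method from the thesis.

```python
def quantize_int8(weights):
    """Map float weights to int8 codes sharing one scale factor.

    The largest-magnitude weight is mapped to 127, so every weight
    is stored in a single signed byte instead of a 32-bit float.
    """
    scale = max(abs(w) for w in weights) / 127.0
    codes = [max(-128, min(127, round(w / scale))) for w in weights]
    return codes, scale

def dequantize_int8(codes, scale):
    """Recover approximate float weights from the int8 codes."""
    return [c * scale for c in codes]

weights = [0.52, -1.27, 0.003, 0.9]
codes, scale = quantize_int8(weights)
approx = dequantize_int8(codes, scale)
```

Storing `codes` plus one scale cuts the memory for this tensor by roughly 4x versus 32-bit floats, at the cost of a small reconstruction error bounded by half the scale step.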

In this thesis, we propose to explore a new method that works directly on the information coding of the neural network. This coding method aims to unify two existing coding models, the vector model and the spike model, while keeping the hardware implementation in perspective.
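The two coding models can be illustrated with a small sketch: a vector (frame-based) activation on one side, and its rate-coded spike-train equivalent on the other. The deterministic integrate-and-fire encoder below is a generic textbook scheme chosen for illustration; it is an assumption, not the unified coding proposed in the thesis.

```python
def rate_code(activations, timesteps):
    """Encode each activation in [0, 1] as a binary spike train whose
    spike count over `timesteps` approximates the activation value."""
    trains = []
    for a in activations:
        accumulator, train = 0.0, []
        for _ in range(timesteps):
            accumulator += a          # integrate the analog input
            if accumulator >= 1.0:    # threshold crossed: emit a spike
                train.append(1)
                accumulator -= 1.0
            else:
                train.append(0)
        trains.append(train)
    return trains

def decode_rate(trains):
    """Recover the vector value as the firing rate of each train."""
    return [sum(t) / len(t) for t in trains]

vector = [0.75, 0.2, 0.0, 1.0]   # vector-model representation
spikes = rate_code(vector, timesteps=20)
approx = decode_rate(spikes)     # back to a vector of rates
```

The trade-off the two models expose is visible here: the vector form needs one multi-bit word per value, while the spike form needs only 1-bit events but spreads the information over many timesteps.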
