Distributed Memory-centric Computing architecture for AI applications, using advanced 3D and NVM Technologies

Published: 12 March 2020

With the ongoing AI revolution, AI algorithms are becoming increasingly demanding in terms of computing and memory requirements, while it is envisioned that new devices implementing these AI features should be available at the “edge”, meaning close to the final user (portable devices, automotive, IoT, etc.), and no longer only in the cloud. This implies very strong requirements in terms of memory capacity to enable learning at the edge within an acceptable energy budget. The PhD consists of proposing and exploring new distributed memory-centric computing architectures for AI applications, using advanced technologies to overcome the current limitations (memory capacity, memory bandwidth, energy per inference, and learning capability). Recent Non-Volatile Memory (NVM) technologies and 3D integration technologies offer dense memory integration while bringing the memory closer to the computing cores.

The architecture challenge consists in defining the adequate system partitioning, the distributed communication mechanisms, and the memory/computing ratio requirements, in order to ultimately obtain the targeted distributed memory-centric computing architecture. The work will consist of architecture and application exploration through system modeling, and may lead to a test chip to validate the proposed concept.
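To give a concrete flavor of what such system modeling could look like, the sketch below shows a possible first-order analytical model for sweeping the partitioning of one neural-network layer over a set of compute tiles, each with its own local (e.g. 3D-stacked NVM) memory. All names, parameter values and workload figures are illustrative assumptions introduced here for the example, not figures from the project; a real exploration would refine or replace this model.

```python
# Minimal first-order sketch of a system model for exploring partitioning and
# memory/compute ratios. All parameter values and workload figures below are
# illustrative assumptions, not project data.

from dataclasses import dataclass

@dataclass
class Tile:
    """One compute tile with its local (e.g. 3D-stacked NVM) memory."""
    macs_per_s: float        # peak compute throughput (MAC/s), assumed
    local_bw: float          # bandwidth to local memory (B/s), assumed
    local_capacity: float    # local memory capacity (B), assumed
    e_mac: float             # energy per MAC (J), assumed
    e_local_byte: float      # energy per local-memory byte accessed (J), assumed

def layer_cost(tiles: int, tile: Tile, noc_bw: float, e_noc_byte: float,
               macs: float, weight_bytes: float, act_bytes: float):
    """Estimate latency (s) and energy (J) of one layer partitioned over `tiles`
    tiles, assuming weights are spread across local memories and activations
    are exchanged over a shared interconnect (a deliberately crude model)."""
    per_tile_weights = weight_bytes / tiles
    if per_tile_weights > tile.local_capacity:
        return None  # this partition does not fit in the distributed local memory
    t_compute = macs / (tiles * tile.macs_per_s)
    t_local = per_tile_weights / tile.local_bw   # local accesses, per tile
    t_noc = act_bytes / noc_bw                   # activation exchange, shared link
    latency = max(t_compute, t_local) + t_noc
    energy = (macs * tile.e_mac
              + weight_bytes * tile.e_local_byte
              + act_bytes * e_noc_byte)
    return latency, energy

if __name__ == "__main__":
    tile = Tile(macs_per_s=1e12, local_bw=100e9, local_capacity=32e6,
                e_mac=1e-12, e_local_byte=5e-12)
    # Sweep the number of tiles for a hypothetical 100 M-MAC, 50 MB-weight layer.
    for n in (1, 2, 4, 8, 16):
        cost = layer_cost(n, tile, noc_bw=50e9, e_noc_byte=20e-12,
                          macs=100e6, weight_bytes=50e6, act_bytes=2e6)
        print(n, "tiles:", "does not fit" if cost is None else
              f"latency={cost[0]*1e6:.1f} us, energy={cost[1]*1e3:.3f} mJ")
```

Even such a crude model already exposes the trade-off the PhD targets: a single large memory may not hold the weights at all, while over-partitioning shifts the bottleneck from compute to communication energy and latency.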

The PhD will take place in an active collaboration between Stanford University (CA, USA) and CEA (Grenoble, France).
