Massively parallel in-memory computing architecture

Published: 15 July 2019

Systems-on-chip (SoCs) for embedded computing have always been constrained by memory bandwidth. With the rise of data-intensive applications, the costs (latency, energy) of moving data to and from memory for computation are increasing significantly.

A new computing paradigm, in-memory computing (IMC), has been proposed: the idea is to process data where they are stored in order to save energy and latency. The traditional separation between computing and storage units is vanishing, leading to fundamentally new architectures.
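To make the paradigm concrete, here is an illustrative sketch (not the lab's actual design) of a common IMC primitive in SRAM-based designs: activating two word lines at once lets the bit lines sense a bitwise combination of two stored rows, so the result is produced inside the array rather than shipped out to a separate processing unit. The class and method names are assumptions for illustration only.

```python
class IMCArray:
    """A memory array that can combine two stored rows in place,
    mimicking the behaviour of an SRAM-based IMC macro."""

    def __init__(self, rows, width=8):
        self.width = width
        self.rows = [0] * rows  # each row holds one `width`-bit word

    def write(self, row, value):
        self.rows[row] = value & ((1 << self.width) - 1)

    def read(self, row):
        return self.rows[row]

    def op_and(self, row_a, row_b, row_dst):
        # Analogous to activating two word lines simultaneously:
        # each bit line senses the AND of the two cells in its column.
        self.rows[row_dst] = self.rows[row_a] & self.rows[row_b]

    def op_or(self, row_a, row_b, row_dst):
        self.rows[row_dst] = self.rows[row_a] | self.rows[row_b]

mem = IMCArray(rows=4)
mem.write(0, 0b1100)
mem.write(1, 0b1010)
mem.op_and(0, 1, 2)  # computed "inside" the array: no data movement to a CPU
mem.op_or(0, 1, 3)
```

The point of the sketch is the absence of a read-modify-write round trip through a separate compute unit: the operands never leave the array.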

The objective of this thesis is to define a massively parallel in-memory computing architecture that interconnects a matrix of IMC-based computing tiles, supporting both parallel execution (multiprocessor) and parallel data access (multiple memory banks).
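The tile-matrix idea can be sketched as follows: a grid of computing tiles, each paired with its own memory bank, so every tile executes on its local data concurrently. This is a hypothetical model for illustration only; the class names (`Tile`, `TileMatrix`) and the reduction workload are assumptions, not the architecture under study.

```python
from concurrent.futures import ThreadPoolExecutor

class Tile:
    """One computing tile with its own local memory bank."""

    def __init__(self, bank):
        self.bank = bank  # private bank: no traffic on a shared bus

    def compute(self):
        # Each tile reduces its own bank locally (here: a sum).
        return sum(self.bank)

class TileMatrix:
    """A matrix of tiles executed in parallel, one per bank."""

    def __init__(self, banks):
        self.tiles = [Tile(b) for b in banks]

    def run(self):
        # All tiles run concurrently, each touching only its local bank.
        with ThreadPoolExecutor(max_workers=len(self.tiles)) as pool:
            return list(pool.map(lambda t: t.compute(), self.tiles))

banks = [[1, 2], [3, 4], [5, 6], [7, 8]]  # e.g. a 2x2 tile matrix, flattened
partials = TileMatrix(banks).run()        # per-tile partial results
total = sum(partials)                     # cheap global combine step
```

The design choice this models: because data access is spread across independent banks, the expensive step (touching the data) scales with the number of tiles, leaving only a small combine step to perform globally.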

The thesis will build on ongoing work in the lab on SRAM-based IMC and will extend it to higher-density memory types.

The subject will require an exploratory approach, modeling the proposed architecture in relation to the targeted applications (big data, artificial intelligence). Design and silicon implementation of innovative blocks of the architecture will validate the proposed concepts.
