SPAD Imager for HDR ToF using multimodal data fusion
Published: 10 January 2019
Depth sensors are currently a fast-growing research topic. In the fields of autonomous vehicles, portable electronic devices and the Internet of Things, new technology enablers now aim to provide convenient 3D image data for future innovative end-user applications. There is a great diversity of 3D sensor types, using either passive imaging (depth from defocus, stereovision, phase pixels…) or active imaging (ultrasound, structured light, Time-of-Flight…). Each of these systems addresses a specification in terms of depth dynamic range (accuracy of the measurement versus maximum distance).

In this thesis, we will study the specific case of Single Photon Avalanche Diodes (SPADs). Recent scientific results on this electro-photonic component demonstrate its relevance for Time-of-Flight (ToF) imaging, especially when integrated in a 3D-stacked design flow with a pixel pitch on the order of ten micrometers. However, the nature of the data gathered by this type of component requires significant signal processing within the sensor to extract relevant information. This thesis aims to revise traditional approaches based on histogram processing by extracting statistical features directly from the raw data.

Depending on the background and skills of the PhD candidate, two research axes will be investigated: first, on the hardware side, possible modifications of the SPAD-based sensor architecture to provide “augmented” multimodal information; second, on the theoretical and algorithmic side, data fusion methods to improve the final reconstruction of depth maps from the sensed data.
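To make the contrast between the two processing styles concrete, the following sketch compares the classical TCSPC-style pipeline (accumulate photon timestamps into a histogram, locate the peak bin) with a histogram-free estimate computed directly from the raw timestamps. This is only an illustrative toy model, not the method developed in the thesis: the synthetic scene (a single 5 m target, 200 ps timing jitter, uniform background counts) and all function names are assumptions for the example.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s


def tof_to_depth(t_s):
    """Convert a round-trip time of flight (seconds) to depth: d = c * t / 2."""
    return C * t_s / 2.0


def depth_from_histogram(timestamps_s, bin_width_s=100e-12, t_max_s=100e-9):
    """Classical approach: bin photon arrival times into a TCSPC-style
    histogram and take the centre of the peak bin as the return pulse."""
    bins = np.arange(0.0, t_max_s + bin_width_s, bin_width_s)
    counts, edges = np.histogram(timestamps_s, bins=bins)
    peak = np.argmax(counts)
    t_peak = 0.5 * (edges[peak] + edges[peak + 1])
    return tof_to_depth(t_peak)


def depth_from_statistics(timestamps_s):
    """Histogram-free sketch: estimate the return time directly from the
    raw timestamps (a simple median here, which tolerates a moderate
    fraction of uniformly distributed background photons)."""
    return tof_to_depth(np.median(timestamps_s))


# Synthetic SPAD data: signal photons jittered around the true return
# time plus uniform background counts (hypothetical numbers).
rng = np.random.default_rng(0)
true_depth_m = 5.0
t_true = 2.0 * true_depth_m / C                 # ~33.4 ns round trip
signal = rng.normal(t_true, 200e-12, 900)       # 200 ps timing jitter
background = rng.uniform(0.0, 100e-9, 100)      # ambient / dark counts
timestamps = np.concatenate([signal, background])

print(depth_from_histogram(timestamps))   # close to 5.0 m
print(depth_from_statistics(timestamps))  # close to 5.0 m
```

The direct estimator avoids storing a full per-pixel histogram, which is one motivation for on-sensor statistical feature extraction; a real design would have to cope with multiple returns and much higher background levels than this toy scene.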