Design of neural networks adapted to FHE and MPC
Published: 8 February 2020
In this thesis, the student will investigate the variety of scenarios in which homomorphic encryption provides a meaningful countermeasure to the confidentiality threats facing neural network systems. To do so, he or she will leverage the many degrees of freedom in both neural network design and homomorphic encryption scheme design to propose specialized networks and FHE schemes that work efficiently together.
The candidate will attempt to push this application/FHE co-design strategy to its limits in order, notably, to evaluate deep neural networks over encrypted data (input/output privacy) and to evaluate encrypted deep networks over clear or encrypted inputs (model/output privacy, with optional input privacy). This will require defining an efficient FHE neuron and building privacy by design into all stages of its lifecycle: from the unitary encrypted-domain execution of the neuron itself, to the input-private and/or model-private evaluation of networks of such neurons, and up to the training of those networks (over clear data).
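To fix ideas on what "evaluating a neuron over encrypted data" means, here is a toy sketch (our own illustration, not part of the thesis description) using a Paillier-style additively homomorphic scheme: the linear part of a neuron is computed over encrypted inputs with clear weights. All function names and the tiny demo parameters are ours; the thesis targets full FHE schemes, which, unlike Paillier, can also evaluate the non-linear activation in the encrypted domain.

```python
# Toy Paillier-style additively homomorphic encryption (demo parameters,
# NOT secure) used to compute Enc(sum(w_i * x_i) + bias) from Enc(x_i).
import math
import random

def keygen(p=10007, q=10009):          # tiny demo primes, not secure
    n = p * q
    n2 = n * n
    lam = math.lcm(p - 1, q - 1)
    # mu = (L(g^lam mod n^2))^-1 mod n, with g = n + 1 and L(x) = (x-1)//n
    mu = pow((pow(n + 1, lam, n2) - 1) // n, -1, n)
    return (n, n2), (lam, mu, n)

def encrypt(pk, m):
    n, n2 = pk
    r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    lam, mu, n = sk
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

def add(pk, c1, c2):                   # Enc(m1) * Enc(m2) = Enc(m1 + m2)
    return (c1 * c2) % pk[1]

def scal_mul(pk, c, k):                # Enc(m)^k = Enc(k * m)
    return pow(c, k, pk[1])

def neuron_linear(pk, enc_xs, weights, bias):
    """Encrypted weighted sum of encrypted inputs with clear weights."""
    acc = encrypt(pk, bias)
    for c, w in zip(enc_xs, weights):
        acc = add(pk, acc, scal_mul(pk, c, w))
    return acc
```

For instance, with inputs 3 and 5, weights 2 and 4, and bias 1, decrypting `neuron_linear` yields 2*3 + 4*5 + 1 = 27, without the evaluator ever seeing the inputs in the clear. An "FHE neuron" as envisioned above would extend this to the activation, typically by replacing it with a low-degree polynomial that the FHE scheme can evaluate.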
In addition, he or she will investigate the use of MPC for the same evaluations. Ideally, the candidate will identify the situations in which FHE or MPC is the more suitable tool for ensuring data confidentiality. Synergies between FHE and MPC will also be studied.
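The MPC alternative can be sketched just as briefly. Below is a minimal illustration (again ours, with hypothetical names) of additive secret sharing over a prime field, a basic building block of many MPC protocols: each input is split into random shares, one per party, so that no single party learns anything, yet linear functions can be computed share-wise with no interaction.

```python
# Additive secret sharing: split each value into n random shares summing
# to it mod P; linear layers are then computed locally on the shares.
import random

P = 2**61 - 1  # Mersenne prime used as the field modulus

def share(secret, n_parties=3):
    """Split `secret` into n additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

def linear_layer_shares(x_shares, weights, bias):
    """Each party p locally computes sum(w_i * share_p(x_i));
    one designated party adds the public bias. x_shares[i] is the
    list of shares of input x_i."""
    n_parties = len(x_shares[0])
    out = []
    for p in range(n_parties):
        acc = sum(w * xs[p] for w, xs in zip(weights, x_shares)) % P
        if p == 0:
            acc = (acc + bias) % P
        out.append(acc)
    return out
```

The contrast with the FHE route is already visible in this sketch: linear operations are essentially free for MPC but require ciphertext arithmetic under FHE, whereas multiplications between secret values (and hence non-linear activations) force interaction rounds in MPC, while FHE handles them non-interactively at a higher computational cost. This is the kind of trade-off the candidate is expected to map out.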
Furthermore, implementing proofs of concept will provide clear experimental evidence either of the practicality of marrying a neural network technique with a specific homomorphic encryption or MPC scheme, or of the remaining gap to be closed before networks of practically relevant size and complexity can be evaluated.