title: PriPHiT: Privacy-Preserving Hierarchical Training of Deep Neural Networks

publish date:

2024-08-09

authors:

Yamin Sepehri et al.

paper id:

2408.05092v1

download: http://arxiv.org/abs/2408.05092v1

abstract:

The training phase of deep neural networks requires substantial resources and as such is often performed on cloud servers. However, this raises privacy concerns when the training dataset contains sensitive content, e.g., face images. In this work, we propose a method to perform the training phase of a deep learning model on both an edge device and a cloud server that prevents sensitive content from being transmitted to the cloud while retaining the desired information. The proposed privacy-preserving method uses adversarial early exits to suppress the sensitive content at the edge and transmits the task-relevant information to the cloud. This approach incorporates noise addition during the training phase to provide a differential privacy guarantee. We extensively test our method on different facial datasets with diverse face attributes using various deep learning architectures, showcasing its outstanding performance. We also demonstrate the effectiveness of privacy preservation through successful defenses against different white-box and deep reconstruction attacks.
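The abstract describes a hierarchical (edge/cloud) training setup: an adversarial early exit on the edge suppresses a sensitive attribute, noise is added to the transmitted features, and the cloud learns the target task. The snippet below is a minimal PyTorch sketch of that general idea only; the architecture, noise level, optimizer split, and the weighting `alpha` are illustrative assumptions and not the authors' implementation or hyperparameters.

```python
import torch
import torch.nn as nn

class EdgeModel(nn.Module):
    """Shallow layers kept on the edge device, plus an adversarial early exit (illustrative)."""
    def __init__(self, feat_dim=64, noise_std=0.1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Early-exit head that tries to recover the *sensitive* attribute from the features.
        self.sensitive_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_dim, 2)
        )
        self.noise_std = noise_std

    def forward(self, x):
        z = self.features(x)
        # Additive Gaussian noise on the features sent to the cloud (standing in for the
        # paper's noise addition; the calibration here is arbitrary, not a DP analysis).
        z_noisy = z + self.noise_std * torch.randn_like(z)
        return z, z_noisy


class CloudModel(nn.Module):
    """Deeper layers on the cloud server; only ever sees the noisy features."""
    def __init__(self, feat_dim=64, num_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_dim, num_classes)
        )

    def forward(self, z):
        return self.head(z)


def train_step(edge, cloud, x, y_task, y_sens, opt_feat, opt_cloud, opt_adv, alpha=1.0):
    ce = nn.CrossEntropyLoss()

    # (1) Strengthen the adversarial early exit: predict the sensitive attribute from the
    #     edge features (detached, so only the head is updated here).
    z, _ = edge(x)
    adv_loss = ce(edge.sensitive_head(z.detach()), y_sens)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # (2) Update edge backbone and cloud head: keep task accuracy high while
    #     maximizing the adversary's loss, i.e. suppressing the sensitive content.
    opt_feat.zero_grad(); opt_cloud.zero_grad()
    z, z_noisy = edge(x)
    task_loss = ce(cloud(z_noisy), y_task)
    suppress_loss = -ce(edge.sensitive_head(z), y_sens)
    (task_loss + alpha * suppress_loss).backward()
    opt_feat.step(); opt_cloud.step()
    return task_loss.item(), adv_loss.item()


# Toy usage with dummy data (hypothetical shapes and labels).
edge, cloud = EdgeModel(), CloudModel()
opt_feat  = torch.optim.Adam(edge.features.parameters(), lr=1e-3)        # edge backbone only
opt_adv   = torch.optim.Adam(edge.sensitive_head.parameters(), lr=1e-3)  # adversarial exit only
opt_cloud = torch.optim.Adam(cloud.parameters(), lr=1e-3)
x = torch.randn(8, 3, 64, 64)                                  # a dummy batch of images
y_task, y_sens = torch.randint(0, 2, (8,)), torch.randint(0, 2, (8,))
print(train_step(edge, cloud, x, y_task, y_sens, opt_feat, opt_cloud, opt_adv))
```

The alternating update mirrors standard adversarial training: the early exit is first trained as an attacker on the sensitive attribute, then the edge features are pushed to defeat it while still serving the cloud-side task on the noisy features.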

QA:

coming soon

Edited and compiled by: wanghaisheng. Last updated: August 12, 2024.