

Poster in Workshop: Workshop on Distributed and Private Machine Learning

AsymmetricML: An Asymmetric Decomposition Framework for Privacy-Preserving DNN Training and Inference

Yue Niu · Salman Avestimehr


Abstract: Leveraging specialized parallel hardware, such as GPUs, for DNN training and inference significantly reduces computation time. However, data on these platforms is visible to any party, which in certain circumstances raises serious concerns about data misuse. Trusted execution environments (TEEs) protect data privacy by performing training and inference in a secure environment, but at the cost of severe performance degradation. To bridge the gap between privacy and computing performance, we propose an \emph{asymmetric} model-splitting framework, AsymmetricML, that (1) exploits the computing power of specialized parallel hardware and (2) preserves data privacy in TEEs during DNN training and inference. AsymmetricML asymmetrically splits a DNN model into two parts: the first part retains most of the sensitive data information but requires little computation, while most of the computation is performed in the second part. Evaluations on typical models (VGG, ResNet) show that the framework delivers a $5.9\times$ speedup in model inference and a $5.4\times$ speedup in model training compared with TEE-only execution.
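The asymmetric split described above can be illustrated with a minimal sketch. This is not the paper's implementation: the class names (`TEEPart`, `GPUPart`), the layer shapes, and the way the model is partitioned are all illustrative assumptions; the sketch only shows the general pattern of running a small, privacy-sensitive front part in a trusted environment and offloading the compute-heavy remainder to untrusted fast hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class TEEPart:
    """Hypothetical first part of the split model: it sees the raw
    (sensitive) input, so it would run inside the secure enclave.
    It is kept deliberately small, since TEE computation is slow."""
    def __init__(self, d_in, d_hidden):
        self.W = rng.standard_normal((d_in, d_hidden)) * 0.1

    def forward(self, x):
        # Output is an intermediate representation, not the raw data.
        return relu(x @ self.W)

class GPUPart:
    """Hypothetical second part: carries most of the FLOPs and only
    ever sees intermediate features, so it can run on untrusted
    parallel hardware such as a GPU."""
    def __init__(self, d_hidden, d_out, depth=4):
        self.Ws = [rng.standard_normal((d_hidden, d_hidden)) * 0.1
                   for _ in range(depth - 1)]
        self.W_out = rng.standard_normal((d_hidden, d_out)) * 0.1

    def forward(self, h):
        for W in self.Ws:
            h = relu(h @ W)
        return h @ self.W_out

def split_inference(x, tee, gpu):
    h = tee.forward(x)     # small, privacy-sensitive step in the TEE
    return gpu.forward(h)  # bulk of the computation on fast hardware

x = rng.standard_normal((2, 8))            # batch of 2 "sensitive" inputs
tee, gpu = TEEPart(8, 16), GPUPart(16, 4)
y = split_inference(x, tee, gpu)
print(y.shape)  # (2, 4)
```

In a real deployment the `TEEPart` computation would execute inside an enclave (e.g. Intel SGX) and only the intermediate activations would leave it; how the paper chooses the split point and what information the intermediates leak are exactly the questions the full framework addresses.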
