Faculty Mentor Information

Dr. Hao Chen, Boise State University

Additional Funding Sources

Sponsored by National Science Foundation Award #2244596.

Presentation Date

7-2025

Abstract

Federated Learning (FL) is an emerging machine learning approach that trains models collaboratively on users' data in distributed environments. In FL, a central server holds a global model that is sent out to clients. The clients train the model on their private data and send back gradients or parameter differences for aggregation. FL introduces its own security and privacy issues: attackers can poison the model to degrade performance, or attempt to steal users' private data through inference attacks. Past research on gradient inversion, a form of inference attack, shows that the gradients shared in FL can be exploited: attackers can reconstruct training data, breaking FL's promised privacy. In this work, we investigate how the privacy leakage of FL models with different hyperparameters is affected by the Gradient Similarity (GS) attack, an inverted gradient attack that uses cosine similarity to optimize the attack process. We study how single-image recovery, batch-image recovery, and multiple gradient descent steps affect the effectiveness of the attack. We provide a thorough analysis using different metrics to quantify the privacy leakage of the reconstructed inputs, and we include our code and exact experimental results in this paper for reproducibility.
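To make the attack concrete, the following is a minimal PyTorch sketch of a cosine-similarity gradient-matching (gradient inversion) attack of the kind described above. The function name gradient_similarity_attack, the assumption that the labels are known, and all hyperparameter values (steps, lr, tv_weight) are illustrative only and are not the exact configuration used in this work.

import torch

def gradient_similarity_attack(model, loss_fn, true_grads, labels,
                               image_shape, steps=2000, lr=0.1, tv_weight=1e-4):
    """Reconstruct inputs from shared gradients by cosine-similarity matching (sketch)."""
    # Attacker's guess of the client's input, e.g. shape (B, C, H, W).
    dummy = torch.randn(image_shape, requires_grad=True)
    optimizer = torch.optim.Adam([dummy], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(model(dummy), labels)  # labels assumed known here
        dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)

        # Cosine-similarity objective over all parameter gradients jointly:
        # minimize 1 - <g_dummy, g_true> / (||g_dummy|| * ||g_true||).
        dot = sum((dg * tg).sum() for dg, tg in zip(dummy_grads, true_grads))
        norm_d = torch.sqrt(sum(dg.pow(2).sum() for dg in dummy_grads))
        norm_t = torch.sqrt(sum(tg.pow(2).sum() for tg in true_grads))
        rec_loss = 1.0 - dot / (norm_d * norm_t)

        # Total-variation prior encourages smooth, natural-looking reconstructions.
        tv = (dummy[..., :, 1:] - dummy[..., :, :-1]).abs().mean() \
           + (dummy[..., 1:, :] - dummy[..., :-1, :]).abs().mean()

        (rec_loss + tv_weight * tv).backward()
        optimizer.step()

    return dummy.detach()

In practice, labels are often recovered separately (for example, from the last-layer gradient) rather than assumed known, and the total-variation weight is tuned per dataset; this sketch only illustrates the cosine-similarity matching step.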


Inverted Gradient Attacks: Testing Privacy Attacks and Data Leakage in Federated Learning
