Attacking Federated Learning: A Variety of Novel Poisoning Attacks
Faculty Mentor Information
Dr. Hao Chen (Mentor), Boise State University
Presentation Date
July 2024
Abstract
Significant research has examined attacks against Federated Learning (FL), a machine learning setting in which many clients jointly train a shared model. Poisoning attacks, in which a malicious participant submits falsified training data to further an objective, have proven effective against some FL models. Our research focuses on designing stronger, more destructive poisoning attacks in order to expose weaknesses in current FL systems. To this end, we developed several new poisoning attacks with two aims: degrading the model's overall accuracy so that it becomes less useful, and implanting a hidden backdoor task that gives a malicious user some control over the model's behavior. We implemented these attacks by scrambling the images in the training dataset in several specific ways. We hope this research furthers ongoing work to improve the security of FL.
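To make the two attack goals concrete, the sketch below shows one generic way image-scrambling poisoning can be implemented. It is a minimal illustration, not the specific algorithms developed in this work: the function names (scramble_blocks, poison_dataset), the block-permutation scheme, and the parameters (poison_frac, target_label) are all hypothetical choices made for this example. In the untargeted mode, scrambled images keep their true labels and simply degrade accuracy; in the backdoor mode, scrambled images are relabeled so the model learns to map the scrambling pattern to an attacker-chosen label.

```python
import numpy as np

def scramble_blocks(image, block_size=4, rng=None):
    """Permute fixed-size pixel blocks of an (H, W, C) image array.

    One of many possible 'scrambling' perturbations; the specific
    schemes used in the study are not reproduced here.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    bh, bw = h // block_size, w // block_size
    # Cut the image into a grid of blocks, shuffle them, and reassemble.
    blocks = [
        image[i*block_size:(i+1)*block_size, j*block_size:(j+1)*block_size]
        for i in range(bh) for j in range(bw)
    ]
    order = rng.permutation(len(blocks))
    out = image.copy()
    for idx, b in enumerate(order):
        i, j = divmod(idx, bw)
        out[i*block_size:(i+1)*block_size,
            j*block_size:(j+1)*block_size] = blocks[b]
    return out

def poison_dataset(images, labels, mode, poison_frac=0.2,
                   target_label=0, seed=0):
    """Return a poisoned copy of (images, labels).

    mode='untargeted': scramble a fraction of images to degrade accuracy.
    mode='backdoor':   scramble AND relabel them, so the model learns a
                       hidden task mapping scrambled inputs to target_label.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_frac * len(images))
    victims = rng.choice(len(images), size=n_poison, replace=False)
    for i in victims:
        images[i] = scramble_blocks(images[i], rng=rng)
        if mode == "backdoor":
            labels[i] = target_label  # hypothetical attacker-chosen label
    return images, labels
```

In an FL simulation, a malicious client would apply something like poison_dataset to its local shard before training and then submit the resulting model update as usual; the server, aggregating updates from all clients, absorbs the poisoned contribution along with the honest ones.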