Securing NextG Systems against Poisoning Attacks on Federated Learning: A Game-Theoretic Solution
MILCOM 2023 - 2023 IEEE Military Communications Conference (2023)
Abstract
This paper studies poisoning attack and defense interactions in a federated learning (FL) system, specifically in the context of wireless signal classification using deep learning for next-generation (NextG) communications. FL collectively trains a global model without requiring clients to exchange their data samples. By leveraging geographically dispersed clients, the trained global model can be used for incumbent user identification, facilitating spectrum sharing. However, in this distributed learning system, malicious clients may poison the training data and manipulate the global model through falsified local model exchanges. To address this challenge, a proactive defense mechanism is employed in this paper to make informed decisions regarding the admission or rejection of clients participating in the FL system. The attack-defense interactions are then modeled as a game centered on the underlying admission and poisoning decisions. First, performance bounds are established, encompassing the best and worst strategies for attackers and defenders. Then, the attack and defense utilities are characterized at the Nash equilibrium, where no player can unilaterally improve its performance given the fixed strategies of others. The results offer insights into novel operational modes that safeguard FL systems against poisoning attacks by quantifying the performance of both attacks and defenses in the context of NextG communications.
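To make the game-theoretic setup concrete, the sketch below computes a Nash equilibrium for a minimal two-player formulation of the admission/poisoning interaction: the defender chooses whether to admit a client into FL training, and the attacker chooses whether to poison its local model update. Note that the 2x2 action sets, the zero-sum coupling, and all payoff values here are illustrative assumptions for exposition, not the paper's actual utility model.

```python
import numpy as np

# A minimal sketch of the admission/poisoning game described in the abstract.
# The 2x2 structure, the zero-sum assumption, and every payoff value below
# are illustrative assumptions, not the paper's actual model.
# Defender (row player): 0 = admit the client, 1 = reject it.
# Attacker (column player): 0 = poison its local update, 1 = send a clean one.
D = np.array([[-0.4,  0.3],   # admit:  a poisoned update hurts, a clean one helps
              [-0.1, -0.2]])  # reject: the client's data is lost either way
A = -D                        # zero-sum: the attacker gains what the defender loses

def pure_nash(D, A):
    """Return all pure-strategy profiles (i, j) where neither player
    can gain by unilaterally switching its action."""
    return [(i, j) for i in range(2) for j in range(2)
            if D[i, j] >= D[1 - i, j] and A[i, j] >= A[i, 1 - j]]

def mixed_nash_2x2(D, A):
    """Mixed-strategy equilibrium of a 2x2 bimatrix game via the
    standard indifference conditions."""
    # p = Pr[admit], chosen so the attacker is indifferent between columns.
    p = (A[1, 1] - A[1, 0]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
    # q = Pr[poison], chosen so the defender is indifferent between rows.
    q = (D[1, 1] - D[0, 1]) / (D[0, 0] - D[0, 1] - D[1, 0] + D[1, 1])
    return p, q

print("pure-strategy equilibria:", pure_nash(D, A))  # empty for these payoffs
p, q = mixed_nash_2x2(D, A)
print(f"mixed NE: defender admits w.p. {p:.3f}, attacker poisons w.p. {q:.3f}")
value = (p * q * D[0, 0] + p * (1 - q) * D[0, 1]
         + (1 - p) * q * D[1, 0] + (1 - p) * (1 - q) * D[1, 1])
print(f"defender utility at equilibrium: {value:.4f}")
```

For these illustrative payoffs no pure-strategy equilibrium exists, so both players randomize: the defender admits only occasionally (p = 0.125) while the attacker poisons most of the time (q = 0.625), and the resulting defender utility (-0.1375) quantifies the cost of operating under attack. This mirrors, in miniature, the characterization of attack and defense utilities at the Nash equilibrium described in the abstract.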
Key words
Federated learning, machine learning, wireless network, security, poisoning attack, resilience, game theory, wireless signal classification, NextG communications