
Model Poisoning Detection and Dismissal: Safeguarding Federated Learning Against Malicious Attacks

Md. Auhidur Rahman, Akash Chandra Debnath, and Stefano Giordano

Abstract:

Federated learning's decentralised nature exposes it to significant security risks, particularly model poisoning attacks. In this research, we introduce the Model Poisoning Detection and Dismissal (MPDD) mechanism, which combines anomaly detection and model verification to identify and reject malicious updates during the aggregation process. To bolster MPDD's effectiveness, we propose a novel Distributed Defense Network (DDN) architecture that enhances security and resilience by leveraging multiple clients to collaboratively detect and mitigate attacks. Extensive experiments across various federated learning scenarios and three real-world datasets (Purchase, Patients, and News FLR) demonstrate MPDD's robust performance: it identifies and dismisses poisoned models (up to 100%) while maintaining high model accuracy (up to 96%) and fast convergence. We test MPDD against severe model poisoning attacks, including MPAF, historical, random, and targeted attacks, confirming its superior detection and dismissal capabilities. Additionally, we evaluate MPDD's computational overhead and time complexity, showing its practical feasibility for resource-constrained edge devices. A comparative analysis against state-of-the-art defence mechanisms highlights MPDD's advantages in accuracy, efficiency, and resilience against sophisticated attacks.
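
To make the core idea concrete, the sketch below illustrates the general pattern of screening client updates for anomalies and dismissing outliers before federated averaging. This is a minimal illustration, not the paper's actual MPDD algorithm: the robust-distance test (coordinate-wise median reference plus a MAD-based z-score, `z_threshold`) and the flattened update vectors are assumptions chosen for brevity, and the model-verification step and DDN coordination are not shown.

```python
import numpy as np

def detect_and_dismiss(updates, z_threshold=2.5):
    """Flag client updates that are outliers in distance from the
    coordinate-wise median update, then dismiss them.

    updates: list of 1-D numpy arrays (flattened model deltas), one per client.
    Returns the indices of accepted (non-anomalous) updates.
    """
    U = np.stack(updates)                       # (n_clients, n_params)
    median = np.median(U, axis=0)               # robust reference update
    dists = np.linalg.norm(U - median, axis=1)  # each client's distance to it
    # Robust z-score via the median absolute deviation (MAD)
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
    z = 0.6745 * (dists - np.median(dists)) / mad
    return [i for i in range(len(updates)) if z[i] <= z_threshold]

def aggregate(updates, accepted):
    """Plain FedAvg over the updates that survived dismissal."""
    return np.mean(np.stack([updates[i] for i in accepted]), axis=0)

# Example: 9 benign clients plus one scaled (MPAF-style) poisoned update.
rng = np.random.default_rng(0)
benign = [rng.normal(0, 0.01, 100) for _ in range(9)]
poisoned = rng.normal(0, 0.01, 100) * 50   # heavily scaled malicious update
updates = benign + [poisoned]
accepted = detect_and_dismiss(updates)
print("dismissed clients:", sorted(set(range(10)) - set(accepted)))
global_update = aggregate(updates, accepted)
```

In this toy run the scaled update sits far from the median of the benign updates, so its robust z-score exceeds the threshold and it is excluded from aggregation; the benign updates are averaged as usual.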