Paper 1: Enhancing Federated Learning Security with a Defense Framework Against Adversarial Attacks in Privacy-Sensitive Healthcare Applications
Abstract: Federated learning (FL) is a collaborative machine learning paradigm that lets organizations train models without exchanging raw, personal data. However, adversarial attacks such as data poisoning, model poisoning, backdoor attacks, and man-in-the-middle attacks can compromise its accuracy and reliability. As FL gains traction in fields like healthcare, where disease prediction depends on sensitive data, resistance to such threats is essential. Although the security of centralized machine learning has been studied extensively, federated systems still lack strong defenses. This research therefore develops a framework for detecting and thwarting adversarial attacks in FL, protecting both clients and the server, and evaluates its effectiveness using PyTorch. Under benign conditions, the baseline FL system achieved an average accuracy of 90.07%, precision, recall, and F1-scores of roughly 0.9007 to 0.9008, and AUC values of 0.95 to 0.96. The defense-enhanced FL system showed strong resilience and maintained dependable classification (precision, recall, and F1-scores of roughly 0.8590 to 0.8598) with AUC values of 0.93 to 0.94, despite a decline in accuracy to 85.97% (about 4.1 percentage points) owing to security overhead. Under adversarial attacks, the defense framework performed well, with an 84.33% attack detection rate, 99.32% precision, 96.62% accuracy, and a low false positive rate of 0.15%. Latency analysis revealed the trade-off: rounds in the baseline system averaged 13 seconds, while the defense-enhanced system stabilized at 54 to 56 seconds. These findings demonstrate a practical balance among accuracy, efficiency, and security, establishing the defense-enhanced FL system as a reliable option for privacy-sensitive healthcare applications and for safe, robust machine learning collaborations.
Keywords: Federated learning; machine learning; privacy; adversarial attacks; defense framework; global model; healthcare; disease prediction
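To make the threat model concrete, the sketch below shows one generic way a federated server can resist poisoned updates: a norm-based filter applied before aggregation. This is an illustrative example only; the function names, the median-based threshold, and the filtering rule are hypothetical and are not the framework proposed in this paper.

```python
# Illustrative sketch only: a minimal federated-averaging round with a
# norm-based update filter, one simple defense against model/data
# poisoning. All names and thresholds are hypothetical assumptions,
# not the paper's actual defense framework.
from statistics import median

def l2_norm(update):
    """L2 norm of a flat list of update weights."""
    return sum(w * w for w in update) ** 0.5

def clip_threshold(norms, factor=2.0):
    """Hypothetical rule: flag updates whose L2 norm exceeds
    factor * the median norm of this round's updates."""
    return factor * median(norms)

def defended_fedavg(client_updates):
    """Average only the client updates that pass the norm filter."""
    norms = [l2_norm(u) for u in client_updates]
    limit = clip_threshold(norms)
    accepted = [u for u, n in zip(client_updates, norms) if n <= limit]
    dim = len(client_updates[0])
    return [sum(u[i] for u in accepted) / len(accepted) for i in range(dim)]

# Example: three benign updates and one oversized (poisoned) update.
benign = [[0.10, 0.20], [0.12, 0.18], [0.09, 0.21]]
poisoned = [10.0, -10.0]
global_update = defended_fedavg(benign + [poisoned])
# The poisoned update's norm (~14.1) far exceeds the median-based
# threshold, so only the three benign updates are averaged.
```

Real defenses trade robustness for overhead in exactly the way the abstract reports: the extra per-round filtering and verification work is what lengthens rounds relative to plain federated averaging.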