Federated Learning with Differential Privacy: Balancing Model Performance and Data Protection in Distributed AI Systems

Authors

  • Tamara Mohamed, Department of Computer Techniques Engineering, College of Engineering Technology, University of Kut, Iraq

DOI:

https://doi.org/10.65021/mwsj.v1.i1.1

Keywords:

Federated Learning, Differential Privacy, Privacy-Preserving Machine Learning, Distributed Systems, Data Protection

Abstract

As machine learning systems become increasingly prevalent in privacy-sensitive domains, the need for training high-performance models while preserving individual privacy has become paramount. This paper presents a comprehensive analysis of federated learning combined with differential privacy mechanisms, addressing the fundamental tension between model utility and privacy protection. We propose an adaptive noise calibration framework that dynamically adjusts privacy parameters based on model convergence patterns and client heterogeneity. Through extensive experiments on benchmark datasets, we demonstrate that our approach achieves superior privacy-utility trade-offs compared to existing methods, maintaining competitive model accuracy while providing strong theoretical privacy guarantees. Our results show that careful calibration of differential privacy parameters can reduce the performance degradation typically associated with privacy-preserving federated learning from 15-20% to 5-8% across various machine learning tasks.
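The abstract's adaptive noise calibration can be illustrated with a minimal sketch. The two ingredients are the standard Gaussian mechanism (per-update clipping followed by noise scaled to the clipping norm) and a calibration rule that adjusts the noise multiplier from the observed loss trajectory. The rule below (`adaptive_sigma`, shrinking noise toward a floor as the loss plateaus) is purely illustrative, assumed for this sketch; the paper's actual calibration scheme and function names are not specified in the abstract.

```python
import math
import random

def clip_and_noise(grad, clip_norm, sigma, rng):
    """Gaussian mechanism: clip a gradient vector to clip_norm in L2,
    then add N(0, (sigma * clip_norm)^2) noise to each coordinate."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / (norm + 1e-12))
    clipped = [g * scale for g in grad]
    return [g + rng.gauss(0.0, sigma * clip_norm) for g in clipped]

def adaptive_sigma(base_sigma, loss_history, min_sigma=0.5):
    """Hypothetical calibration rule (not the paper's method): scale the
    noise multiplier by the relative loss improvement in the last round,
    interpolating between min_sigma (plateau) and base_sigma (fast
    convergence)."""
    if len(loss_history) < 2:
        return base_sigma
    improvement = loss_history[-2] - loss_history[-1]
    rel = max(0.0, min(1.0, improvement / (abs(loss_history[-2]) + 1e-12)))
    return min_sigma + (base_sigma - min_sigma) * rel

# Example round: a client update is clipped and noised with a
# convergence-aware multiplier before being sent to the server.
rng = random.Random(0)
sigma = adaptive_sigma(1.2, [0.80, 0.78])
noisy_update = clip_and_noise([3.0, 4.0], clip_norm=1.0, sigma=sigma, rng=rng)
```

Note that in a real deployment, any round-to-round change in the noise multiplier must be fed back into the privacy accountant, since a smaller sigma increases the per-round privacy cost.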

Published

2025-09-09

How to Cite

Federated Learning with Differential Privacy: Balancing Model Performance and Data Protection in Distributed AI Systems. (2025). Milky Way Scientific Journal, 1(1), 1-10. https://doi.org/10.65021/mwsj.v1.i1.1