Federated Learning with Differential Privacy: Balancing Model Performance and Data Protection in Distributed AI Systems
DOI: https://doi.org/10.65021/mwsj.v1.i1.1

Keywords: Federated Learning, Differential Privacy, Privacy-Preserving Machine Learning, Distributed Systems, Data Protection

Abstract
As machine learning systems become increasingly prevalent in privacy-sensitive domains, the need for training high-performance models while preserving individual privacy has become paramount. This paper presents a comprehensive analysis of federated learning combined with differential privacy mechanisms, addressing the fundamental tension between model utility and privacy protection. We propose an adaptive noise calibration framework that dynamically adjusts privacy parameters based on model convergence patterns and client heterogeneity. Through extensive experiments on benchmark datasets, we demonstrate that our approach achieves superior privacy-utility trade-offs compared to existing methods, maintaining competitive model accuracy while providing strong theoretical privacy guarantees. Our results show that careful calibration of differential privacy parameters can reduce the performance degradation typically associated with privacy-preserving federated learning from 15-20% to 5-8% across various machine learning tasks.
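The abstract describes adding calibrated noise to client updates during federated aggregation. The paper's actual adaptive calibration framework is not reproduced here; the following is a minimal sketch of one standard building block it relies on, a single federated averaging round with per-client clipping and the Gaussian mechanism. The function name `dp_federated_round` and all parameter choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dp_federated_round(global_model, client_updates, clip_norm, sigma, rng):
    """One round of differentially private federated averaging (sketch).

    Each client update is clipped to L2 norm `clip_norm` (bounding
    per-client sensitivity), the clipped updates are averaged, and
    Gaussian noise with standard deviation sigma * clip_norm / n is
    added to the aggregate before applying it to the global model.
    """
    n = len(client_updates)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        # Scale down any update whose norm exceeds the clipping bound.
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append(u * scale)
    avg = np.mean(clipped, axis=0)
    # Gaussian-mechanism noise; the privacy accountant (not shown)
    # tracks the cumulative (epsilon, delta) budget across rounds.
    noise = rng.normal(0.0, sigma * clip_norm / n, size=avg.shape)
    return global_model + avg + noise
```

An adaptive scheme of the kind the abstract mentions would vary `sigma` across rounds, for example as a function of observed convergence, while the accountant tracks the total privacy budget spent.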
License
Copyright (c) 2025 Milky Way Scientific Journal

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.