Evaluation of Recent Developments in Federated Learning
Research Article
Open Access
CC BY

Yuhao Lu 1*
1 College of Science, Mathematics and Technology, Wenzhou-Kean University, No.88 Daxue Road, Ouhai District, Wenzhou City, Zhejiang Province, China, 325060
*Corresponding author: luyuh@kean.edu
Published on 13 August 2025
ACE Vol.184
ISSN (Print): 2755-2721
ISSN (Online): 2755-273X
ISBN (Print): 978-1-80590-307-9
ISBN (Online): 978-1-80590-308-6

Abstract

Federated learning (FL) trains models across decentralized clients without centralizing raw data, making it a crucial technology for healthcare, IoT, and finance applications. This paper evaluates recent advancements in FL from 2023 to 2025, focusing on optimization algorithms, privacy-preserving techniques, communication efficiency, and real-world applications. It compares algorithms such as FedAvg, FedProx, SCAFFOLD, and FedDyn, assessing their performance under data heterogeneity and communication constraints. Privacy techniques such as differential privacy and secure aggregation are evaluated for their impact on model accuracy and their computational overhead. Communication-efficient methods and real-world deployments are also analyzed. The evaluation offers actionable insights for selecting appropriate FL methods for specific use cases.
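To make the baseline that the compared algorithms build on concrete, the FedAvg server-side aggregation step (McMahan et al., 2017) can be sketched as follows. This is a minimal illustration with hypothetical toy data: parameters are plain lists of floats, whereas real deployments aggregate full model tensors over many rounds.

```python
# Minimal FedAvg aggregation sketch (toy example, not a full training loop).
# Each client sends its locally updated weights plus its local sample count;
# the server averages the weights, weighted by how much data each client holds.

def fedavg_aggregate(client_weights, client_sizes):
    """Sample-count-weighted average of client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    aggregated = [0.0] * dim
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            aggregated[i] += (n / total) * w
    return aggregated

# Example: two clients with unequal data volumes.
w_global = fedavg_aggregate(
    client_weights=[[1.0, 2.0], [3.0, 4.0]],
    client_sizes=[10, 30],
)
# -> [2.5, 3.5]  (client 2 contributes 3x the weight of client 1)
```

Under data heterogeneity, variants such as FedProx and SCAFFOLD modify the client update rather than this averaging step (a proximal term and control variates, respectively), which is why the aggregation above remains the common reference point.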

Keywords:

Federated learning, optimization algorithms, privacy preservation, communication efficiency, critical evaluation


References

[1]. McMahan, H. B., Moore, E., Ramage, D., Hampson, S., & y Arcas, B. A. (2017). Communication-efficient learning of deep networks from decentralized data. Artificial Intelligence and Statistics, 1273-1282. https://proceedings.mlr.press/v54/mcmahan17a.html

[2]. Li, T., Sahu, A. K., Talwalkar, A., & Smith, V. (2020). Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine, 37(3), 50-60. https://doi.org/10.1109/msp.2020.2975749

[3]. Chai, D., Wang, L., Yang, L., Zhang, J., Chen, K., & Yang, Q. (2023). A survey for federated learning evaluations: Goals and measures. arXiv preprint arXiv:2308.11841. https://arxiv.org/abs/2308.11841

[4]. Baumgart, G. A., Shin, J., Payani, A., Lee, M., & Kompella, R. R. (2024). Not All Federated Learning Algorithms Are Created Equal: A Performance Evaluation Study. arXiv preprint arXiv:2403.17287. https://arxiv.org/abs/2403.17287

[5]. Caldas, S., Duddu, S. M. K., Wu, P., Li, T., Konečný, J., McMahan, H. B., Smith, V., & Talwalkar, A. (2019). LEAF: A Benchmark for Federated Settings. arXiv preprint arXiv:1812.01097. https://arxiv.org/abs/1812.01097

[6]. Li, T., Sahu, A. K., Zaheer, M., Sanjabi, M., Talwalkar, A., & Smith, V. (2020). Federated Optimization in Heterogeneous Networks. arXiv preprint arXiv:1812.06127. https://arxiv.org/abs/1812.06127

[7]. Acar, D. A. E., Zhao, Y., Navarro, R. M., Mattina, M., Whatmough, P. N., & Saligrama, V. (2021). Federated Learning Based on Dynamic Regularization. arXiv preprint arXiv:2111.04263. https://arxiv.org/abs/2111.04263

[8]. Wei, K., Li, J., Ding, M., Ma, C., Yang, H. H., Farokhi, F., ... & Poor, H. V. (2020). Federated learning with differential privacy: Algorithms and performance analysis. IEEE Transactions on Information Forensics and Security, 15, 3454-3469. https://ieeexplore.ieee.org/document/9069945

[9]. So, J., Güler, B., & Avestimehr, A. S. (2020). Turbo-Aggregate: Breaking the Quadratic Aggregation Barrier in Secure Federated Learning. arXiv preprint arXiv:2002.04156. https://arxiv.org/abs/2002.04156

[10]. Hardy, S., Henecka, W., Ivey-Law, H., Nock, R., Patrini, G., Smith, G., & Thorne, B. (2017). Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption. arXiv preprint arXiv:1711.10677. https://arxiv.org/abs/1711.10677

[11]. Li, X., Gu, Y., Dvornek, N., Staib, L. H., Ventola, P., & Duncan, J. S. (2020). Multi-site fMRI analysis using privacy-preserving federated learning and domain adaptation: ABIDE results. Medical Image Analysis, 65, 101765. https://doi.org/10.1016/j.media.2020.101765

[12]. Rokade, M. D. (2024). Advancements in Privacy-Preserving Techniques for Federated Learning: A Machine Learning Perspective. Journal of Electrical Systems, 20(2s), 1075-1088. https://doi.org/10.52783/jes.1754

[13]. Wu, C., Wu, F., Lyu, L., Huang, Y., & Xie, X. (2022). Communication-efficient federated learning via knowledge distillation. Nature Communications, 13(1). https://doi.org/10.1038/s41467-022-29763-x

[14]. Karimireddy, S. P., Kale, S., Mohri, M., Reddi, S. J., Stich, S. U., & Suresh, A. T. (2020). SCAFFOLD: Stochastic controlled averaging for federated learning. Proceedings of the 37th International Conference on Machine Learning, 5132-5143. https://proceedings.mlr.press/v119/karimireddy20a.html

[15]. Zhang, D., Xiao, M., & Skoglund, M. (2023). Over-the-Air Computation Empowered Federated Learning: A Joint Uplink-Downlink Design. arXiv preprint arXiv:2311.04059. https://arxiv.org/abs/2311.04059

[16]. Nishio, T., & Yonetani, R. (2019). Client selection for federated learning with heterogeneous resources in mobile edge. Proceedings of the IEEE International Conference on Communications, 1-7. https://ieeexplore.ieee.org/document/8761315

Cite this article

Lu, Y. (2025). Evaluation of Recent Developments in Federated Learning. Applied and Computational Engineering, 184, 1-6.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

About volume

Volume title: Proceedings of CONF-MLA 2025 Symposium: Intelligent Systems and Automation: AI Models, IoT, and Robotic Algorithms

ISBN: 978-1-80590-307-9 (Print) / 978-1-80590-308-6 (Online)
Editor: Hisham AbouGrad
Conference website: https://www.confmla.org/
Conference date: 17 November 2025
Series: Applied and Computational Engineering
Volume number: Vol.184
ISSN: 2755-2721 (Print) / 2755-273X (Online)