Sabah, Fahad ORCID: https://orcid.org/0000-0003-1558-2616, Chen, Yuwen ORCID: https://orcid.org/0000-0001-6414-9697, Yang, Zhen ORCID: https://orcid.org/0009-0007-5182-0892, Azam, Muhammad ORCID: https://orcid.org/0000-0001-6723-9401, Ahmad, Nadeem ORCID: https://orcid.org/0000-0001-6894-5703 and Sarwar, Raheem ORCID: https://orcid.org/0000-0002-0640-807X (2024) Model optimization techniques in personalized federated learning: a survey. Expert Systems with Applications, 243. 122874. ISSN 0957-4174
Accepted Version
Available under License Creative Commons Attribution Non-commercial No Derivatives.
Abstract
Personalized federated learning (PFL) is an exciting approach that allows machine learning (ML) models to be trained on diverse and decentralized sources of data while maintaining client privacy and autonomy. However, PFL faces several challenges that can degrade the performance and effectiveness of the learning process, including data heterogeneity, communication overhead, model privacy, model drift, client heterogeneity, label noise and imbalance, federated optimization challenges, and client participation and engagement. To address these challenges, researchers are exploring innovative techniques and algorithms, including several optimization algorithms, that can enable efficient and effective PFL. This survey provides an overview of the challenges and motivations related to model optimization strategies for PFL, as well as the state-of-the-art (SOTA) methods and algorithms that seek to address these challenges. Overall, this survey can serve as a valuable resource for researchers interested in the emerging field of PFL and its potential for personalized machine learning in a federated environment.
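To illustrate the kind of optimization pipeline the survey discusses, the sketch below shows one common PFL pattern: a FedAvg-style aggregation round followed by per-client fine-tuning of the shared model. The linear model, learning rates, and simulated non-IID client data are hypothetical illustrations, not a method proposed in the paper.

```python
import numpy as np

def local_update(weights, data, lr=0.1, epochs=1):
    """One client's local gradient steps on a simple linear regression model (illustrative only)."""
    X, y = data
    w = weights.copy()
    for _ in range(epochs):
        preds = X @ w
        grad = X.T @ (preds - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients, personal_lr=0.05):
    """FedAvg-style aggregation followed by per-client fine-tuning (a common personalization scheme)."""
    # Each client starts from the shared global model and trains locally.
    client_updates = [local_update(global_w, data) for data in clients]
    # The server averages the local models into a new global model.
    new_global = np.mean(client_updates, axis=0)
    # Personalization: each client fine-tunes the new global model on its own data.
    personalized = [local_update(new_global, data, lr=personal_lr) for data in clients]
    return new_global, personalized

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Simulate heterogeneous (non-IID) clients: each sees inputs drawn around a different mean.
    clients = []
    for shift in (0.0, 1.0, 2.0):
        X = rng.normal(shift, 1.0, size=(50, 2))
        y = X @ true_w + rng.normal(0, 0.1, size=50)
        clients.append((X, y))
    global_w = np.zeros(2)
    for _ in range(20):
        global_w, personalized = federated_round(global_w, clients)
    print("global model:", np.round(global_w, 2))
    print("personalized models:", [np.round(w, 2) for w in personalized])
```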