e-space
Manchester Metropolitan University's Research Repository

    Privacy-Preserving Federated Learning of Remote Sensing Image Classification With Dishonest Majority

    Zhu, Jiang, Wu, Jun (ORCID: https://orcid.org/0000-0003-2483-6980), Bashir, Ali Kashif (ORCID: https://orcid.org/0000-0003-2601-9327), Pan, Qianqian (ORCID: https://orcid.org/0000-0001-8093-3377) and Yang, Wu (2023) Privacy-Preserving Federated Learning of Remote Sensing Image Classification With Dishonest Majority. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 16. pp. 4685-4698. ISSN 1939-1404

    Published Version
    Available under License Creative Commons Attribution.

    Abstract

    The classification of remote sensing images provides valuable information for a range of smart-city applications, including urban planning, construction, and water resource management. Federated learning (FL) is often adopted to address the limited resources and data confidentiality constraints of remote sensing image classification. Privacy-preserving federated learning (PPFL) is a state-of-the-art FL scheme tailored to privacy-constrained settings; it must safeguard data privacy while preserving model accuracy. However, existing PPFL methods are vulnerable to model poisoning attacks, especially when the majority of clients are dishonest. To address this challenge, we propose a blockchain-empowered PPFL framework for remote sensing image classification under a poisonous dishonest majority, which defends against encrypted model poisoning attacks without compromising users' privacy. Specifically, we first propose a proof-of-accuracy (PoA) method that evaluates encrypted models in an authenticated way. We then design a secure aggregation framework based on PoA that remains robust even when adversaries form the majority. Experimental results show that our scheme reaches 92.5%, 90.61%, 87.48%, and 81.84% accuracy when attackers account for 20%, 40%, 60%, and 80% of clients, respectively; this matches the FedAvg accuracy obtained when benign clients alone hold the corresponding proportion of the data. These results demonstrate the proposed scheme's superiority in defending against model poisoning attacks.
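
    The abstract outlines an accuracy-gated aggregation: each submitted model update is vetted by proof of accuracy (PoA) before averaging, so robustness does not rest on benign clients forming a majority. In the paper this check runs on encrypted models with blockchain support; the plaintext Python sketch below illustrates only the gating idea, and every name in it (evaluate_accuracy, ACC_THRESHOLD, poa_fedavg) is an illustrative assumption, not the authors' API.

        # Plaintext sketch of PoA-gated FedAvg. The paper's actual scheme
        # audits *encrypted* updates with blockchain support; here everything
        # is in the clear purely for illustration.
        from typing import Callable, Dict, List

        import numpy as np

        ACC_THRESHOLD = 0.5  # assumed accuracy cut-off for the PoA check


        def fedavg(updates: List[Dict[str, np.ndarray]],
                   weights: List[float]) -> Dict[str, np.ndarray]:
            """Weighted average of client updates (standard FedAvg step)."""
            total = sum(weights)
            return {k: sum(w * u[k] for w, u in zip(weights, updates)) / total
                    for k in updates[0]}


        def poa_fedavg(updates: List[Dict[str, np.ndarray]],
                       sample_counts: List[int],
                       evaluate_accuracy: Callable[[Dict[str, np.ndarray]], float]
                       ) -> Dict[str, np.ndarray]:
            """Aggregate only updates that pass the accuracy audit.

            Because each update is vetted individually, a dishonest majority
            is filtered out before averaging instead of outvoting the benign
            minority, which is why accuracy degrades gracefully as the
            attacker proportion grows.
            """
            accepted, weights = [], []
            for update, n in zip(updates, sample_counts):
                if evaluate_accuracy(update) >= ACC_THRESHOLD:  # PoA-style audit
                    accepted.append(update)
                    weights.append(float(n))
            if not accepted:
                raise RuntimeError("no update passed the accuracy check")
            return fedavg(accepted, weights)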
