e-space
Manchester Metropolitan University's Research Repository

    Bridging Accuracy and Explainability: A SHAP-Enhanced CNN for Skin Cancer Diagnosis

    Roy, Shudipta (ORCID: https://orcid.org/0009-0000-8850-2846), Fan, Xinqi (ORCID: https://orcid.org/0000-0002-8025-016X), Alam, Nashid (ORCID: https://orcid.org/0000-0001-6488-8473), Chen, Xueli (ORCID: https://orcid.org/0000-0002-7928-6392), Qureshi, Rizwan (ORCID: https://orcid.org/0000-0002-0039-982X), Wu, Jia (ORCID: https://orcid.org/0000-0001-8392-8338) and Yap, Moi Hoon (ORCID: https://orcid.org/0000-0001-7681-4287) (2026) Bridging Accuracy and Explainability: A SHAP-Enhanced CNN for Skin Cancer Diagnosis. In: Medical Image Understanding and Analysis, pp. 72-86. Presented at Medical Image Understanding and Analysis 2025, 15-17 July 2025, University of Leeds, UK.

    Accepted Version
    File will be available on: 15 July 2026.
    Available under License: In Copyright.


    Abstract

    Early detection of melanoma, the most lethal form of skin cancer, can greatly improve patient survival rates. Although AI models have demonstrated strong diagnostic capabilities, their integration into clinical practice remains limited by concerns over explainability and trust. This work proposes a SHAP-enhanced Convolutional Neural Network (SCNN) for binary classification of skin lesions into melanoma and non-melanoma categories, directly integrating Shapley Additive Explanations (SHAP) as an additional input channel to enhance both performance and explainability. We evaluated SCNN on the ISIC 2017 and ISIC 2018 datasets, achieving ROC-AUC scores of 0.80 and 0.91, respectively. These results indicate substantial improvements in classification accuracy and robustness compared to baseline models. An analysis of model explainability on the ISIC 2017 dataset reveals that SCNN more accurately highlights lesion areas identified by experts, achieving a mean Intersection-over-Union score of 0.34, a marginal improvement over the baseline score of 0.32. Moreover, 53% of the SCNN model's correct melanoma predictions were based on clinically relevant regions, compared with only 40% for the baseline model. Qualitative evaluations via Grad-CAM visualisations further confirmed that SCNN prioritised medically meaningful features, such as lesion asymmetry and border irregularities. These results demonstrate that integrating explainability into model training can enhance transparency without compromising performance, thereby gaining more trust from clinicians.
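    The abstract describes feeding SHAP attributions to the classifier as an additional input channel alongside the RGB image. The paper's exact pipeline is not public, so the following is only a minimal NumPy sketch of that idea: the function name `add_shap_channel` and the normalisation choice are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np

    def add_shap_channel(image: np.ndarray, shap_map: np.ndarray) -> np.ndarray:
        """Stack a per-pixel SHAP attribution map onto an RGB image as a
        fourth input channel (hypothetical helper illustrating the SCNN idea).

        image:    (H, W, 3) RGB array
        shap_map: (H, W) attribution values, which may be negative
        """
        # Min-max normalise attributions to [0, 1] so they share the image scale.
        lo, hi = shap_map.min(), shap_map.max()
        norm = (shap_map - lo) / (hi - lo) if hi > lo else np.zeros_like(shap_map)
        # Concatenate along the channel axis: the result is (H, W, 4),
        # so the first convolutional layer would take 4 input channels.
        return np.concatenate([image, norm[..., None]], axis=-1)

    # Example: a dummy 224x224 lesion image with a random attribution map.
    img = np.random.rand(224, 224, 3)
    attr = np.random.randn(224, 224)
    x = add_shap_channel(img, attr)
    print(x.shape)  # (224, 224, 4)
    ```

    In a full pipeline the attribution map would come from a SHAP explainer run on a pretrained baseline model, and the network's first layer would be widened to accept the extra channel.
    
    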
