e-space
Manchester Metropolitan University's Research Repository

    COVID-19 Detection from Chest X-Ray images using Feature Fusion and Deep learning

    Alam, Nur-A, Ahsan, Md Mominul, Based, Md Abdul, Haider, Julfikar (ORCID: https://orcid.org/0000-0001-7010-8285) and Kowalski, Marcin (2021) COVID-19 Detection from Chest X-Ray images using Feature Fusion and Deep learning. Sensors, 21 (4). ISSN 1424-8220

    Published Version. Available under License Creative Commons Attribution.

    Abstract

    COVID-19, caused by the novel coronavirus, is currently considered one of the most dangerous and deadly diseases affecting the human body. In December 2019, the coronavirus, thought to have originated in Wuhan, China, began spreading rapidly around the world and has been responsible for a large number of deaths. Early detection of COVID-19 through accurate diagnosis, particularly for cases with no obvious symptoms, may reduce the patient death rate. Chest X-ray images are primarily used for the diagnosis of this disease. This research proposes a machine vision approach to detect COVID-19 from chest X-ray images. The features extracted by the histogram of oriented gradients (HOG) and a convolutional neural network (CNN) from X-ray images were fused to develop the classification model through training by a CNN (VGGNet). A modified anisotropic diffusion filtering (MADF) technique was employed for better edge preservation and noise reduction in the images. A watershed segmentation algorithm was used to mark the significant fracture region in the input X-ray images. The testing stage considered generalized data for performance evaluation of the model. Cross-validation analysis revealed that a 5-fold strategy could successfully mitigate the overfitting problem. The proposed feature fusion with the deep learning technique delivered satisfactory performance in identifying COVID-19 compared with closely related works, with a testing accuracy of 99.49%, specificity of 95.7% and sensitivity of 93.65%. When compared to other classification techniques, such as ANN, KNN, and SVM, the CNN technique used in this study showed better classification performance. K-fold cross-validation demonstrated that the proposed feature fusion technique (98.36%) provided higher accuracy than the individual feature extraction methods, such as HOG (87.34%) or CNN (93.64%).
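    The core idea of the abstract's feature fusion is concatenating a hand-crafted HOG descriptor with CNN-derived features into a single vector before classification. The sketch below illustrates only that concatenation step in NumPy, under loud assumptions: `hog_like_features` is a crude global orientation histogram (not a full cell/block HOG), and `cnn_like_features` substitutes a random linear projection for the paper's trained VGGNet extractor. None of these helper names come from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def hog_like_features(image, n_bins=9):
        """Crude HOG-style descriptor: one global histogram of gradient
        orientations weighted by gradient magnitude (illustration only;
        a real HOG uses local cells and block normalization)."""
        gy, gx = np.gradient(image.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation [0, pi)
        hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=mag)
        return hist / (hist.sum() + 1e-8)

    def cnn_like_features(image, n_features=16):
        """Placeholder for CNN features: a fixed random projection stands in
        for the paper's trained VGGNet feature extractor (assumption)."""
        proj = rng.normal(size=(image.size, n_features))
        return image.ravel() @ proj

    image = rng.random((64, 64))  # stand-in for a preprocessed chest X-ray
    fused = np.concatenate([hog_like_features(image), cnn_like_features(image)])
    print(fused.shape)  # 9 orientation bins + 16 projected features -> (25,)
    ```

    In the paper's pipeline, vectors like `fused` would then be fed to the CNN classifier; here the fusion is simply vector concatenation, which is the part the abstract makes explicit.
    
    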
