e-space
Manchester Metropolitan University's Research Repository

    Sibling Discrimination Using Linear Fusion on Deep Learning Face Recognition Models

    Goel, Rita (ORCID: https://orcid.org/0009-0002-5290-686X), Alamgir, Maida, Wahab, Haroon, Alamgir, Maria, Mehmood, Irfan, Ugail, Hassan and Sinha, Amit (2024) Sibling Discrimination Using Linear Fusion on Deep Learning Face Recognition Models. Journal of Informatics and Web Engineering, 3 (3). pp. 214-232. ISSN 2821-370X

    Published Version
    Available under License Creative Commons Attribution Non-commercial No Derivatives.


    Abstract

    Facial recognition technology has revolutionised human identification, providing a non-invasive alternative to traditional biometric methods like signatures and voice recognition. The integration of deep learning has significantly enhanced the accuracy and adaptability of these systems, now widely used in criminal identification, access control, and security. Initial research focused on recognising full-frontal facial features, but recent advancements have tackled the challenge of identifying partially visible faces, a scenario that often reduces recognition accuracy. This study aims to identify siblings based on facial features, particularly in cases where only partial features like the eyes, nose, or mouth are visible. Utilising advanced deep learning models such as VGG19, VGG16, VGGFace, and FaceNet, the research introduces a framework to differentiate between sibling images effectively. To boost discrimination accuracy, the framework employs a linear fusion technique that merges insights from all the models used. The methodology involves preprocessing image pairs, extracting embeddings with pre-trained models, and integrating information through linear fusion. Evaluation metrics, including confusion matrix analysis, assess the framework's robustness and precision. Custom datasets of cropped sibling facial areas form the experimental basis, testing the models under various conditions such as different facial poses and cropped regions. Model selection emphasises accuracy and extensive training on large datasets to ensure reliable performance in distinguishing subtle facial differences. Experimental results show that combining multiple models' outputs using linear fusion improves the accuracy and realism of sibling discrimination based on facial features. Findings indicate a minimum accuracy of 96% across different facial regions. Although this is slightly lower than the accuracy achieved by a single model like VGG16 with full-frontal poses, the fusion approach provides a more realistic outcome by incorporating insights from all four models. This underscores the potential of advanced deep learning techniques in enhancing facial recognition systems for practical applications.
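    The paper's implementation is not reproduced here, but the pipeline the abstract describes (per-model embeddings, a pairwise similarity score, then a linear fusion of the four scores) can be sketched minimally. The sketch below is an illustrative assumption, not the authors' code: embeddings are stand-in random vectors, the weights and the 0.5 decision threshold are hypothetical, and in practice each vector would come from a pre-trained VGG19, VGG16, VGGFace, or FaceNet network.

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity between two face embeddings, in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def linear_fusion(scores, weights):
    # Weighted average of per-model similarity scores
    # (weights are normalised so they sum to 1).
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(np.dot(w, np.asarray(scores, dtype=float)))

# Hypothetical embeddings for one pair of cropped sibling face images,
# one embedding pair per model (stand-ins for real network outputs).
rng = np.random.default_rng(0)
models = ("VGG19", "VGG16", "VGGFace", "FaceNet")
pairs = {m: (rng.normal(size=128), rng.normal(size=128)) for m in models}

scores = [cosine_similarity(a, b) for a, b in pairs.values()]
fused = linear_fusion(scores, weights=[1, 1, 1, 1])  # equal weights assumed
same_person = fused >= 0.5  # illustrative threshold, not from the paper
```

With equal weights the fused score is simply the mean of the four per-model similarities; unequal weights would let a more accurate model (e.g. VGG16 on full-frontal poses) contribute more to the final decision.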
