Kumar, Akshi ORCID: https://orcid.org/0000-0003-4263-7168, Sharma, Kapil and Sharma, Aditi (2022) MEmoR : a multimodal emotion recognition using affective biomarkers for smart prediction of emotional health for people analytics in smart industries. Image and Vision Computing, 123. p. 104483. ISSN 0262-8856
Accepted Version
Available under License Creative Commons Attribution Non-commercial No Derivatives.
Abstract
The intersection of people, data and intelligent machines has a far-reaching impact on the productivity, efficiency and operations of a smart industry. The Internet of Things (IoT) offers great potential for workplace gains through the “quantified self” and computer vision strategies, whose goal is to improve productivity, fitness, wellness, and the work environment. Recognizing and regulating human emotion is vital to people analytics, as emotion plays an important role in workplace productivity. Within the smart-industry setting, various non-invasive IoT devices can be used to recognize emotions and study behavioral outcomes in different situations. This research puts forward a deep learning model for detecting human emotional state in real time using multimodal data from the Emotional Internet of Things (E-IoT). The proposed multimodal emotion recognition model, MEmoR, uses two data modalities: visual and psychophysiological. The video signals are sampled to obtain image frames, and a ResNet50 model pre-trained for face recognition is fine-tuned for emotion classification. Simultaneously, a CNN is trained on the psychophysiological signals, and the outputs of the two modality networks are combined using decision-level weighted fusion. The model is evaluated on the benchmark BioVid Emo DB multimodal dataset and compared with the state-of-the-art.
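The decision-level weighted fusion described above can be sketched as a weighted sum of the per-class probability vectors produced by the two modality networks. This is a minimal illustration only: the fusion weights and the five-class layout below are assumptions for the example, not values reported in the abstract.

```python
import numpy as np

def weighted_decision_fusion(p_visual, p_physio, w_visual=0.6, w_physio=0.4):
    """Combine per-class probability vectors from the visual and
    psychophysiological networks by a weighted sum at the decision level.

    The weights here are illustrative placeholders; the paper's tuned
    fusion weights are not given in the abstract.
    """
    p_visual = np.asarray(p_visual, dtype=float)
    p_physio = np.asarray(p_physio, dtype=float)
    fused = w_visual * p_visual + w_physio * p_physio
    return fused / fused.sum()  # renormalise to a probability vector

# Example: five hypothetical emotion classes; each modality network
# emits a softmax probability vector over the same classes.
visual_scores = [0.10, 0.55, 0.15, 0.10, 0.10]
physio_scores = [0.20, 0.40, 0.20, 0.10, 0.10]
fused = weighted_decision_fusion(visual_scores, physio_scores)
predicted_class = int(np.argmax(fused))
```

The final label is then the arg-max over the fused vector, so a modality with a higher weight contributes proportionally more to the decision.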