Multi-Modal Affective State Detection For Dyadic Interactions Using Thermal Imaging and Context
Mohamed, Youssef. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0001-5660-5330
2025 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Until recently, most robotic systems have operated with limited emotional intelligence, primarily responding to pre-programmed cues rather than adapting to human emotional states. Affect recognition in human-robot interaction therefore remains a significant challenge, and it is twofold: robots must not only detect emotional expressions, they must also interpret them within their social context. This requires systems capable of collecting information from their surroundings, analyzing it, and then generalizing across different interaction scenarios and cultural contexts to handle more complex situations.

This thesis tackles affect recognition using multi-modal approaches that combine thermal imaging, facial expression analysis, and contextual understanding. Thermal imaging offers unique insights into physiological responses associated with emotional states, complementing traditional vision-based approaches while maintaining non-contact operation. The integration of thermal imaging, facial expression analysis, and contextual understanding creates a comprehensive multi-modal framework that addresses the key challenges in affect recognition, such as varying lighting conditions, occlusions, and ambiguous emotional expressions. This combination provides complementary information streams that enhance robustness in real-world environments, making it an effective case study for developing context-aware emotional intelligence in robotics.

We introduce a novel context-aware transformer architecture that processes multiple data streams while maintaining temporal relationships and contextual understanding. Each modality contributes complementary information about the user's emotional state, while the context processing ensures situation-appropriate interpretation, for instance distinguishing a smile that indicates enjoyment during a collaborative task from one that masks nervousness in a stressful situation. This contextual awareness is crucial for appropriate robot responses in real-world deployments.
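
To make the fusion step concrete, below is a minimal sketch, in PyTorch, of how a context-aware transformer could combine per-frame thermal and facial-expression features with a learned context embedding. The stream names, dimensions, and classification head are illustrative assumptions, not the architecture used in the thesis.

# Hypothetical sketch of context-aware multi-modal fusion (not the thesis implementation).
import torch
import torch.nn as nn

class ContextAwareFusion(nn.Module):
    def __init__(self, thermal_dim=10, face_dim=17, n_contexts=4,
                 d_model=64, n_heads=4, n_layers=2, n_classes=3):
        super().__init__()
        # Project each modality's per-frame features into a shared token space.
        self.thermal_proj = nn.Linear(thermal_dim, d_model)
        self.face_proj = nn.Linear(face_dim, d_model)
        # A learned embedding represents the interaction context (e.g. task type).
        self.context_emb = nn.Embedding(n_contexts, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, thermal_seq, face_seq, context_id):
        # thermal_seq: (batch, T, thermal_dim), face_seq: (batch, T, face_dim)
        tokens = torch.cat([self.thermal_proj(thermal_seq),
                            self.face_proj(face_seq)], dim=1)
        ctx = self.context_emb(context_id).unsqueeze(1)   # (batch, 1, d_model)
        fused = self.encoder(torch.cat([ctx, tokens], dim=1))
        return self.classifier(fused[:, 0])               # read out the context token

# Toy forward pass: 2-second windows of thermal ROI statistics and facial action units.
model = ContextAwareFusion()
logits = model(torch.randn(1, 20, 10), torch.randn(1, 20, 17), torch.tensor([2]))
print(logits.shape)  # torch.Size([1, 3])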

The research contributions span four areas: (1) developing robust thermal feature extraction techniques that capture subtle emotional responses, (2) creating a transformer-based architecture for multi-modal fusion that effectively incorporates situational information, (3) implementing real-time processing pipelines that enable practical deployment in human-robot interaction scenarios, and (4) validating these approaches through extensive real-world interaction studies. Results show recognition accuracy improving from 77% with traditional approaches to 89% with our context-aware multi-modal system, demonstrating its ability to understand and appropriately respond to human emotions in dynamic social situations.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2025, p. x, 56
Series
TRITA-EECS-AVL ; 2025:74
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-368995, ISBN: 9789181063431 (printed), OAI: oai:DiVA.org:kth-368995, DiVA id: diva2:1991852
Public defence
2025-09-26, D37, Lindstedtsvägen 9, Stockholm, 13:00 (English)
Opponent
Supervisor
Note

QC 20250905

Available from: 2025-09-05 Created: 2025-08-25 Last updated: 2025-09-29 Bibliographically approved
List of papers
1. Automatic Frustration Detection Using Thermal Imaging
2022 (English). In: Proceedings of the 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI '22), Institute of Electrical and Electronics Engineers (IEEE), 2022, pp. 451-460. Conference paper, Published paper (Refereed)
Abstract [en]

To achieve seamless interactions, robots have to be capable of reliably detecting affective states in real time. One of the possible states that humans go through while interacting with robots is frustration. Detecting frustration from RGB images can be challenging in some real-world situations; thus, we investigate in this work whether thermal imaging can be used to create a model that is capable of detecting frustration induced by cognitive load and failure. To train our model, we collected a data set from 18 participants experiencing both types of frustration induced by a robot. The model was tested using features from several modalities: thermal, RGB, Electrodermal Activity (EDA), and all three combined. When data from both frustration cases were combined and used as training input, the model reached an accuracy of 89% with just RGB features, 87% using only thermal features, 84% using EDA, and 86% when using all modalities. Furthermore, the highest accuracy for the thermal data was reached using three facial regions of interest: nose, forehead and lower lip.
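
As an illustration of how per-region thermal features might be computed, here is a minimal sketch; the pixel coordinates of the nose, forehead, and lower-lip regions are placeholder assumptions, since in practice they would be located by a facial-landmark tracker.

# Minimal sketch of thermal region-of-interest feature extraction (assumed pipeline).
import numpy as np

# (row0, row1, col0, col1) in pixels; placeholder boxes, normally provided by a landmark tracker.
ROIS = {"nose": (60, 80, 90, 110),
        "forehead": (10, 30, 60, 140),
        "lower_lip": (95, 110, 85, 115)}

def thermal_features(frame: np.ndarray) -> dict:
    """Mean and max temperature for each facial region of interest."""
    feats = {}
    for name, (r0, r1, c0, c1) in ROIS.items():
        patch = frame[r0:r1, c0:c1]
        feats[f"{name}_mean"] = float(patch.mean())
        feats[f"{name}_max"] = float(patch.max())
    return feats

# Example with a synthetic 120x160 thermal frame in degrees Celsius.
frame = 34.0 + 0.5 * np.random.randn(120, 160)
print(thermal_features(frame))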

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Series
ACM IEEE International Conference on Human-Robot Interaction, ISSN 2167-2121
Keywords
Human-robot interaction, Thermal imaging, Frustration, cognitive load, Action units
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-322478, DOI: 10.1109/HRI53351.2022.9889545, 000869793600050, Scopus ID: 2-s2.0-85140750883
Conference
17th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI), March 7-10, 2022, held online
Note

Part of proceedings: ISBN 978-1-6654-0731-1

QC 20221216

Available from: 2022-12-16 Created: 2022-12-16 Last updated: 2025-08-25 Bibliographically approved
2. Multi-modal Affect Detection Using Thermal and Optical Imaging in a Gamified Robotic Exercise
2024 (English). In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 16, no. 5, pp. 981-997. Article in journal (Refereed) Published
Abstract [en]

Affect recognition, or the ability to detect and interpret emotional states, has the potential to be a valuable tool in the field of healthcare. In particular, it can be useful in gamified therapy, which uses gaming techniques to motivate patients and keep them engaged in therapeutic activities. This study examines the accuracy of machine learning models using thermal imaging and action unit data for affect classification in a gamified robot therapy scenario. A self-report survey and three machine learning models were used to assess emotions including frustration, boredom, and enjoyment in participants during different phases of the game. The results showed that the multi-modal approach combining thermal imaging and action units with an LSTM model reached the highest accuracy, 77%, for emotion classification over a 7-s sliding window, while thermal imaging had the lowest standard deviation among participants. The results suggest that thermal imaging and action units can be effective in detecting affective states and might be usable in healthcare applications, such as gamified therapy, as a promising non-intrusive method for recognizing internal states.
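
A minimal sketch of an LSTM classifier over a 7-second sliding window of concatenated thermal and action-unit features follows; the frame rate, feature dimensions, and class set are assumptions for the example, not the paper's exact configuration.

# Illustrative LSTM-over-sliding-window classifier (assumed configuration, PyTorch).
import torch
import torch.nn as nn

FPS = 10                      # assumed frame rate
WINDOW = 7 * FPS              # 7-second window
FEAT_DIM = 10 + 17            # thermal ROI statistics + facial action units (assumed sizes)

class WindowLSTM(nn.Module):
    def __init__(self, feat_dim=FEAT_DIM, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)  # e.g. frustration / boredom / enjoyment

    def forward(self, x):                 # x: (batch, WINDOW, feat_dim)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])         # classify from the final hidden state

def sliding_windows(features, step=FPS):
    """Yield overlapping 7-second windows from a (T, feat_dim) feature sequence."""
    for start in range(0, features.shape[0] - WINDOW + 1, step):
        yield features[start:start + WINDOW]

model = WindowLSTM()
window = torch.randn(1, WINDOW, FEAT_DIM)
print(model(window).shape)    # torch.Size([1, 3])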

Place, publisher, year, edition, pages
Springer Nature, 2024
Keywords
Action units, Emotionally aware systems, Frustration, Human–robot interaction, Multi-modal affect recognition, Thermal imaging
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-350001, DOI: 10.1007/s12369-023-01066-1, 001090565600001, Scopus ID: 2-s2.0-85175291284
Note

QC 20260127

Available from: 2024-07-05 Created: 2024-07-05 Last updated: 2026-01-27 Bibliographically approved
3. Context Matters: Understanding Socially Appropriate Affective Responses Via Sentence Embeddings
2025 (English). In: Social Robotics - 16th International Conference, ICSR + AI 2024, Proceedings, Springer Nature, 2025, pp. 78-91. Conference paper, Published paper (Refereed)
Abstract [en]

As AI systems increasingly engage in social interactions, comprehending human social dynamics is crucial. Affect recognition enables systems to respond appropriately to emotional nuances in social situations. However, existing multi-modal approaches do not account for the social appropriateness of detected emotions within their contexts. This paper presents a novel methodology that leverages sentence embeddings to distinguish between socially appropriate and inappropriate interactions, enabling more context-aware AI systems. Our approach measures the semantic distance between facial expression descriptions and predefined reference points. We evaluate our method using a benchmark dataset and a real-world robot deployment in a library, combining GPT-4(V) for expression descriptions and ada-2 for sentence embeddings to detect socially inappropriate interactions. Our results underscore the importance of considering contextual factors for effective social interaction understanding through context-aware affect recognition, contributing to the development of socially intelligent AI capable of interpreting and responding to human affect appropriately.
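
The semantic-distance idea can be sketched as follows; random vectors stand in here for the ada-2 embeddings of the GPT-4(V) expression descriptions and of the predefined reference sentences.

# Conceptual sketch: nearest-reference classification by cosine similarity of embeddings.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
dim = 1536  # dimensionality of text-embedding-ada-002 vectors

# Placeholder embeddings; in practice these come from embedding the expression
# description and the predefined reference sentences.
expression_vec = rng.normal(size=dim)
reference_vecs = {"appropriate": rng.normal(size=dim),
                  "inappropriate": rng.normal(size=dim)}

scores = {label: cosine_similarity(expression_vec, vec)
          for label, vec in reference_vecs.items()}
label = max(scores, key=scores.get)   # nearest reference point wins
print(scores, "->", label)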

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
embeddings, human-robot interaction, machine learning, Social representation
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-362501, DOI: 10.1007/978-981-96-3522-1_9, 001531722800009, Scopus ID: 2-s2.0-105002016733
Conference
16th International Conference on Social Robotics, ICSR + AI 2024, Odense, Denmark, October 23-26, 2024
Note

Part of ISBN 9789819635214

QC 20250428

Available from: 2025-04-16 Created: 2025-04-16 Last updated: 2025-12-08 Bibliographically approved
4. Fusion in Context: A Multimodal Approach to Affective State Recognition
2025 (English). Conference paper, Published paper (Refereed)
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-369160
Conference
34th IEEE International Conference on Robot and Human Interactive Communication, Eindhoven University of Technology, Eindhoven, The Netherlands, Aug 25-29, 2025.
Note

QC 20250905

Available from: 2025-08-29 Created: 2025-08-29 Last updated: 2025-09-05 Bibliographically approved
5. Are You an Expert? Instruction Adaptation Using Multi-Modal Affect Detections with Thermal Imaging and Context
2025 (English). Conference paper, Published paper (Refereed)
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-369162
Conference
IEEE International Conference on Robot and Human Interactive Communication, Eindhoven University of Technology, Eindhoven, The Netherlands, Aug 25-29, 2025.
Available from: 2025-08-29 Created: 2025-08-29 Last updated: 2025-09-05 Bibliographically approved

Open Access in DiVA

Fulltext (4170 kB), 86 downloads
File information
File: FULLTEXT01.pdf, file size 4170 kB, checksum SHA-512:
4cfb132891018bc08605881599c8dd122e9942aa12538aa4768d26a7fd1d961c19edc25f78c05d0af8d289e32f99cd1f89589ec531e17a46bdbe3a2602e022e2
Type: fulltext, Mimetype: application/pdf

Person

Mohamed, Youssef
