NeuroEngage: A Multimodal Dataset Integrating fMRI for Analyzing Conversational Engagement in Human-Human and Human-Robot Interactions
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0001-5066-7186
Stockholm University, Department of Linguistics, Stockholm, Sweden.
Stockholm University, Stockholm University Brain Imaging Centre, Stockholm, Sweden.
Stockholm University, Department of Psychology and Department of Linguistics, Stockholm, Sweden.
2025 (English). In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction, Institute of Electrical and Electronics Engineers (IEEE), 2025, p. 849-858. Conference paper, Published paper (Refereed)
Abstract [en]

This study aimed to deepen our understanding of the behavioral and neurocognitive processes involved in human-human and human-robot communication in a more ecologically valid setting than traditional neurolinguistic paradigms. We collected a novel open-source dataset (N=30 for human-human and N=20 for human-robot interactions) that includes fMRI, eye-tracking, segmented audio, video, and behavioral data, amounting to 30 minutes of free conversation per participant. To enable unrestricted, spontaneous robot behavior, we employed a novel VR-mediated teleoperation system. Our mixed design allowed us to compare participants' perception of humans and robots across three within-subject conditions of conversational engagement: Engaged Communicator, Active Listener, and Passive Listener. We provide an open-access dataset, replicable code for the teleoperation system, and an initial analysis of fMRI, behavioral, and speech data. We observed distinct neural profiles: speaking to the human agent recruited more higher-level frontal regions associated with socio-pragmatic processes, while listening to the robot recruited more sensory areas, including auditory and visual regions. Engagement levels and agent types also affected speech and behavioral patterns, offering valuable insights into conversational dynamics in human-human and human-robot interactions.
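
The record itself does not specify file formats or layout, but as an orientation aid, below is a minimal sketch of how one might load two of the dataset's modalities (an fMRI run and an eye-tracking log) with standard Python tooling: nibabel for NIfTI volumes and pandas for tabular logs. Every file name, the directory layout, and the participant ID scheme here are hypothetical assumptions, not taken from the paper; consult the released dataset's own documentation for the actual structure.

    # Minimal sketch of loading one participant's multimodal recordings.
    # All paths and file names below are hypothetical examples.
    from pathlib import Path

    import nibabel as nib   # standard reader for NIfTI fMRI volumes
    import pandas as pd     # tabular eye-tracking / behavioral logs

    def load_participant(root: Path, participant_id: str):
        """Load the fMRI run and eye-tracking log for one participant."""
        sub_dir = root / participant_id  # assumed layout: one folder per participant

        # fMRI: a 4D NIfTI image (x, y, z, time); data loads lazily until requested.
        bold = nib.load(sub_dir / "bold.nii.gz")
        print("fMRI shape:", bold.shape)

        # Eye-tracking: assumed here to be a CSV of timestamped gaze samples.
        gaze = pd.read_csv(sub_dir / "eyetracking.csv")

        return bold.get_fdata(), gaze

    # Usage (hypothetical participant ID):
    # bold_data, gaze = load_participant(Path("NeuroEngage"), "sub-01")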

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025, p. 849-858
Keywords [en]
conversation, dataset, engagement, fMRI, human-robot interaction, neuroimaging
National Category
Human Computer Interaction; Robotics and automation; Computer Sciences; Natural Language Processing; Psychology (Excluding Applied Psychology)
Identifiers
URN: urn:nbn:se:kth:diva-363755
DOI: 10.1109/HRI61500.2025.10974251
Scopus ID: 2-s2.0-105004876905
OAI: oai:DiVA.org:kth-363755
DiVA, id: diva2:1959850
Conference
20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, Mar 4 2025 - Mar 6 2025
Note

QC 20250527

Available from: 2025-05-21. Created: 2025-05-21. Last updated: 2025-05-27. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Torubarova, Ekaterina; Abelho Pereira, André Tiago
