Torubarova, Ekaterina (ORCID iD: orcid.org/0000-0001-5066-7186)
Publications (3 of 3)
Torubarova, E. (2025). Brain-Focused Multimodal Approach for Studying Conversational Engagement in HRI. In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at the 20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, Mar 4-6, 2025 (pp. 1894-1896). Institute of Electrical and Electronics Engineers (IEEE)
Brain-Focused Multimodal Approach for Studying Conversational Engagement in HRI
2025 (English). In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction, Institute of Electrical and Electronics Engineers (IEEE), 2025, pp. 1894-1896. Conference paper, Published paper (Refereed)
Abstract [en]

My research adopts an interdisciplinary approach to studying conversational engagement in human-robot interaction, integrating cognitive neuroscience with multimodal behavioral measures and self-assessment to provide a more comprehensive and objective evaluation of user experience. By using brain imaging to analyze conversations, I aim to investigate the differences between interactions with humans and robots, and to deepen our understanding of the cognitive mechanisms underlying communication. In addition to exploring how neural patterns vary across agents, my work leverages multimodal machine learning to assess how brain imaging data, combined with other modalities such as eye tracking, audio, and video, can improve engagement detection. The ultimate goal is to design robots that can effectively detect, evaluate, and respond to user engagement, thereby facilitating more effective communication.
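The abstract's final point, combining brain imaging with other modalities for engagement detection, can be illustrated with a minimal late-fusion sketch. Everything below (feature names, dimensions, window counts, labels) is a hypothetical placeholder for illustration and is not the author's actual pipeline:

```python
# Minimal late-fusion sketch: per-modality features for each conversation
# window are concatenated and classified. All names, dimensions, and data
# here are hypothetical placeholders, not the author's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_windows = 200  # labeled conversation windows

fmri = rng.normal(size=(n_windows, 50))    # e.g. ROI-averaged brain activity
gaze = rng.normal(size=(n_windows, 8))     # e.g. fixation/saccade statistics
audio = rng.normal(size=(n_windows, 12))   # e.g. prosodic features
video = rng.normal(size=(n_windows, 16))   # e.g. facial action units
labels = rng.integers(0, 2, size=n_windows)  # engaged vs. not engaged

X = np.hstack([fmri, gaze, audio, video])  # late fusion by concatenation
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```

Dropping one modality's columns from X before training gives a crude ablation test of how much each channel contributes to detection.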

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
brain imaging, conversational engagement, human-robot interaction, multimodal
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-363765 (URN)
10.1109/HRI61500.2025.10973818 (DOI)
2-s2.0-105004872665 (Scopus ID)
Conference
20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, Mar 4-6, 2025
Note

Part of ISBN 9798350378931

QC 20250527

Available from: 2025-05-21. Created: 2025-05-21. Last updated: 2025-05-27. Bibliographically approved.
Torubarova, E., Arvidsson, C., Berrebi, J., Uddén, J. & Abelho Pereira, A. T. (2025). NeuroEngage: A Multimodal Dataset Integrating fMRI for Analyzing Conversational Engagement in Human-Human and Human-Robot Interactions. In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at the 20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, Mar 4-6, 2025 (pp. 849-858). Institute of Electrical and Electronics Engineers (IEEE)
NeuroEngage: A Multimodal Dataset Integrating fMRI for Analyzing Conversational Engagement in Human-Human and Human-Robot Interactions
2025 (English). In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction, Institute of Electrical and Electronics Engineers (IEEE), 2025, pp. 849-858. Conference paper, Published paper (Refereed)
Abstract [en]

This study aimed to deepen our understanding of the behavioral and neurocognitive processes involved in human-human and human-robot communication in a more ecologically valid setting than traditional neurolinguistic paradigms. We collected a novel open-source dataset (N=30 for human-human and N=20 for human-robot interactions) that includes fMRI, eye-tracking, segmented audio, video, and behavioral data, amounting to 30 minutes of free conversation per participant. To enable unrestricted, spontaneous robot behavior, we employed a novel VR-mediated teleoperation system. Our mixed design allowed us to compare participants' perception of humans and robots across three within-subject conditions of conversational engagement: Engaged Communicator, Active Listener, and Passive Listener. We provide an open-access dataset, replicable code for the teleoperation system, and an initial analysis of fMRI, behavioral, and speech data. We observed distinct neural profiles: speaking to the human agent recruited higher-level frontal regions associated with socio-pragmatic processes more strongly, while listening to the robot recruited sensory areas, including auditory and visual regions, more strongly. Engagement levels and agent types also affected speech and behavioral patterns, offering valuable insights into conversational dynamics in human-human and human-robot interactions.
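As a minimal sketch of working with a dataset structured this way (agents crossed with three engagement conditions), the snippet below compares a per-segment speech measure across conditions and agent types. The file name and column names are assumptions for illustration only; they do not describe the actual NeuroEngage release:

```python
# Hypothetical sketch: summarize a behavioral measure across the three
# engagement conditions and two agent types. File and column names are
# assumed for illustration, not the actual NeuroEngage layout.
import pandas as pd

# One row per conversation segment: participant, agent (human/robot),
# engagement condition, and a speech feature such as speaking time.
df = pd.read_csv("neuroengage_segments.csv")  # hypothetical file

conditions = ["Engaged Communicator", "Active Listener", "Passive Listener"]
summary = (
    df[df["condition"].isin(conditions)]
    .groupby(["agent", "condition"])["speaking_time_s"]
    .agg(["mean", "std", "count"])
)
print(summary)
```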

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
conversation, dataset, engagement, fMRI, human-robot interaction, neuroimaging
National Category
Human Computer Interaction; Robotics and Automation; Computer Sciences; Natural Language Processing; Psychology (Excluding Applied Psychology)
Identifiers
urn:nbn:se:kth:diva-363755 (URN)
10.1109/HRI61500.2025.10974251 (DOI)
2-s2.0-105004876905 (Scopus ID)
Conference
20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, Mar 4-6, 2025
Note

QC 20250527

Available from: 2025-05-21. Created: 2025-05-21. Last updated: 2025-05-27. Bibliographically approved.
Arvidsson, C., Torubarova, E., Abelho Pereira, A. T. & Uddén, J. (2024). Conversational production and comprehension: fMRI-evidence reminiscent of but deviant from the classical Broca-Wernicke model. Cerebral Cortex, 34(3), Article ID bhae073.
Conversational production and comprehension: fMRI-evidence reminiscent of but deviant from the classical Broca-Wernicke model
2024 (English). In: Cerebral Cortex, ISSN 1047-3211, E-ISSN 1460-2199, Vol. 34, no. 3, article id bhae073. Article in journal (Refereed), Published
Abstract [en]

A key question in research on the neurobiology of language is to what extent the language production and comprehension systems share neural infrastructure, but this question has not been addressed in the context of conversation. We utilized a public fMRI dataset in which 24 participants engaged in unscripted conversations with a confederate outside the scanner, via an audio-video link. We provide evidence that the two systems share neural infrastructure in the left-lateralized perisylvian language network but diverge in the level of activation within regions of that network. Activity in the left inferior frontal gyrus was stronger in production than in comprehension, while comprehension showed stronger recruitment of the left anterior middle temporal gyrus and superior temporal sulcus than production. Although our results are reminiscent of the classical Broca-Wernicke model, the anterior (rather than posterior) temporal activation is a notable departure from that model. This is one of the findings that may be a consequence of the conversational setting, another being that conversational production activated what we interpret as higher-level socio-pragmatic processes. In conclusion, we present evidence for partial overlap and functional asymmetry of the neural infrastructure of production and comprehension in the above-mentioned frontal vs. temporal regions during conversation.
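The production-versus-comprehension contrast described here is, in standard fMRI practice, estimated with a general linear model. The sketch below shows what such a contrast looks like in nilearn, under assumed file names, onsets, TR, and smoothing; it is a minimal illustration of the general technique, not the authors' published pipeline:

```python
# GLM sketch of a production > comprehension contrast with nilearn.
# File name, TR, onsets, and smoothing are illustrative assumptions.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Event table: alternating speaking (production) and listening
# (comprehension) segments of a conversation.
events = pd.DataFrame({
    "onset": [0.0, 30.0, 60.0, 90.0],
    "duration": [30.0, 30.0, 30.0, 30.0],
    "trial_type": ["production", "comprehension",
                   "production", "comprehension"],
})

model = FirstLevelModel(t_r=2.0, smoothing_fwhm=6.0)  # assumed TR of 2 s
model = model.fit("sub-01_task-conversation_bold.nii.gz", events=events)

# Positive values: stronger activity during production than comprehension.
z_map = model.compute_contrast("production - comprehension",
                               output_type="z_score")
z_map.to_filename("production_gt_comprehension_zmap.nii.gz")
```

Thresholding the resulting z-map (e.g. with nilearn.glm.threshold_stats_img) would then highlight regions such as the left inferior frontal gyrus discussed above.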

Place, publisher, year, edition, pages
Oxford University Press (OUP), 2024
Keywords
interaction, contextual language processing, LIFG, LMTG, functional asymmetry
National Category
Languages and Literature
Identifiers
urn:nbn:se:kth:diva-351444 (URN)
10.1093/cercor/bhae073 (DOI)
001273703700001 (ISI)
38501383 (PubMedID)
2-s2.0-85188194135 (Scopus ID)
Note

QC 20240815

Available from: 2024-08-15. Created: 2024-08-15. Last updated: 2024-08-15. Bibliographically approved.