kth.se Publications
Publications (10 of 155)
Skantze, G. & Irfan, B. (2025). Applying General Turn-Taking Models to Conversational Human-Robot Interaction. In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, March 4-6, 2025 (pp. 859-868). Institute of Electrical and Electronics Engineers (IEEE).
Applying General Turn-Taking Models to Conversational Human-Robot Interaction
2025 (English) In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction, Institute of Electrical and Electronics Engineers (IEEE), 2025, p. 859-868. Conference paper, Published paper (Refereed)
Abstract [en]

Turn-taking is a fundamental aspect of conversation, but current Human-Robot Interaction (HRI) systems often rely on simplistic, silence-based models, leading to unnatural pauses and interruptions. This paper investigates, for the first time, the application of general turn-taking models, specifically TurnGPT and Voice Activity Projection (VAP), to improve conversational dynamics in HRI. These models are trained on human-human dialogue data using self-supervised learning objectives, without requiring domain-specific fine-tuning. We propose methods for using these models in tandem to predict when a robot should begin preparing responses, take turns, and handle potential interruptions. We evaluated the proposed system in a within-subject study against a traditional baseline system, using the Furhat robot with 39 adults in a conversational setting, in combination with a large language model for autonomous response generation. The results show that participants significantly prefer the proposed system, and it significantly reduces response delays and interruptions.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
conversational AI, human-robot interaction, large language model, turn-taking
National Category
Natural Language Processing; Computer Sciences; Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-363767 (URN); 10.1109/HRI61500.2025.10973958 (DOI); 2-s2.0-105004876033 (Scopus ID)
Conference
20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, March 4-6, 2025
Note

Part of ISBN 9798350378931
Irfan, B., Kuoppamäki, S., Hosseini, A. & Skantze, G. (2025). Between reality and delusion: challenges of applying large language models to companion robots for open-domain dialogues with older adults. Autonomous Robots, 49(1), Article ID 9.
Between reality and delusion: challenges of applying large language models to companion robots for open-domain dialogues with older adults
2025 (English) In: Autonomous Robots, ISSN 0929-5593, E-ISSN 1573-7527, Vol. 49, no 1, article id 9. Article in journal (Refereed) Published
Abstract [en]

Throughout our lives, we interact daily in conversations with our friends and family, covering a wide range of topics, known as open-domain dialogue. As we age, these interactions may diminish due to changes in social and personal relationships, leading to loneliness in older adults. Conversational companion robots can alleviate this issue by providing daily social support. Large language models (LLMs) offer flexibility for enabling open-domain dialogue in these robots. However, LLMs are typically trained and evaluated on textual data, while robots introduce additional complexity through multi-modal interactions, which has not been explored in prior studies. Moreover, it is crucial to involve older adults in the development of robots to ensure alignment with their needs and expectations. Correspondingly, using iterative participatory design approaches, this paper exposes the challenges of integrating LLMs into conversational robots, deriving from 34 Swedish-speaking older adults' (one-to-one) interactions with a personalized companion robot, built on the Furhat robot with GPT-3.5. These challenges encompass disruptions in conversations, including frequent interruptions, slow, repetitive, superficial, incoherent, and disengaging responses, language barriers, hallucinations, and outdated information, leading to frustration, confusion, and worry among older adults. Drawing on insights from these challenges, we offer recommendations to enhance the integration of LLMs into conversational robots, encompassing both general suggestions and those tailored to companion robots for older adults.

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
Large language models, Companion robot, Elderly care, Open-domain dialogue, Socially assistive robot, Participatory design
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-361621 (URN); 10.1007/s10514-025-10190-y (DOI); 001440005600001 (); 2-s2.0-86000731912 (Scopus ID)
Irfan, B. & Skantze, G. (2025). Between You and Me: Ethics of Self-Disclosure in Human-Robot Interaction. In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, March 4-6, 2025 (pp. 1357-1362). Institute of Electrical and Electronics Engineers (IEEE).
Between You and Me: Ethics of Self-Disclosure in Human-Robot Interaction
2025 (English) In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction, Institute of Electrical and Electronics Engineers (IEEE), 2025, p. 1357-1362. Conference paper, Published paper (Refereed)
Abstract [en]

As we move toward a future where robots are increasingly part of daily life, the privacy risks associated with interactions, particularly those relying on cloud-based large language models (LLMs), are becoming more pressing. Users may unknowingly share sensitive information in environments such as homes or hospitals. To explore these risks, we conducted a study with 39 native English speakers using a Furhat robot with an integrated LLM. Participants discussed two moral dilemmas: (i) dishonesty, sharing personal stories of justified lying, and (ii) robot disobedience, discussing whether robots should disobey commands. On average, participants disclosed personal stories 45% of the time when asked in both scenarios. The main reason for non-disclosure was difficulty recalling examples quickly (33.3-56%), rather than reluctance to share (7.2-16%). However, most participants reported little discomfort or concern about sharing personal information with the robot, indicating limited awareness of the privacy risks involved in such disclosures.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
ethics, human-robot interaction, large language model, moral dilemmas, privacy, self-disclosure
National Category
Human Computer Interaction; Robotics and automation; Ethics
Identifiers
urn:nbn:se:kth:diva-363756 (URN); 10.1109/HRI61500.2025.10974215 (DOI); 2-s2.0-105004876468 (Scopus ID)
Conference
20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, March 4-6, 2025
Note

Part of ISBN 9798350378931
Kamelabad, A. M., Inoue, E. & Skantze, G. (2025). Comparing Monolingual and Bilingual Social Robots as Conversational Practice Companions in Language Learning. In: Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 20th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Melbourne, Australia, March 4-6, 2025 (pp. 829-838).
Comparing Monolingual and Bilingual Social Robots as Conversational Practice Companions in Language Learning
2025 (English) In: Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction, 2025, p. 829-838. Conference paper, Published paper (Refereed)
Abstract [en]

This study explores the impact of monolingual and bilingual robots in Robot-Assisted Language Learning (RALL) for non-native Swedish learners. In a within-group design, 47 participants interacted with a social robot under two conditions: a monolingual robot that communicated exclusively in Swedish and a bilingual robot capable of switching between Swedish and English. Each participant engaged in multiple role-play scenarios designed to match their language proficiency levels, and their experiences were assessed through surveys and behavioral data. The results show that the bilingual robot was generally favored by participants, leading to a more relaxed, enjoyable experience. Perceived learning improved by the end of the experiment regardless of condition. These findings suggest that incorporating bilingual support in language-learning robots may enhance user engagement and effectiveness, particularly for lower-proficiency learners.

Keywords
robot assisted language learning, RALL, bilingual, monolingual, conversation practice, dialogue systems
National Category
Comparative Language Studies and Linguistics; Robotics and automation; Pedagogy
Research subject
Computer Science; Technology and Learning; Speech and Music Communication; Human-computer Interaction
Identifiers
urn:nbn:se:kth:diva-362158 (URN); 10.1109/HRI61500.2025.10973901 (DOI); 2-s2.0-105004875905 (Scopus ID)
Conference
20th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Melbourne, Australia, March 4-6, 2025
Projects
RALL-TMH; RALL
Note

Part of ISBN 979-8-3503-7893-1
Janssens, R., Pereira, A., Skantze, G., Irfan, B. & Belpaeme, T. (2025). Online Prediction of User Enjoyment in Human-Robot Dialogue with LLMs. In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, March 4-6, 2025 (pp. 1363-1367). Institute of Electrical and Electronics Engineers (IEEE).
Online Prediction of User Enjoyment in Human-Robot Dialogue with LLMs
2025 (English) In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction, Institute of Electrical and Electronics Engineers (IEEE), 2025, p. 1363-1367. Conference paper, Published paper (Refereed)
Abstract [en]

Large Language Models (LLMs) allow social robots to engage in unconstrained open-domain dialogue, but often make mistakes when employed in real-world interactions, requiring adaptation of LLMs to specific conversational contexts. However, LLM adaptation techniques require a feedback signal, ideally for multiple alternative utterances. At the same time, human-robot dialogue data is scarce and research often relies on external annotators. A tool for automatic prediction of user enjoyment in human-robot dialogue is therefore needed. We investigate the possibility of predicting user enjoyment turn-by-turn using an LLM, giving it a proposed robot utterance within the dialogue context, but without access to the user's response. We compare this performance to the system's enjoyment ratings when user responses are available and to assessments by expert human annotators, in addition to self-reported user perceptions. We evaluate the proposed LLM predictor in a human-robot interaction (HRI) dataset with conversation transcripts of 25 older adults' 7-minute dialogues with a companion robot. Our results show that an LLM is capable of predicting user enjoyment, without loss of performance despite the lack of a user response, and even achieving performance similar to that of human expert annotators. Furthermore, results show that the system surpasses expert annotators in its correlation with the user's self-reported perceptions of the conversation. This work presents a tool to remove the reliance on external annotators for enjoyment evaluation and paves the way toward real-time adaptation in human-robot dialogue.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
human-robot interaction, large language model, open-domain dialogue, prediction, user enjoyment
National Category
Natural Language Processing; Computer Sciences; Robotics and automation; Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-363754 (URN); 10.1109/HRI61500.2025.10973944 (DOI); 2-s2.0-105004873166 (Scopus ID)
Conference
20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, March 4-6, 2025
Note

Part of ISBN 9798350378931
Mishra, C., Skantze, G., Hagoort, P. & Verdonschot, R. (2025). Perception of Emotions in Human and Robot Faces: Is the Eye Region Enough? In: Social Robotics - 16th International Conference, ICSR + AI 2024, Proceedings. Paper presented at 16th International Conference on Social Robotics, ICSR + AI 2024, Odense, Denmark, October 23-26, 2024 (pp. 290-303). Springer Nature.
Perception of Emotions in Human and Robot Faces: Is the Eye Region Enough?
2025 (English) In: Social Robotics - 16th International Conference, ICSR + AI 2024, Proceedings, Springer Nature, 2025, p. 290-303. Conference paper, Published paper (Refereed)
Abstract [en]

The increased interest in developing next-gen social robots has raised questions about the factors affecting the perception of robot emotions. This study investigates the impact of robot appearances (human-like, mechanical) and face regions (full-face, eye-region) on human perception of robot emotions. A between-subjects user study (N = 305) was conducted where participants were asked to identify the emotions being displayed in videos of robot faces, as well as a human baseline. Our findings reveal three important insights for effective social robot face design in Human-Robot Interaction (HRI): Firstly, robots equipped with a back-projected, fully animated face – regardless of whether they are more human-like or more mechanical-looking – demonstrate a capacity for emotional expression comparable to that of humans. Secondly, the recognition accuracy of emotional expressions in both humans and robots declines when only the eye region is visible. Lastly, within the constraint of only the eye region being visible, robots with more human-like features significantly enhance emotion recognition.

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
Affective Robots, Design and Human Factors, Emotion Recognition, Emotional Robotics, Human-Robot Interaction, Posture and Facial Expressions
National Category
Robotics and automation; Human Computer Interaction; Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-362500 (URN); 10.1007/978-981-96-3522-1_26 (DOI); 2-s2.0-105002048335 (Scopus ID)
Conference
16th International Conference on Social Robotics, ICSR + AI 2024, Odense, Denmark, October 23-26, 2024
Note

Part of ISBN 9789819635214
Borg, A., Georg, C., Jobs, B., Huss, V., Waldenlind, K., Ruiz, M., . . . Parodis, I. (2025). Virtual Patient Simulations Using Social Robotics Combined With Large Language Models for Clinical Reasoning Training in Medical Education: Mixed Methods Study. Journal of Medical Internet Research, 27, Article ID e63312.
Virtual Patient Simulations Using Social Robotics Combined With Large Language Models for Clinical Reasoning Training in Medical Education: Mixed Methods Study
2025 (English) In: Journal of Medical Internet Research, E-ISSN 1438-8871, Vol. 27, article id e63312. Article in journal (Refereed) Published
Abstract [en]

Background: Virtual patients (VPs) are computer-based simulations of clinical scenarios used in health professions education to address various learning outcomes, including clinical reasoning (CR). CR is a crucial skill for health care practitioners, and its inadequacy can compromise patient safety. Recent advancements in large language models (LLMs) and social robots have introduced new possibilities for enhancing VP interactivity and realism. However, their application in VP simulations has been limited, and no studies have investigated the effectiveness of combining LLMs with social robots for CR training. Objective: The aim of the study is to explore the potential added value of a social robotic VP platform combined with an LLM compared to a conventional computer-based VP modality for CR training of medical students. Methods: A Swedish explorative proof-of-concept study was conducted between May and July 2023, combining quantitative and qualitative methodology. In total, 15 medical students from Karolinska Institutet and an international exchange program completed a VP case in a social robotic platform and a computer-based semilinear platform. Students' self-perceived VP experience focusing on CR training was assessed using a previously developed index, and paired 2-tailed t test was used to compare mean scores (scales from 1 to 5) between the platforms. Moreover, in-depth interviews were conducted with 8 medical students. Results: The social robotic platform was perceived as more authentic (mean 4.5, SD 0.7 vs mean 3.9, SD 0.5; odds ratio [OR] 2.9, 95% CI 0.0-1.0; P=.04) and provided a beneficial overall learning effect (mean 4.4, SD 0.6 versus mean 4.1, SD 0.6; OR 3.7, 95% CI 0.1-0.5; P=.01) compared with the computer-based platform. Qualitative analysis revealed 4 themes, wherein students experienced the social robot as superior to the computer-based platform in training CR, communication, and emotional skills. Limitations related to technical and user-related aspects were identified, and suggestions for improvements included enhanced facial expressions and VP cases simulating multiple personalities. Conclusions: A social robotic platform enhanced by an LLM may provide an authentic and engaging learning experience for medical students in the context of VP simulations for training CR. Beyond its limitations, several aspects of potential improvement were identified for the social robotic platform, lending promise for this technology as a means toward the attainment of learning outcomes within medical education curricula.

Place, publisher, year, edition, pages
JMIR Publications Inc., 2025
Keywords
clinical reasoning, large language models, medical education, medical students, social robotics, sustainable learning, virtual patients
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-361187 (URN); 10.2196/63312 (DOI); 001462239800002 (); 40053778 (PubMedID); 2-s2.0-85219722307 (Scopus ID)
Reimann, M. M., Hindriks, K. V., Kunneman, F. A., Oertel, C., Skantze, G. & Leite, I. (2025). What Can You Say to a Robot? Capability Communication Leads to More Natural Conversations. In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, March 4-6, 2025 (pp. 708-716). Institute of Electrical and Electronics Engineers (IEEE).
What Can You Say to a Robot? Capability Communication Leads to More Natural Conversations
2025 (English) In: HRI 2025 - Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction, Institute of Electrical and Electronics Engineers (IEEE), 2025, p. 708-716. Conference paper, Published paper (Refereed)
Abstract [en]

When encountering a robot in the wild, it is not inherently clear to human users what the robot's capabilities are. When encountering misunderstandings or problems in spoken interaction, robots often just apologize and move on, without additional effort to make sure the user understands what happened. We set out to compare the effect of two speech-based capability communication strategies (proactive, reactive) against a robot without such a strategy, with regard to the users' ratings of and their behavior during the interaction. For this, we conducted an in-person user study with 120 participants who had three speech-based interactions with a social robot in a restaurant setting. Our results suggest that users preferred the robot communicating its capabilities proactively and adjusted their behavior in those interactions, using a more conversational interaction style while also enjoying the interaction more.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
dialogue management, Human-robot-interaction, spoken interaction, user study
National Category
Human Computer Interaction; Robotics and automation; Other Engineering and Technologies
Identifiers
urn:nbn:se:kth:diva-363764 (URN); 10.1109/HRI61500.2025.10974151 (DOI); 2-s2.0-105004876438 (Scopus ID)
Conference
20th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2025, Melbourne, Australia, March 4-6, 2025
Note

Part of ISBN 9798350378931
Blomsma, P., Vaitonyté, J., Skantze, G. & Swerts, M. (2024). Backchannel behavior is idiosyncratic. Language and Cognition, 16(4), 1158-1181.
Backchannel behavior is idiosyncratic
2024 (English) In: Language and Cognition, ISSN 1866-9808, Vol. 16, no 4, p. 1158-1181. Article in journal (Refereed) Published
Abstract [en]

In spoken conversations, speakers and their addressees constantly seek and provide different forms of audiovisual feedback, also known as backchannels, which include nodding, vocalizations and facial expressions. It has previously been shown that addressees backchannel at specific points during an interaction, namely after a speaker provided a cue to elicit feedback from the addressee. However, addressees may differ in the frequency and type of feedback that they provide, and likewise, speakers may vary the type of cues they generate to signal the backchannel opportunity points (BOPs). Research on the extent to which backchanneling is idiosyncratic is scant. In this article, we quantify and analyze the variability in feedback behavior of 14 addressees who all interacted with the same speaker stimulus. We conducted this research by means of a previously developed experimental paradigm that generates spontaneous interactions in a controlled manner. Our results show that (1) backchanneling behavior varies between listeners (some addressees are more active than others) and (2) backchanneling behavior varies between BOPs (some points trigger more responses than others). We discuss the relevance of these results for models of human–human and human–machine interactions.

Place, publisher, year, edition, pages
Cambridge University Press (CUP), 2024
Keywords
backchannels, consensus sampling, head nod, listener feedback, multimodal, O-Cam paradigm
National Category
Natural Language Processing; General Language Studies and Linguistics
Identifiers
urn:nbn:se:kth:diva-359142 (URN); 10.1017/langcog.2024.1 (DOI); 001169622300001 (); 2-s2.0-85185760936 (Scopus ID)
Kamelabad, A. M., Engwall, O. & Skantze, G. (2024). Conformity and Trust in Multi-party vs. Individual Human-Robot Interaction. In: Rachael Jack, Mathieu Chollet, Ruth Aylett, Timothy Bickmore, Stacy Marsella, Gale Lucas (Eds.), Proceedings of the 24th ACM International Conference on Intelligent Virtual Agents. Paper presented at IVA '24: ACM International Conference on Intelligent Virtual Agents, Glasgow, United Kingdom, September 16-19, 2024. New York, NY, United States: Association for Computing Machinery (ACM), Article ID 4.
Conformity and Trust in Multi-party vs. Individual Human-Robot Interaction
2024 (English) In: Proceedings of the 24th ACM International Conference on Intelligent Virtual Agents / [ed] Rachael Jack, Mathieu Chollet, Ruth Aylett, Timothy Bickmore, Stacy Marsella, Gale Lucas, New York, NY, United States: Association for Computing Machinery (ACM), 2024, article id 4. Conference paper, Published paper (Refereed)
Abstract [en]

In this study, we explored how conformity and trust vary in adolescent students' interactions with a social robot. Specifically, we compared how this was influenced by whether the participants had individual or multi-party interaction with the robot and whether the robot was portrayed as an adult or a child through appearance and voice. Our experiment involved 75 Swedish middle school students participating in a card sorting game with the Furhat robot, where the objective was to discuss and reach an agreement on the card sequence. The data analysis focused firstly on the participants' willingness to rearrange cards following the robot's suggestions and secondly on their post-session subjective trust in the robot's advice. Results indicated that participants who interacted with the robot individually were more likely to conform to its suggestions than those interacting with it together with a peer. Participants interacting alone with the robot also showed higher post-session trust levels than those in multi-party settings, indicating that group size impacts perceptions of robot trustworthiness. However, the robot's perceived age did not affect the level of conformity. Exploratory analyses also showed that mutual understanding was lower in the multi-party setting, while the child robot condition improved user experience, highlighting the complex influence of group dynamics and robot portrayal on human-robot interactions in education.

Place, publisher, year, edition, pages
New York, NY, United States: Association for Computing Machinery (ACM), 2024
Keywords
Human-Robot Interaction, Conformity, Influential Agent, Multiparty Interaction, Child-robot Interaction, Education, Trust
National Category
Computer Sciences; Human Computer Interaction; Languages and Literature; Sociology (Excluding Social Work, Social Anthropology, Demography and Criminology)
Research subject
Computer Science; Education and Communication in the Technological Sciences
Identifiers
urn:nbn:se:kth:diva-358521 (URN); 10.1145/3652988.3673954 (DOI); 001441957400004 (); 2-s2.0-85215536524 (Scopus ID)
Conference
IVA '24: ACM International Conference on Intelligent Virtual Agents, Glasgow, United Kingdom, September 16-19, 2024
Projects
tmh_rall_conv; RALL; e-ladda; Early Language Development in the Digital Age (e-LADDA)
Funder
EU, Horizon 2020, 857897
Note

Part of ISBN 9798400706257
Identifiers
ORCID iD: orcid.org/0000-0002-8579-1790