kth.se Publications
Identification of Low-engaged Learners in Robot-led Second Language Conversations with Adults
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0003-4532-014X
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0003-4472-4732
Heriot-Watt University, Edinburgh, Midlothian, Scotland.
KTH.
2022 (English). In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522, Vol. 11, no. 2, article id 18. Article in journal (Refereed), Published.
Abstract [en]

The main aim of this study is to investigate whether verbal, vocal, and facial information can be used to identify low-engaged second language learners in robot-led conversation practice. The experiments were performed on voice recordings and video data from 50 conversations in which a robotic head talks with pairs of adult language learners using four different interaction strategies with varying robot-learner focus and initiative. These interaction strategies were found to influence learner activity and engagement. The verbal analysis indicated that learners with low activity rated the robot significantly lower on two of the four scales related to social competence. The acoustic vocal and video-based facial analyses, based on manual annotations or machine learning classification, both showed that learners with low engagement rated the robot's social competencies consistently lower, and in several cases significantly so, and in addition rated the learning effectiveness lower. The agreement between manual and automatic identification of low-engaged learners based on voice recordings or face videos was further found to be adequate for future use. These experiments constitute a first step towards enabling adaptation to learners' activity and engagement through within- and between-strategy changes of the robot's interaction with learners.
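The abstract reports agreement between manual and automatic identification of low-engaged learners. Inter-rater agreement of this kind is commonly quantified with a chance-corrected statistic such as Cohen's kappa; the sketch below is illustrative only, and the label vectors are hypothetical, not data from the study.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical per-learner labels: 1 = low engagement, 0 = otherwise.
manual    = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0]
automatic = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]
print(round(cohens_kappa(manual, automatic), 2))  # → 0.58
```

Values around 0.4-0.6 are conventionally read as moderate agreement, which is the kind of threshold a study would consult when judging whether automatic identification is "adequate for future use".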

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2022. Vol. 11, no. 2, article id 18
Keywords [en]
Robot-assisted language learning, user engagement, speech emotion recognition, facial emotion expressions
National Category
Human Computer Interaction
Identifiers
URN: urn:nbn:se:kth:diva-311036
DOI: 10.1145/3503799
ISI: 000774332200008
Scopus ID: 2-s2.0-85127492914
OAI: oai:DiVA.org:kth-311036
DiVA, id: diva2:1652977
Note

Not a duplicate of DiVA 1612730

QC 20220420

Available from: 2022-04-20. Created: 2022-04-20. Last updated: 2023-01-09. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Engwall, Olov; Cumbal, Ronald

Search in DiVA

By author/editor
Engwall, Olov; Cumbal, Ronald; Ljung, Mikael; Månsson, Linnea
By organisation
Speech, Music and Hearing, TMH; KTH
In the same journal
ACM Transactions on Human-Robot Interaction
Human Computer Interaction
