“You don’t understand me!”: Comparing ASR Results for L1 and L2 Speakers of Swedish
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0003-4472-4732
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-8773-9216
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH, Speech Communication and Technology. ORCID iD: 0000-0003-4532-014X
2021 (English) In: Proceedings Interspeech 2021, International Speech Communication Association, 2021, p. 96-100. Conference paper, Published paper (Refereed)
Abstract [en]

The performance of Automatic Speech Recognition (ASR) systems has constantly increased in state-of-the-art development. However, performance tends to decrease considerably in more challenging conditions (e.g., background noise, multiple-speaker social conversations) and with more atypical speakers (e.g., children, non-native speakers or people with speech disorders), which signifies that general improvements do not necessarily transfer to applications that rely on ASR, e.g., educational software for younger students or language learners. In this study, we focus on the gap in performance between recognition results for native and non-native, read and spontaneous, Swedish utterances transcribed by different ASR services. We compare the recognition results using Word Error Rate and analyze the linguistic factors that may generate the observed transcription errors.
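The comparison metric named in the abstract, Word Error Rate (WER), is the word-level Levenshtein edit distance (substitutions, insertions, deletions) between the ASR hypothesis and the reference transcript, normalized by the number of reference words. A minimal sketch of the standard computation (not the authors' implementation, which is not included in this record):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance / number of reference words.

    Assumes a non-empty reference and whitespace tokenization;
    production scoring tools also normalize case and punctuation.
    """
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j]: minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # match/substitution
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, a hypothesis that drops one word of a four-word reference scores 0.25; note that insertions can push WER above 1.0.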

Place, publisher, year, edition, pages
International Speech Communication Association, 2021. p. 96-100
Series
Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, ISSN 2308-457X
Keywords [en]
automatic speech recognition, non-native speech, language learning
National Category
Other Engineering and Technologies
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-313355
DOI: 10.21437/Interspeech.2021-2140
ISI: 000841879504109
Scopus ID: 2-s2.0-85119499427
OAI: oai:DiVA.org:kth-313355
DiVA id: diva2:1663482
Conference
22nd Annual Conference of the International Speech Communication Association, INTERSPEECH 2021, Brno, 30 August to 3 September 2021
Projects
Collaborative Robot Assisted Language Learning
Note

QC 20221108

Part of proceedings: ISBN 978-171383690-2

Available from: 2022-06-02. Created: 2022-06-02. Last updated: 2025-02-18. Bibliographically approved.
In thesis
1. Robots Beyond Borders: The Role of Social Robots in Spoken Second Language Practice
2024 (English) Doctoral thesis, comprehensive summary (Other academic)
Alternative title[sv]
Robotar bortom gränser : Sociala robotars roll i talat andraspråk
Abstract [en]

This thesis investigates how social robots can support adult second language (L2) learners in improving conversational skills. It recognizes the challenges inherent in adult L2 learning, including increased cognitive demands and the unique motivations driving adult education. While social robots hold potential for natural interactions and language education, research into conversational skill practice with adult learners remains underexplored. Thus, the thesis contributes to understanding these conversational dynamics, enhancing speaking practice, and examining cultural perspectives in this context.

To begin, this thesis investigates robot-led conversations with L2 learners, examining how learners respond to moments of uncertainty. The research reveals that when faced with uncertainty, learners frequently seek clarification, yet many remain unresponsive. As a result, effective strategies are required from robot conversational partners to address this challenge. These interactions are then used to evaluate the performance of off-the-shelf Automatic Speech Recognition (ASR) systems. The assessment highlights that speech recognition for L2 speakers is not as effective as for L1 speakers, with performance deteriorating for both groups during social conversations. Addressing these challenges is imperative for the successful integration of robots in conversational practice with L2 learners.

The thesis then explores the potential advantages of employing social robots in collaborative learning environments with multi-party interactions. It delves into strategies for improving speaking practice, including the use of non-verbal behaviors to encourage learners to speak. For instance, a robot's adaptive gazing behavior is used to effectively balance speaking contributions between L1 and L2 pairs of participants. Moreover, an adaptive use of encouraging backchannels significantly increases the speaking time of L2 learners.

Finally, the thesis highlights the importance of further research on cultural aspects in human-robot interactions. One study reveals distinct responses among various socio-cultural groups in interaction between L1 and L2 participants. For example, factors such as gender, age, extroversion, and familiarity with robots influence conversational engagement of L2 speakers. Additionally, another study investigates preconceptions related to the appearance and accents of nationality-encoded (virtual and physical) social robots. The results indicate that initial perceptions may lead to negative preconceptions, but that these perceptions diminish after actual interactions.

Despite technical limitations, social robots provide distinct benefits in supporting educational endeavors. This thesis emphasizes the potential of social robots as effective facilitators of spoken language practice for adult learners, advocating for continued exploration at the intersection of language education, human-robot interaction, and technology.

Abstract [sv]

This thesis investigates how social robots can support adult second language learners in improving their Swedish conversational skills. Second language learning for adults, particularly in a migration context, is more complex than for children, partly because the conditions for language learning deteriorate with age and the motivations are often different. Social robots hold great potential in language education for practicing natural conversation, but little research has yet been conducted on how robots can practice conversation with adult learners. The thesis therefore contributes to understanding conversations between second language learners and robots, to improving these conversational exercises, and to examining how cultural factors influence the interaction.

To begin, the thesis investigates how second language learners react when they become puzzled or uncertain in robot-led conversation exercises. The results show that learners often try to get the robot to provide clarification when they are uncertain, but that they sometimes simply do not respond at all, which means the robot must be able to handle such situations. The conversations between second language learners and a robot have also been used to examine how well leading speech recognition systems can interpret what second language speakers say. The systems have considerably greater difficulty recognizing second language speakers than speakers with a Swedish background, and they also struggle to interpret both native Swedish speakers and second language learners in freer social conversations, which must be addressed when robots are to be used in conversation exercises with second language learners.

The thesis then examines strategies for encouraging second language learners to speak more and for distributing speaking turns more evenly in three-party exercises where two people converse with the robot. The strategies involve modifying how the robot gazes at the two participants or gives non-verbal feedback (backchannels) to signal understanding of and interest in what the learners say.

Finally, the thesis highlights the importance of further research on cultural aspects of human-robot interaction. One study shows that factors such as gender, age, prior experience with robots, and how extroverted the learner is affect both how much different individuals speak and how they respond to the robot's attempts to encourage them to speak more through non-verbal signals.

A second study examines whether and how preconceptions related to appearance and accent affect how people perceive (virtual and physical) social robots that have been given attributes (voice and face) associated with different national backgrounds. The results show that people's first impressions of a culturally marked robot reflect preconceptions, but that these perceptions carry far less weight once people have actually interacted with the robot in a realistic setting.

A main conclusion of the thesis is that social robots, despite remaining technical limitations, offer clear benefits that can be leveraged in education. Specifically, the thesis emphasizes the potential of social robots to lead conversation exercises for adult second language learners and advocates continued research at the intersection of language education, human-robot interaction, and technology.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2024. p. 91
Series
TRITA-EECS-AVL ; 2024:23
Keywords
Conversations, gaze, backchannels, multi-party, accent, culture, Samtal, blick, återkoppling, gruppdynamik, brytning, kultur
National Category
Robotics and Automation; Natural Language Processing
Research subject
Speech and Music Communication
Identifiers
URN: urn:nbn:se:kth:diva-343863
ISBN: 978-91-8040-858-5
Public defence
2024-03-22, https://kth-se.zoom.us/j/65591848998, F3, Lindstedtsvägen 26, Stockholm, 10:00 (English)
Note

QC 20240226

Available from: 2024-02-26. Created: 2024-02-26. Last updated: 2025-02-05. Bibliographically approved.

Open Access in DiVA

fulltext (182 kB), 482 downloads
File information
File name: FULLTEXT01.pdf
File size: 182 kB
Checksum (SHA-512): 5b02108087b526d4e7b2144fec7c5cfa9680ea523961891decf6c3ebb6ff7b112b7e530326e6e9a7d310dfe08d0245f3f325f2afa8963e0989e6e05fea5b59c3
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
Scopus

Authority records

Cumbal, Ronald; Moell, Birger; Águas Lopes, José David; Engwall, Olov

Total: 483 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.
