Modeling Feedback in Interaction With Conversational Agents—A Review
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0003-0112-6732
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-8579-1790
2022 (English). In: Frontiers in Computer Science, E-ISSN 2624-9898, Vol. 4, article id 744574. Article, review/survey (Refereed). Published.
Abstract [en]

Intelligent agents that interact with humans through conversation (such as a robot, embodied conversational agent, or chatbot) need to receive feedback from the human to make sure that their communicative acts have the intended consequences. At the same time, the human interacting with the agent will also seek feedback, in order to ensure that their own communicative acts have the intended consequences. In this review article, we give an overview of past and current research on how intelligent agents should be able both to give meaningful feedback to humans and to understand feedback given by users. The review covers feedback across different modalities (e.g., speech, head gestures, gaze, and facial expression), different forms of feedback (e.g., backchannels and clarification requests), and models that allow the agent to assess the user's level of understanding and adapt its behavior accordingly. Finally, we analyze some shortcomings of current approaches to modeling feedback and identify important directions for future research.

Place, publisher, year, edition, pages
Frontiers Media SA, 2022. Vol. 4, article id 744574
Keywords [en]
feedback, grounding, spoken dialogue, multimodal signals, human-agent interaction, review
Keywords [sv]
(in English:) feedback, reaction, dialogue, multimodal signals, multimodality, human-agent communication, review
National Category
Human Computer Interaction
Research subject
Human-computer Interaction; Speech and Music Communication
Identifiers
URN: urn:nbn:se:kth:diva-310401
DOI: 10.3389/fcomp.2022.744574
ISI: 000778821100001
Scopus ID: 2-s2.0-85127522996
OAI: oai:DiVA.org:kth-310401
DiVA, id: diva2:1648265
Projects
COIN-SSF
Constructing Explainability (438445824)
Funder
Swedish Foundation for Strategic Research
German Research Foundation (DFG)
Note

QC 20220429

Available from: 2022-03-30. Created: 2022-03-30. Last updated: 2022-06-25. Bibliographically approved.

Open Access in DiVA

fulltext (1129 kB), 474 downloads
File information
File name: FULLTEXT01.pdf
File size: 1129 kB
Checksum (SHA-512):
f20e71d7ab0b3f5674da3f906ac5fa23f87a75763df5d46418fbcd138d4eb79817d02d501a487e33f608bbb6533e9c3d6ffe933dc4644b2a5328a53f507e7a2d
Type: fulltext
Mimetype: application/pdf
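The published SHA-512 checksum can be used to verify that a downloaded copy of the full text is intact. A minimal sketch in Python (the local path is whatever the downloaded file was saved as; here it is assumed to be FULLTEXT01.pdf):

```python
import hashlib

# Expected SHA-512 checksum from the DiVA record (FULLTEXT01.pdf, 1129 kB).
EXPECTED_SHA512 = (
    "f20e71d7ab0b3f5674da3f906ac5fa23f87a75763df5d46418fbcd138d4eb798"
    "17d02d501a487e33f608bbb6533e9c3d6ffe933dc4644b2a5328a53f507e7a2d"
)

def sha512_of_file(path, chunk_size=8192):
    """Hash the file in chunks so a large PDF is not read fully into memory."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (assumes the file was saved locally under this name):
# assert sha512_of_file("FULLTEXT01.pdf") == EXPECTED_SHA512
```

If the digests match, the download is byte-identical to the archived full text.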

Other links

Publisher's full text
Scopus

Authority records

Axelsson, Agnes; Skantze, Gabriel

Search in DiVA

By author/editor
Axelsson, Agnes; Skantze, Gabriel
By organisation
Speech, Music and Hearing, TMH
In the same journal
Frontiers in Computer Science
Human Computer Interaction

Search outside of DiVA

Google
Google Scholar
Total: 474 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.
Total: 1274 hits