Estimating Uncertainty in Task Oriented Dialogue
Kontogiorgos, Dimosthenis. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-8874-6629
Abelho Pereira, André Tiago. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Speech, Music and Hearing, TMH.
Gustafson, Joakim. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-0397-6442
2019 (English). In: ICMI 2019 - Proceedings of the 2019 International Conference on Multimodal Interaction / [ed] Wen Gao, Helen Mei Ling Meng, Matthew Turk, Susan R. Fussell. ACM Digital Library, 2019, p. 414-418. Conference paper, published paper (refereed).
Abstract [en]

Situated multimodal systems that instruct humans need to handle user uncertainties, as expressed in behaviour, and plan their actions accordingly. Speakers' decision to reformulate or repair previous utterances depends greatly on the listeners' signals of uncertainty. In this paper, we estimate uncertainty in a situated guided task, as conveyed in non-verbal cues expressed by the listener, and predict whether the speaker will reformulate their utterance. We use a corpus where people instruct others on how to assemble furniture, and extract their multimodal features. While uncertainty is in some cases verbally expressed, most instances are expressed non-verbally, which indicates the importance of multimodal approaches. In this work, we present a model for uncertainty estimation. Our findings indicate that uncertainty estimation from non-verbal cues works well, and can exceed human annotator performance when verbal features cannot be perceived.

Place, publisher, year, edition, pages
ACM Digital Library, 2019. p. 414-418
Keywords [en]
situated interaction, dialogue and discourse, grounding
National Category
Human Computer Interaction
Identifiers
URN: urn:nbn:se:kth:diva-261628
DOI: 10.1145/3340555.3353722
ISI: 000518657800050
Scopus ID: 2-s2.0-85074940956
OAI: oai:DiVA.org:kth-261628
DiVA id: diva2:1359302
Conference
21st ACM International Conference on Multimodal Interaction, Suzhou, Jiangsu, China, October 14-18, 2019
Note

QC 20191209. QC 20200214

Part of ISBN 9781450368605

Available from: 2019-10-08. Created: 2019-10-08. Last updated: 2024-10-25. Bibliographically approved.
In thesis
1. Mutual Understanding in Situated Interactions with Conversational User Interfaces: Theory, Studies, and Computation
2022 (English)Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This dissertation presents advances in HCI through a series of studies focusing on task-oriented interactions between humans and between humans and machines. Central is the notion of mutual understanding, known in psycholinguistics as grounding: in particular, how people establish understanding in conversations and what interactional phenomena are present in that process. Addressing the gap in computational models of understanding, interactions in this dissertation are observed through multisensory input and evaluated with statistical and machine-learning models. As it becomes apparent, miscommunication is ordinary in human conversations, and embodied computer interfaces interacting with humans are therefore subject to a large number of conversational failures. Investigating how these interfaces can evaluate human responses to distinguish whether spoken utterances are understood is one of the central contributions of this thesis.

The first papers (Papers A and B) included in this dissertation describe studies on how humans establish understanding incrementally and how they co-produce utterances to resolve misunderstandings in joint-construction tasks. Utilising the same interaction paradigm from these human-human settings, the remaining papers describe collaborative interactions between humans and machines with two central manipulations: embodiment (Papers C, D, E, and F) and conversational failures (Papers D, E, F, and G). The methods used investigate whether embodiment affects grounding behaviours among speakers and which verbal and non-verbal channels are utilised in response and recovery to miscommunication. For application to robotics and conversational user interfaces, failure detection systems are developed that predict user uncertainty in real time, paving the way for new multimodal computer interfaces that are aware of dialogue breakdown and system failures.

Through the lens of Theory, Studies, and Computation, a comprehensive overview is presented of how mutual understanding has been observed in interactions between humans and between humans and machines. A summary of the literature on mutual understanding from psycholinguistics and human-computer interaction perspectives is reported. An overview is also presented of how prior knowledge in mutual understanding has been and can be observed through experimentation and empirical studies, along with perspectives on how knowledge acquired through observation is put into practice through the analysis and development of computational models. Derived from the literature and empirical observations, the central thesis of this dissertation is that embodiment and mutual understanding are intertwined in task-oriented interactions, both in successful communication and in situations of miscommunication.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2022. p. xxi, 139
Series
TRITA-EECS-AVL ; 2022-10
Keywords
human-computer interaction, social robots, smart-speakers, multimodal behaviours, social signal processing, common ground, dialogue and discourse, joint-construction tasks, embodiment, conversational failures
National Category
Human Computer Interaction
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-308927
ISBN: 978-91-8040-137-1
Public defence
2022-03-11, https://kth-se.zoom.us/j/62813774919, Kollegiesalen, Brinellvägen 8, Stockholm, 14:00 (English)
Opponent
Supervisors
Note

QC 20220216

Available from: 2022-02-16. Created: 2022-02-15. Last updated: 2022-06-25. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text · Scopus · Conference website · Conference proceedings

Authority records

Kontogiorgos, Dimosthenis; Abelho Pereira, André Tiago; Gustafson, Joakim

Search in DiVA

By author/editor
Kontogiorgos, Dimosthenis; Abelho Pereira, André Tiago; Gustafson, Joakim
By organisation
Speech, Music and Hearing, TMH
Human Computer Interaction

Search outside of DiVA

Google · Google Scholar