Mutual Understanding in Situated Interactions with Conversational User Interfaces: Theory, Studies, and Computation
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH, Speech Communication and Technology (Speech, Music, and Hearing). ORCID iD: 0000-0002-8874-6629
2022 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This dissertation presents advances in human-computer interaction (HCI) through a series of studies of task-oriented interactions, both between humans and between humans and machines. Central is the notion of mutual understanding, known in psycholinguistics as grounding: how people establish understanding in conversation, and which interactional phenomena arise in that process. Addressing the gap in computational models of understanding, the interactions in this dissertation are observed through multisensory input and evaluated with statistical and machine-learning models. As becomes apparent, miscommunication is ordinary in human conversation, and embodied computer interfaces interacting with humans are therefore subject to a large number of conversational failures. Investigating how these interfaces can evaluate human responses to determine whether spoken utterances were understood is one of the central contributions of this thesis.

The first papers included in this dissertation (Papers A and B) describe studies on how humans establish understanding incrementally and how they co-produce utterances to resolve misunderstandings in joint-construction tasks. Using the same interaction paradigm from these human-human settings, the remaining papers describe collaborative interactions between humans and machines with two central manipulations: embodiment (Papers C, D, E, and F) and conversational failures (Papers D, E, F, and G). The methods investigate whether embodiment affects grounding behaviours among speakers, and which verbal and non-verbal channels are used in response and recovery to miscommunication. For applications in robotics and conversational user interfaces, failure-detection systems are developed that predict user uncertainty in real time, paving the way for new multimodal computer interfaces that are aware of dialogue breakdown and system failures.

Through the lens of Theory, Studies, and Computation, a comprehensive overview is presented of how mutual understanding has been observed in interactions between humans and between humans and machines. The literature on mutual understanding is summarised from psycholinguistic and human-computer interaction perspectives. An overview is also given of how prior knowledge of mutual understanding has been, and can be, obtained through experimentation and empirical studies, along with perspectives on how knowledge acquired through observation is put into practice in the analysis and development of computational models. Derived from the literature and from empirical observations, the central thesis of this dissertation is that embodiment and mutual understanding are intertwined in task-oriented interactions, both in successful communication and in situations of miscommunication.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2022, p. xxi, 139
Series
TRITA-EECS-AVL ; 2022-10
Keywords [en]
human-computer interaction, social robots, smart-speakers, multimodal behaviours, social signal processing, common ground, dialogue and discourse, joint-construction tasks, embodiment, conversational failures
National Category
Human Computer Interaction
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-308927
ISBN: 978-91-8040-137-1 (print)
OAI: oai:DiVA.org:kth-308927
DiVA, id: diva2:1638035
Public defence
2022-03-11, https://kth-se.zoom.us/j/62813774919, Kollegiesalen, Brinellvägen 8, Stockholm, 14:00 (English)
Note

QC 20220216

Available from: 2022-02-16 Created: 2022-02-15 Last updated: 2022-06-25. Bibliographically approved
List of papers
1. Estimating Uncertainty in Task Oriented Dialogue
2019 (English). In: ICMI 2019 - Proceedings of the 2019 International Conference on Multimodal Interaction / [ed] Wen Gao, Helen Mei Ling Meng, Matthew Turk, Susan R. Fussell, ACM Digital Library, 2019, p. 414-418. Conference paper, Published paper (Refereed)
Abstract [en]

Situated multimodal systems that instruct humans need to handle user uncertainties, as expressed in behaviour, and plan their actions accordingly. Speakers' decisions to reformulate or repair previous utterances depend greatly on the listeners' signals of uncertainty. In this paper, we estimate uncertainty in a situated guided task, leveraging non-verbal cues expressed by the listener, and predict whether the speaker will reformulate their utterance. We use a corpus in which people instruct how to assemble furniture, and extract their multimodal features. While uncertainty is in some cases verbally expressed, most instances are expressed non-verbally, which indicates the importance of multimodal approaches. In this work, we present a model for uncertainty estimation. Our findings indicate that uncertainty estimation from non-verbal cues works well, and can exceed human annotator performance when verbal features cannot be perceived.
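The classification step described above can be illustrated with a minimal sketch. This is not the paper's actual pipeline: the data are synthetic and the feature names are hypothetical placeholders, showing only the general shape of estimating listener uncertainty from non-verbal cues with a scikit-learn classifier.

```python
# Hypothetical sketch of uncertainty estimation from non-verbal cues;
# synthetic data and placeholder features, not the study's real pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 250

# Toy non-verbal features per listener response, e.g. gaze aversion,
# head-movement energy, response latency (invented for illustration).
X_nonverbal = rng.normal(size=(n, 3))

# Toy binary label: 1 = listener judged uncertain. Signal is injected
# so the classifier has something to learn.
y = (X_nonverbal[:, 0] + 0.5 * X_nonverbal[:, 2]
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = LogisticRegression()
scores = cross_val_score(clf, X_nonverbal, y, cv=5)  # 5-fold accuracy
mean_acc = scores.mean()
```

In the real setting, each row would be a multimodal feature vector extracted from the corpus, and the labels human annotations of uncertainty.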

Place, publisher, year, edition, pages
ACM Digital Library, 2019
Keywords
situated interaction, dialogue and discourse, grounding
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-261628 (URN)
10.1145/3340555.3353722 (DOI)
000518657800050 ()
2-s2.0-85074940956 (Scopus ID)
Conference
21st ACM International Conference on Multimodal Interaction, Suzhou, Jiangsu, China. October 14-18, 2019
Note

QC 20191209. QC 20200214

Part of ISBN 9781450368605

Available from: 2019-10-08 Created: 2019-10-08 Last updated: 2024-10-25. Bibliographically approved
2. Measuring Collaboration Load With Pupillary Responses-Implications for the Design of Instructions in Task-Oriented HRI
2021 (English). In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 12, article id 623657. Article in journal (Refereed). Published
Abstract [en]

In face-to-face interaction, speakers incrementally establish common ground, the mutual belief of understanding. Instead of constructing "one-shot" complete utterances, speakers tend to package pieces of information in smaller fragments (what Clark calls "installments"). The aim of this paper was to investigate how speakers' fragmented construction of utterances affects the cognitive load of the conversational partners during utterance production and comprehension. In a collaborative furniture assembly, participants instructed each other how to build an IKEA stool. Pupil diameter was measured as an index of effort and cognitive processing in the collaborative task. Pupillometry data and eye-gaze behaviour indicated that speakers required more cognitive resources to construct fragmented rather than non-fragmented utterances. Such construction of utterances by audience design was associated with higher cognitive load for speakers. We also found that listeners' cognitive load decreased with each new speaker utterance, suggesting that speakers' efforts in the fragmented construction of utterances successfully resolved ambiguities. The results indicated that speaking in fragments is beneficial for minimising collaboration load; however, adapting to listeners is a demanding task. We discuss implications for future empirical research on the design of task-oriented human-robot interactions, and how assistive social robots may benefit from producing fragmented instructions.
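As a small illustration of the pupillometry methodology mentioned above, a common preprocessing step is subtractive baseline correction: the mean pupil diameter in a pre-stimulus window is subtracted from the trace so that responses can be compared across events. The data, sampling rate, and window length below are invented; this is not the paper's analysis code.

```python
# Hedged sketch of subtractive baseline correction for a pupil trace;
# synthetic data, invented window length.
import numpy as np

def baseline_correct(trace, baseline_samples=50):
    """Subtract the mean of the pre-stimulus baseline window."""
    return trace - np.mean(trace[:baseline_samples])

rng = np.random.default_rng(1)

# Toy pupil-diameter trace in mm, nominally 50 Hz: 1 s of baseline
# followed by 2 s of response with a simulated 0.3 mm dilation.
trace = 3.0 + 0.02 * rng.normal(size=150)
trace[50:] += 0.3

corrected = baseline_correct(trace)  # values now relative to baseline
```

After correction, the dilation in the response window can be read directly as a change from baseline rather than an absolute diameter.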

Place, publisher, year, edition, pages
Frontiers Media SA, 2021
Keywords
social signal processing, pupillometry, dialogue and discourse, collaboration, common ground, least-collaborative-effort, situated interaction, referential communication
National Category
General Language Studies and Linguistics
Identifiers
urn:nbn:se:kth:diva-299683 (URN)
10.3389/fpsyg.2021.623657 (DOI)
000680354600001 ()
34354623 (PubMedID)
2-s2.0-85111926686 (Scopus ID)
Note

QC 20210818

Available from: 2021-08-18 Created: 2021-08-18 Last updated: 2022-06-25. Bibliographically approved
3. The effects of anthropomorphism and non-verbal social behaviour in virtual assistants
2019 (English). In: IVA 2019 - Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, Association for Computing Machinery (ACM), 2019, p. 133-140. Conference paper, Published paper (Refereed)
Abstract [en]

The adoption of virtual assistants is growing at a rapid pace. However, these assistants are not optimised to simulate key social aspects of human conversational environments. Humans are intellectually biased toward social activity when facing anthropomorphic agents or when presented with subtle social cues. In this paper, we test whether humans respond in the same way to assistants in guided tasks when these take different forms of embodiment and social behaviour. In a within-subject study (N=30), we asked subjects to engage in dialogue with a smart speaker and a social robot. We observed shifts in interactive behaviour, as shown in behavioural and subjective measures. Our findings indicate that it is not always favourable for agents to be anthropomorphised or to communicate with non-verbal cues. We found a trade-off between task performance and perceived sociability when controlling for anthropomorphism and social behaviour.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2019
Keywords
Conversational artificial intelligence, Empirical studies, Human-computer interaction, Smart speakers, Social robots
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-262610 (URN)
10.1145/3308532.3329466 (DOI)
000556671900030 ()
2-s2.0-85069732636 (Scopus ID)
Conference
19th ACM International Conference on Intelligent Virtual Agents, IVA 2019; Paris; France; 2 July 2019 through 5 July 2019
Note

QC 20191024

Part of ISBN 9781450366724

Available from: 2019-10-24 Created: 2019-10-24 Last updated: 2024-10-23. Bibliographically approved
4. Embodiment Effects in Interactions with Failing Robots
2020 (English). In: CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, ACM Digital Library, 2020. Conference paper, Published paper (Refereed)
Abstract [en]

The increasing use of robots in real-world applications will inevitably cause users to encounter more failures in interactions. While there is a longstanding effort to bring human-likeness to robots, how robot embodiment affects users' perception of failures remains largely unexplored. In this paper, we extend prior work on robot failures by assessing the impact that embodiment and failure severity have on people's behaviours and their perception of robots. Our findings show that with a smart-speaker embodiment, failures negatively affect users' intention to interact frequently with the device, but not with a human-like robot embodiment. Additionally, users rate the human-like robot significantly higher in perceived intelligence and social presence. Our results further suggest that in higher-severity situations, human-likeness is distracting and detrimental to the interaction. Drawing on quantitative findings, we discuss benefits and drawbacks of embodiment in robot failures that occur in guided tasks.

Place, publisher, year, edition, pages
ACM Digital Library, 2020
National Category
Other Engineering and Technologies
Identifiers
urn:nbn:se:kth:diva-267232 (URN)
10.1145/3313831.3376372 (DOI)
000695438100045 ()
2-s2.0-85081988472 (Scopus ID)
Conference
SIGCHI Conference on Human Factors in Computing Systems, CHI ’20, April 25–30, 2020, Honolulu, HI, USA
Note

QC 20211011

Available from: 2020-02-04 Created: 2020-02-04 Last updated: 2025-02-18. Bibliographically approved
5. Grounding behaviours with conversational interfaces: effects of embodiment and failures
2021 (English). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 15, no 2, p. 239-254. Article in journal (Refereed). Published
Abstract [en]

Conversational interfaces that interact with humans need to continuously establish, maintain and repair common ground in task-oriented dialogues. Uncertainty, repairs and acknowledgements are expressed in user behaviour through the conversational partners' continuous efforts to maintain mutual understanding. Users change their behaviour when interacting with systems in different forms of embodiment, which affects the ability of these interfaces to observe users' recurrent social signals. Additionally, humans are intellectually biased towards social activity when facing anthropomorphic agents or when presented with subtle social cues. This paper presents two studies examining how humans interact in a referential communication task with wizarded interfaces in different forms of embodiment. In study 1 (N = 30), we test whether humans respond in the same way to agents in different forms of embodiment and social behaviour. In study 2 (N = 44), we replicate the same task and agents but introduce conversational failures that disrupt the process of grounding. Findings indicate that it is not always favourable for agents to be anthropomorphised or to communicate with non-verbal cues, as human grounding behaviours change when embodiment and failures are manipulated.

Place, publisher, year, edition, pages
Springer Nature, 2021
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-295461 (URN)
10.1007/s12193-021-00366-y (DOI)
000632299500001 ()
2-s2.0-85103164370 (Scopus ID)
Note

QC 20250331

Available from: 2021-05-20 Created: 2021-05-20 Last updated: 2025-03-31. Bibliographically approved
6. Behavioural Responses to Robot Conversational Failures
2020 (English). In: HRI '20: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, ACM Digital Library, 2020. Conference paper, Published paper (Refereed)
Abstract [en]

Humans and robots will increasingly collaborate in domestic environments, which will cause users to encounter more failures in interactions. Robots should be able to infer conversational failures by detecting human users' behavioural and social signals. In this paper, we study and analyse these behavioural cues in response to robot conversational failures. Using a guided-task corpus, in which robot embodiment and time pressure are manipulated, we ask human annotators to estimate whether user affective states differ during various types of robot failures. We also train a random forest classifier to detect whether a robot failure has occurred and compare results to human annotator benchmarks. Our findings show that human-like robots augment users' reactions to failures, as shown in users' visual attention, in comparison to non-human-like smart-speaker embodiments. The results further suggest that speech behaviours are utilised more in responses to failures when non-human-like designs are present. This is particularly important for robot failure-detection mechanisms that may need to consider the robot's physical design in their failure-detection models.
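The abstract above names a random forest classifier over behavioural cues. A minimal, hypothetical sketch of that kind of detector, on synthetic data with invented feature names (not the study's actual model or feature set), might look like:

```python
# Hedged sketch of a random-forest failure detector; synthetic data and
# hypothetical features, not the study's actual model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 300

# Toy per-turn behavioural cues, e.g. gaze-shift rate, pause duration,
# pitch variance (invented names for illustration).
X = rng.normal(size=(n, 3))

# Toy label: 1 = a robot failure occurred in this turn; signal is
# injected on the first two features so detection is learnable.
y = (0.8 * X[:, 0] - 0.6 * X[:, 1]
     + rng.normal(scale=0.4, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

acc = accuracy_score(y_te, clf.predict(X_te))
importances = clf.feature_importances_  # which cues drive detection
```

Inspecting `feature_importances_` is one way such a model can suggest which behavioural channels carry the failure signal, analogous to the feature analyses reported above.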

Place, publisher, year, edition, pages
ACM Digital Library, 2020
National Category
Other Engineering and Technologies
Identifiers
urn:nbn:se:kth:diva-267231 (URN)
10.1145/3319502.3374782 (DOI)
000570011000007 ()
2-s2.0-85082009759 (Scopus ID)
Conference
International Conference on Human Robot Interaction (HRI), HRI ’20, March 23–26, 2020, Cambridge, United Kingdom
Note

QC 20200214

Available from: 2020-02-04 Created: 2020-02-04 Last updated: 2025-02-18. Bibliographically approved
7. A Systematic Cross-Corpus Analysis of Human Reactions to Robot Conversational Failures
2021 (English). In: ICMI 2021 - Proceedings of the 2021 International Conference on Multimodal Interaction, Association for Computing Machinery (ACM), 2021, p. 112-120. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we analyze multimodal behavioral responses to robot failures across different tasks. Two multimodal datasets are examined in which humans interact with guided-task robots in task-oriented dialogues. In both datasets, the robots simulated failures of conversational breakdown and miscommunication typically observed in human-robot interactions. We closely examine human reactions to these failures, looking at facial and acoustic features. Our analyses identify the behavioral features significant for automatic detection of such failures in interaction. We also examine human responses to different types of robot failures, and whether failures that occur early or late in the interaction cause variation in the responses. Our findings indicate that several nonverbal behaviors, e.g., gaze and speech prosody, are consistently present in responses to robots' failures, whereas linguistic features appear to be task-dependent. We discuss how these findings may generalize to other tasks, and how autonomous robots may identify opportunities to detect and recover from failures in interactions with humans.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2021
Keywords
behavioral responses, miscommunication, social signal processing, human-robot interaction, linguistics, man-machine systems, signal processing, cross-corpus analysis, facial features, human reactions, multimodal datasets, task-oriented dialogue, behavioral research
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-299870 (URN)
10.1145/3462244.3479887 (DOI)
001438610800016 ()
2-s2.0-85119019530 (Scopus ID)
Conference
23rd ACM International Conference on Multimodal Interaction, ICMI 2021, 18 October 2021 through 22 October 2021
Note

Part of proceedings: ISBN 978-1-4503-8481-0

QC 20220602

Available from: 2021-08-18 Created: 2021-08-18 Last updated: 2025-12-05. Bibliographically approved

Open Access in DiVA

fulltext (68473 kB), 1154 downloads
File information
File name: FULLTEXT01.pdf
File size: 68473 kB
Checksum (SHA-512): cd02cb471f6be8786f6d5decc690d3f791230b3bbfa6046e809f30e54fd33e316e4323895a91205096702c683721516b4d0cf66e2115215b4ad14132a85df214
Type: fulltext
Mimetype: application/pdf

Authority records

Kontogiorgos, Dimosthenis
