The effects of anthropomorphism and non-verbal social behaviour in virtual assistants
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-8874-6629
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH).
KTH.
2019 (English). In: IVA 2019 - Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, Association for Computing Machinery (ACM), 2019, p. 133-140. Conference paper, Published paper (Refereed)
Abstract [en]

The adoption of virtual assistants is growing at a rapid pace. However, these assistants are not optimised to simulate key social aspects of human conversational environments. Humans are intellectually biased towards social activity when facing anthropomorphic agents or when presented with subtle social cues. In this paper, we test whether humans respond the same way to assistants in guided tasks when the assistants take different forms of embodiment and exhibit different social behaviour. In a within-subject study (N=30), we asked subjects to engage in dialogue with a smart speaker and a social robot. We observed shifts in interactive behaviour, as shown in both behavioural and subjective measures. Our findings indicate that it is not always favourable for agents to be anthropomorphised or to communicate with non-verbal cues. We found a trade-off between task performance and perceived sociability when controlling for anthropomorphism and social behaviour.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2019. p. 133-140
Keywords [en]
Conversational artificial intelligence, Empirical studies, Human-computer interaction, Smart speakers, Social robots
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:kth:diva-262610
DOI: 10.1145/3308532.3329466
ISI: 000556671900030
Scopus ID: 2-s2.0-85069732636
OAI: oai:DiVA.org:kth-262610
DiVA, id: diva2:1365339
Conference
19th ACM International Conference on Intelligent Virtual Agents, IVA 2019; Paris; France; 2 July 2019 through 5 July 2019
Note

QC 20191024

Part of ISBN 9781450366724

Available from: 2019-10-24 Created: 2019-10-24 Last updated: 2024-10-23 Bibliographically approved
In thesis
1. Mutual Understanding in Situated Interactions with Conversational User Interfaces: Theory, Studies, and Computation
2022 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This dissertation presents advances in HCI through a series of studies focusing on task-oriented interactions between humans and between humans and machines. The notion of mutual understanding, known as grounding in psycholinguistics, is central: in particular, how people establish understanding in conversations and what interactional phenomena are present in that process. Addressing the gap in computational models of understanding, interactions in this dissertation are observed through multisensory input and evaluated with statistical and machine-learning models. As becomes apparent, miscommunication is ordinary in human conversations, and embodied computer interfaces interacting with humans are therefore subject to a large number of conversational failures. Investigating how these interfaces can evaluate human responses to distinguish whether spoken utterances are understood is one of the central contributions of this thesis.

The first papers (Papers A and B) included in this dissertation describe studies on how humans establish understanding incrementally and how they co-produce utterances to resolve misunderstandings in joint-construction tasks. Utilising the same interaction paradigm from these human-human settings, the remaining papers describe collaborative interactions between humans and machines with two central manipulations: embodiment (Papers C, D, E, and F) and conversational failures (Papers D, E, F, and G). The methods used investigate whether embodiment affects grounding behaviours among speakers and which verbal and non-verbal channels are utilised in response and recovery to miscommunication. For application to robotics and conversational user interfaces, failure detection systems are developed that predict user uncertainty in real time, paving the way for new multimodal computer interfaces that are aware of dialogue breakdown and system failures.

Through the lens of Theory, Studies, and Computation, a comprehensive overview is presented of how mutual understanding has been observed in interactions between humans and between humans and machines. A summary of the literature on mutual understanding from psycholinguistics and human-computer interaction perspectives is reported. An overview is also presented of how prior knowledge in mutual understanding has been, and can be, observed through experimentation and empirical studies, along with perspectives on how knowledge acquired through observation is put into practice through the analysis and development of computational models. Derived from the literature and empirical observations, the central thesis of this dissertation is that embodiment and mutual understanding are intertwined in task-oriented interactions, both in successful communication and in situations of miscommunication.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2022. p. xxi, 139
Series
TRITA-EECS-AVL ; 2022-10
Keywords
human-computer interaction, social robots, smart-speakers, multimodal behaviours, social signal processing, common ground, dialogue and discourse, joint-construction tasks, embodiment, conversational failures
National Category
Human Computer Interaction
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-308927 (URN)
978-91-8040-137-1 (ISBN)
Public defence
2022-03-11, https://kth-se.zoom.us/j/62813774919, Kollegiesalen, Brinellvägen 8, Stockholm, 14:00 (English)
Note

QC 20220216

Available from: 2022-02-16 Created: 2022-02-15 Last updated: 2022-06-25 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text · Scopus

Authority records

Kontogiorgos, Dimosthenis; Abelho Pereira, André Tiago; Andersson, Olle; Gustafson, Joakim

Search in DiVA

By author/editor
Kontogiorgos, Dimosthenis; Abelho Pereira, André Tiago; Andersson, Olle; Koivisto, Marco; Gonzalez Rabal, Elena; Vartiainen, Ville; Gustafson, Joakim
By organisation
Speech, Music and Hearing, TMH; School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH); KTH
Computer and Information Sciences
