KTH Publications (DiVA)
Effects of Different Interaction Contexts when Evaluating Gaze Models in HRI
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Speech, Music and Hearing, TMH, Speech Communication and Technology. ORCID iD: 0000-0003-2428-0468
TU Delft, Delft, Netherlands.
TNO, Den Haag, Netherlands.
Furhat Robotics, Stockholm, Sweden.
2020 (English). Conference paper, published paper (refereed)
Abstract [en]

We previously introduced a responsive joint attention system that uses multimodal information from users engaged in a spatial reasoning task with a robot and communicates joint attention via the robot's gaze behavior [25]. An initial evaluation of our system with adults showed that it improved users' perceptions of the robot's social presence. To investigate the repeatability of our prior findings across settings and populations, we conducted two further studies employing the same gaze system with the same robot and task but in different contexts: evaluation of the system with external observers and evaluation with children. The external observer study suggests that third-person perspectives over videos of gaze manipulations can be used either as a manipulation check before committing to costly real-time experiments or to further establish previous findings. However, the replication of our original adult study with children in school did not confirm the effectiveness of our gaze manipulation, suggesting that different interaction contexts can affect the generalizability of results in human-robot interaction gaze studies.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2020, pp. 131-138
Series
ACM/IEEE International Conference on Human-Robot Interaction, ISSN 2167-2121
Keywords [en]
Joint attention, mutual gaze, social robots, social presence
National Category
Other Engineering and Technologies
Identifiers
URN: urn:nbn:se:kth:diva-267230
DOI: 10.1145/3319502.3374810
ISI: 000570011000015
Scopus ID: 2-s2.0-85082024451
OAI: oai:DiVA.org:kth-267230
DiVA id: diva2:1391492
Conference
ACM/IEEE International Conference on Human-Robot Interaction (HRI), March 23-26, 2020, Cambridge, England
Note

QC 20200217

Available from: 2020-02-04. Created: 2020-02-04. Last updated: 2025-02-18. Bibliographically approved.

Open Access in DiVA

fulltext (4468 kB), 1074 downloads
File information
File name: FULLTEXT01.pdf
File size: 4468 kB
Checksum (SHA-512): 08474444c7d0669c65fce92260e0c436b55bb8bd8c371ea0caf00ec962e99ba2c1e85a238c0b7d16c2642db56c5d043c3e110875ebbac5de38ebb53dd20486f2
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text; Scopus

Authority records

Abelho Pereira, André Tiago; Gustafson, Joakim

Total: 1078 downloads
The number of downloads is the sum of all full-text downloads; it may include, e.g., previous versions that are no longer available.

Total: 980 hits