Real-Time Coordination in Human-Robot Interaction Using Face and Voice
Skantze, Gabriel (KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH). ORCID iD: 0000-0002-8579-1790
2016 (English). In: The AI Magazine, ISSN 0738-4602, E-ISSN 2371-9621, Vol. 37, no. 4, p. 19-31. Article in journal (Refereed). Published.
Abstract [en]

When humans interact and collaborate with each other, they coordinate their turn-taking behaviors using verbal and nonverbal signals, expressed in the face and voice. If robots of the future are supposed to engage in social interaction with humans, it is essential that they can generate and understand these behaviors. In this article, I give an overview of several studies that show how humans in interaction with a humanlike robot make use of the same coordination signals typically found in studies on human-human interaction, and that it is possible to automatically detect and combine these cues to facilitate real-time coordination. The studies also show that humans react naturally to such signals when used by a robot, without being given any special instructions. They follow the gaze of the robot to disambiguate referring expressions, they conform when the robot selects the next speaker using gaze, and they respond naturally to subtle cues, such as gaze aversion, breathing, facial gestures, and hesitation sounds.
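
To make the coordination idea above concrete, the following is a minimal, hypothetical sketch (not the model described in the article) of how detected face and voice cues might be combined into a single real-time turn-yielding estimate. The cue names, weights, and threshold are illustrative assumptions only.

# Hypothetical sketch: combining face and voice cues into a turn-yielding
# estimate so a robot can decide in real time whether to take the turn.
from dataclasses import dataclass

@dataclass
class CueObservation:
    """Cue detections near the end of a user utterance (all assumed features)."""
    pause_ms: float          # silence since the user stopped speaking
    gaze_at_robot: bool      # user looking back at the robot (turn-yielding cue)
    pitch_falling: bool      # falling final intonation
    inbreath_detected: bool  # audible inbreath suggests the user will continue

# Illustrative hand-set weights; a real system would learn these from data.
WEIGHTS = {"pause": 0.4, "gaze": 0.3, "pitch": 0.2, "inbreath": -0.3}
THRESHOLD = 0.5

def turn_yield_score(obs: CueObservation) -> float:
    """Combine cues into a rough score for how likely the turn is being yielded."""
    score = 0.0
    score += WEIGHTS["pause"] * min(obs.pause_ms / 1000.0, 1.0)
    score += WEIGHTS["gaze"] * obs.gaze_at_robot
    score += WEIGHTS["pitch"] * obs.pitch_falling
    score += WEIGHTS["inbreath"] * obs.inbreath_detected
    return max(0.0, min(1.0, score))

def robot_should_speak(obs: CueObservation) -> bool:
    return turn_yield_score(obs) >= THRESHOLD

if __name__ == "__main__":
    obs = CueObservation(pause_ms=600, gaze_at_robot=True,
                         pitch_falling=True, inbreath_detected=False)
    print(robot_should_speak(obs))  # True: pause, gaze, and falling pitch suggest a yield

In practice, such weights and thresholds would be learned from recorded human-robot interaction data rather than set by hand; the sketch only illustrates the idea of fusing multimodal cues for real-time coordination.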

Place, publisher, year, edition, pages
Association for the Advancement of Artificial Intelligence, 2016. Vol. 37, no. 4, p. 19-31
National Category
Human Computer Interaction
Identifiers
URN: urn:nbn:se:kth:diva-200411
DOI: 10.1609/aimag.v37i4.2686
ISI: 000391082300004
Scopus ID: 2-s2.0-85019973359
OAI: oai:DiVA.org:kth-200411
DiVA id: diva2:1069208
Funder
Swedish Research Council, 2011-6237; 2013-1403
Available from: 2017-01-27. Created: 2017-01-27. Last updated: 2025-03-14. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Skantze, Gabriel
