Adapting robot behavior for human-robot interaction
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP; KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
ORCID iD: 0000-0003-2078-8854
2008 (English). In: IEEE Transactions on Robotics, ISSN 1552-3098, Vol. 24, no. 4, pp. 911-916. Article in journal (refereed), published.
Abstract [en]

Human beings subconsciously adapt their behavior to a communication partner in order to make interactions run smoothly. In human-robot interaction, not only the human but also the robot is expected to adapt to its partner. Thus, to facilitate human-robot interaction, a robot should be able to read subconscious comfort and discomfort signals from humans and adjust its behavior accordingly, just as a human would. However, most previous research expected the human to give feedback consciously, which might interfere with the aim of the interaction. We propose an adaptation mechanism based on reinforcement learning that reads subconscious body signals from a human partner and uses this information to adjust interaction distances, gaze meeting, and motion speed and timing in human-robot interaction. The mechanism uses gaze directed at the robot's face and human movement distance as subconscious body signals that indicate a human's comfort and discomfort. A pilot study was conducted with a humanoid robot that has ten interaction behaviors. The results from 12 subjects suggest that the proposed mechanism enables autonomous adaptation to individual preferences. A detailed discussion and conclusions are also presented.
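The adaptation mechanism described in the abstract — policy gradient reinforcement learning (PGRL) driving behavior parameters from a comfort signal — can be sketched as follows. This is a minimal, hypothetical illustration only: the parameter names, the synthetic `comfort_reward` stand-in for the gaze/movement signals, and the finite-difference gradient estimate (in the style of classic PGRL) are assumptions, not the paper's actual implementation.

```python
import random

random.seed(0)  # deterministic run for this illustration

# Hypothetical behavior parameters the robot could adapt: interaction
# distance, fraction of time meeting the human's gaze, motion speed.
PARAMS = {"distance_m": 1.0, "gaze_ratio": 0.5, "speed": 0.5}
EPSILON = {"distance_m": 0.05, "gaze_ratio": 0.05, "speed": 0.05}

def comfort_reward(params):
    # Stand-in for reading subconscious body signals (gaze at the
    # robot's face, human movement distance). Here: a synthetic
    # preference peak, purely for illustration.
    pref = {"distance_m": 0.8, "gaze_ratio": 0.7, "speed": 0.4}
    return -sum((params[k] - pref[k]) ** 2 for k in params)

def pgrl_step(params, n_policies=12, alpha=0.1):
    # Evaluate randomly perturbed policies, then estimate the reward
    # gradient per parameter by comparing average rewards of the
    # +epsilon, 0, and -epsilon perturbation groups.
    trials = []
    for _ in range(n_policies):
        signs = {k: random.choice((-1, 0, 1)) for k in params}
        perturbed = {k: params[k] + signs[k] * EPSILON[k] for k in params}
        trials.append((signs, comfort_reward(perturbed)))
    avg = lambda xs: sum(xs) / len(xs) if xs else None
    new_params = dict(params)
    for k in params:
        a_plus = avg([r for s, r in trials if s[k] > 0])
        a_zero = avg([r for s, r in trials if s[k] == 0])
        a_minus = avg([r for s, r in trials if s[k] < 0])
        if a_plus is None or a_minus is None:
            continue  # not enough samples to estimate this component
        if a_zero is not None and a_zero >= a_plus and a_zero >= a_minus:
            continue  # current value already looks best; leave unchanged
        step = alpha if a_plus > a_minus else -alpha
        new_params[k] = params[k] + step * EPSILON[k]
    return new_params

params = dict(PARAMS)
for _ in range(200):
    params = pgrl_step(params)
```

After repeated interaction episodes, the parameters drift toward the (here synthetic) individual preference, mirroring the paper's idea that the robot adapts without the human giving conscious feedback.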

Place, publisher, year, edition, pages
IEEE Press, 2008. Vol. 24, no. 4, pp. 911-916.
Keyword [en]
behavior adaptation, human-robot interactions, policy gradient reinforcement learning (PGRL), proxemics
National Category
Robotics
Identifiers
URN: urn:nbn:se:kth:diva-36338
DOI: 10.1109/TRO.2008.926867
ISI: 000258617900014
Scopus ID: 2-s2.0-50649121596
OAI: oai:DiVA.org:kth-36338
DiVA: diva2:430630
Note
© 2008 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
QC 20110711. Available from: 2011-12-20. Created: 2011-07-11. Last updated: 2011-12-20. Bibliographically approved.

Open Access in DiVA

Smith_TRO_08.pdf (291 kB), 721 downloads
File information
File name: FULLTEXT01.pdf
File size: 291 kB
Checksum (SHA-512): f8b9f28a385b0f9023f49a5ed1821cbbdcd4e7310089179cb640c7e4f2232abb770f64a4f022a74eac654c3ed1d1807a008a4621804d726aac106b515dab1e04
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
Scopus
IEEE Xplore

Authority records

Smith, Christian

Total: 721 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.
