Towards Context-Aware Human-like Pointing Gestures with RL Motion Imitation
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0003-3135-5683
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Speech, Music and Hearing, TMH.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-7801-7617
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0003-1399-6604
2022 (English) Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

Pointing is an important mode of interaction with robots. While a large body of prior work focuses on recognizing human pointing, generating context-aware human-like pointing gestures has received little attention, a shortcoming we aim to address. We first collect a rich dataset of human pointing gestures with corresponding pointing target locations using accurate motion capture. Analysis of the dataset shows that it contains varied pointing styles and handedness, with well-distributed target positions in the surrounding 3D space, in both a single-target pointing scenario and a two-target point-and-place scenario. We then train reinforcement learning (RL) control policies in physically realistic simulation to imitate the pointing motion in the dataset while maximizing a pointing-precision reward. We show that our RL motion-imitation setup allows models to learn human-like pointing dynamics while maximizing task reward (pointing precision). This is promising for incorporating additional context, in the form of task reward, to enable flexible context-aware pointing behaviors in a physically realistic environment while retaining human-likeness in pointing motion dynamics.
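The abstract's core idea is a reward that blends motion imitation with a pointing-precision term. The paper does not give its reward formulas here, so the sketch below is purely illustrative: it assumes a DeepMimic-style exponentiated pose-error imitation term, a task term based on the angle between a hypothetical wrist-to-fingertip ray and the target direction, and made-up weights `w_imit` and `w_task`.

```python
import numpy as np

def imitation_reward(pose, ref_pose, sigma=0.5):
    # Exponentiated negative pose error (DeepMimic-style); 1.0 at a perfect match.
    return float(np.exp(-np.sum((np.asarray(pose) - np.asarray(ref_pose)) ** 2) / sigma))

def pointing_precision_reward(fingertip, wrist, target):
    # Cosine of the angle between the pointing ray (wrist -> fingertip)
    # and the direction to the target, mapped from [-1, 1] to [0, 1].
    ray = np.asarray(fingertip) - np.asarray(wrist)
    to_target = np.asarray(target) - np.asarray(wrist)
    cos_angle = np.dot(ray, to_target) / (np.linalg.norm(ray) * np.linalg.norm(to_target))
    return float((cos_angle + 1.0) / 2.0)

def combined_reward(pose, ref_pose, fingertip, wrist, target, w_imit=0.7, w_task=0.3):
    # Weighted sum of human-likeness (imitation) and task (precision) terms;
    # the weights here are illustrative, not taken from the paper.
    return (w_imit * imitation_reward(pose, ref_pose)
            + w_task * pointing_precision_reward(fingertip, wrist, target))
```

A policy trained against such a combined signal is pushed toward trajectories that both track the motion-capture reference and end with the arm aligned to the target, which is the trade-off the abstract describes.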

Place, publisher, year, edition, pages
2022. p. 2022-
Keywords [en]
motion generation, reinforcement learning, referring actions, pointing gestures, human-robot interaction, motion capture
HSV category
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-313480
OAI: oai:DiVA.org:kth-313480
DiVA, id: diva2:1664509
Conference
Context-Awareness in Human-Robot Interaction: Approaches and Challenges, workshop at 2022 ACM/IEEE International Conference on Human-Robot Interaction
Note

QC 20220607

Available from: 2022-06-03 Created: 2022-06-03 Last updated: 2022-06-25 Bibliographically approved

Open Access in DiVA

fulltext (6134 kB), 315 downloads
File information
File: FULLTEXT01.pdf
File size: 6134 kB
Checksum: SHA-512
2e269271939e98997f59bf1c837694d4cef5cdbdee647197e5af6f73401635fa73fb1fa606352fead97db2422d64935e18b9b8def13e9195be8def1bac780a7c
Type: fulltext
Mimetype: application/pdf

Other links

Conference webpage

Person

Deichler, Anna; Wang, Siyang; Alexanderson, Simon; Beskow, Jonas

Total: 315 downloads
The number of downloads is the sum of all downloads of all full texts. It may, for example, include earlier versions that are no longer available.
