Anticipating many futures: Online human motion prediction and generation for human-robot interaction
Butepage, Judith; Kjellström, Hedvig (ORCID: 0000-0002-5750-9655); Kragic, Danica (ORCID: 0000-0003-2965-2953)
KTH Royal Institute of Technology, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning (RPL), Stockholm, Sweden.
2018 (English). In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, pp. 4563-4570. Conference paper, published paper (refereed).
Abstract [en]

Fluent and safe interaction between humans and robots requires both partners to anticipate each other's actions. The bottleneck of most methods is the lack of an accurate model of natural human motion. In this work, we present a conditional variational autoencoder that is trained to predict a window of future human motion given a window of past frames. Using skeletal data obtained from RGB-D images, we show how this unsupervised approach can be used for online motion prediction up to 1660 ms ahead. Additionally, we demonstrate online target prediction within the first 300-500 ms after motion onset without the use of target-specific training data. The advantage of our probabilistic approach is that it allows drawing samples of possible future motion patterns. Finally, we investigate how movements and kinematic cues are represented on the learned low-dimensional manifold.
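
To make the described approach concrete, the sketch below shows a minimal conditional variational autoencoder of the kind the abstract outlines: an encoder infers a latent code from a past and a future motion window, and a decoder reconstructs the future window from the latent code and the past window; at test time the prior is sampled to generate several plausible futures for one observed past. This is an illustrative PyTorch sketch, not the authors' implementation; the window lengths, joint dimensionality, layer sizes, and the random toy data are all assumptions made for demonstration.

# Minimal conditional VAE sketch (illustrative only, not the paper's model).
# PAST/FUTURE window lengths, JOINT_DIM, and LATENT size are assumed values.
import torch
import torch.nn as nn

PAST, FUTURE, JOINT_DIM, LATENT = 10, 20, 45, 32

class MotionCVAE(nn.Module):
    def __init__(self):
        super().__init__()
        past_dim, fut_dim = PAST * JOINT_DIM, FUTURE * JOINT_DIM
        # Encoder q(z | future, past): infers a latent code from both windows.
        self.enc = nn.Sequential(nn.Linear(past_dim + fut_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, LATENT)
        self.logvar = nn.Linear(256, LATENT)
        # Decoder p(future | z, past): reconstructs the future window.
        self.dec = nn.Sequential(
            nn.Linear(LATENT + past_dim, 256), nn.ReLU(), nn.Linear(256, fut_dim)
        )

    def forward(self, past, future):
        h = self.enc(torch.cat([past, future], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        recon = self.dec(torch.cat([z, past], dim=-1))
        return recon, mu, logvar

    @torch.no_grad()
    def sample_futures(self, past, n=5):
        # At test time the encoder is dropped: draw z ~ N(0, I) and decode,
        # yielding several possible future motion windows for one observed past.
        z = torch.randn(n, LATENT)
        return self.dec(torch.cat([z, past.expand(n, -1)], dim=-1))

def loss_fn(recon, future, mu, logvar):
    rec = ((recon - future) ** 2).sum(dim=-1).mean()                       # reconstruction
    kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()  # KL divergence
    return rec + kld

# Toy usage with random data in place of real skeletal windows.
model = MotionCVAE()
past = torch.randn(8, PAST * JOINT_DIM)      # batch of flattened past windows
future = torch.randn(8, FUTURE * JOINT_DIM)  # corresponding future windows
recon, mu, logvar = model(past, future)
print(loss_fn(recon, future, mu, logvar).item())
print(model.sample_futures(past[:1]).shape)  # several sampled futures for one past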

Place, publisher, year, edition, pages
IEEE Computer Society, 2018, pp. 4563-4570
Series
IEEE International Conference on Robotics and Automation ICRA, ISSN 1050-4729
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:kth:diva-237164
ISI: 000446394503071
ISBN: 978-1-5386-3081-5
OAI: oai:DiVA.org:kth-237164
DiVA, id: diva2:1258324
Conference
IEEE International Conference on Robotics and Automation (ICRA), May 21-25, 2018, Brisbane, Australia
Funder
Swedish Foundation for Strategic Research
Note

QC 20181024

Available from: 2018-10-24. Created: 2018-10-24. Last updated: 2018-10-24. Bibliographically approved.

Open Access in DiVA

No full text in DiVA
