Unsupervised robot learning to predict person motion
Xiao, Shuang (KTH, School of Computer Science and Communication (CSC), Centre for Autonomous Systems, CAS)
Wang, Zhan (KTH, School of Computer Science and Communication (CSC), Centre for Autonomous Systems, CAS)
Folkesson, John (KTH, CSC, Centre for Autonomous Systems, CAS; Computer Vision and Active Perception, CVAP). ORCID iD: 0000-0002-7796-1438
2015 (English). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2015, no. June, pp. 691-696. Conference paper, published paper (refereed).
Abstract [en]

Socially interacting robots will need to understand the intentions and recognize the behaviors of the people they come in contact with. In this paper we look at how a robot can learn to recognize and predict people's intended paths based on its own observations of people over time. Our approach tracks people from the robot using either RGB-D cameras or LIDAR. The tracks are separated into homogeneous motion classes using a pre-trained SVM; the individual classes are then clustered, and prototypes are extracted from each cluster. A person's future motion is predicted by matching the observed partial track to the prefix of a prototype and taking the remainder of that prototype as the predicted motion. Results from experiments in a kitchen environment in our lab demonstrate the capabilities of the proposed method.
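The final prediction step described in the abstract (match a partial observation against prototype prefixes, return the remainder of the best match) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: tracks are assumed to be lists of (x, y) points, prototypes are assumed to have been extracted offline from the SVM-classified, clustered tracks, and mean squared point-to-point distance is used as an illustrative matching cost.

```python
# Illustrative sketch of prototype-based motion prediction.
# Assumptions (not from the paper): tracks are lists of (x, y) points,
# prototypes were already extracted offline (e.g. cluster means), and
# matching uses mean squared distance over the observed prefix.

def predict_motion(partial, prototypes):
    """Return the remainder of the prototype whose prefix best matches
    the observed partial track (mean squared point-to-point distance)."""
    n = len(partial)
    best_proto, best_cost = None, float("inf")
    for proto in prototypes:
        if len(proto) <= n:
            continue  # prototype too short to extend the observation
        cost = sum((px - qx) ** 2 + (py - qy) ** 2
                   for (px, py), (qx, qy) in zip(partial, proto)) / n
        if cost < best_cost:
            best_proto, best_cost = proto, cost
    return best_proto[n:] if best_proto else []

# Two toy prototypes: one along the x-axis, one along the y-axis.
prototypes = [
    [(0, 0), (1, 0), (2, 0), (3, 0)],
    [(0, 0), (0, 1), (0, 2), (0, 3)],
]
# A noisy observation heading along x is extended by the x-axis prototype.
print(predict_motion([(0, 0), (1.1, 0.05)], prototypes))  # [(2, 0), (3, 0)]
```

In practice the matching would also have to handle time alignment and prototypes from multiple motion classes; the sketch only shows the prefix-match-and-continue idea.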

Place, publisher, year, edition, pages
IEEE conference proceedings, 2015, no. June, pp. 691-696.
Keywords [en]
Forecasting, Robots, Target tracking, Path-based, People tracking, RGB-D cameras, Robotics
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:kth:diva-176132
DOI: 10.1109/ICRA.2015.7139254
ISI: 000370974900100
Scopus ID: 2-s2.0-84938228810
OAI: oai:DiVA.org:kth-176132
DiVA: diva2:875654
Conference
2015 IEEE International Conference on Robotics and Automation, ICRA 2015, 26 May 2015 through 30 May 2015
Funder
EU, FP7, Seventh Framework Programme, 600623
Note

QC 20151201. QC 20160411

Available from: 2015-12-01. Created: 2015-11-02. Last updated: 2016-11-23. Bibliographically approved.

Open Access in DiVA

fulltext (1337 kB), 62 downloads
File information
File name: FULLTEXT01.pdf
File size: 1337 kB
Checksum (SHA-512): 13814b5ecb462de0158b03ef3a1f9418188e76108d88ce247af535f284a29edaa36ee153846f907d7dd23fa32bb3ab3ef5710fb1343ba10efe7f555994e19cb2
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text · Scopus · IEEE Xplore

Authority records BETA

Folkesson, John

Search in DiVA
By author/editor: Xiao, Shuang; Wang, Zhan; Folkesson, John
By organisation: Centre for Autonomous Systems, CAS; Computer Vision and Active Perception, CVAP
By national category: Electrical Engineering, Electronic Engineering, Information Engineering

