KTH-3D-TOTAL: A 3D dataset for discovering spatial structures for long-term autonomous learning
Affiliations: KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP; Centres, Centre for Autonomous Systems, CAS. ORCID iD: 0000-0003-0448-3786
2014 (English). In: 2014 13th International Conference on Control Automation Robotics and Vision, ICARCV 2014, IEEE, 2014, pp. 1528-1535. Conference paper, published paper (refereed).
Abstract [en]

Long-term autonomous learning of human environments entails modelling and generalizing over distinct variations in object instances across different scenes, and in scenes themselves with respect to space and time. It is crucial for the robot to recognize the structure and context of spatial arrangements and to exploit these to learn models which capture the essence of these variations. Table-tops possess a typical structure repeatedly seen in human environments: they are personal spaces with diverse functionalities that change dynamically due to human interaction. In this paper, we present a 3D dataset of 20 office table-tops, manually observed and scanned 3 times a day, as regularly as possible, over 19 days (461 scenes), and subsequently manually annotated with 18 different object classes, including multiple instances. We analyse the dataset to discover spatial structures and patterns in their variations. The dataset can, for example, be used to study spatial relations between objects and long-term environment models for applications such as activity recognition, context and functionality estimation, and anomaly detection.
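As a sketch of the kind of analysis the abstract describes, the snippet below models scenes as timestamped sets of annotated objects and counts object-class co-occurrence across scenes. This is a hedged illustration only: the dataset's actual file format, field names, and class labels are not specified here, so `Scene`, `ObjectAnnotation`, and the labels used below are hypothetical.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class ObjectAnnotation:
    label: str            # one of the 18 annotated object classes (names hypothetical)
    # 3D pose / bounding-box fields would accompany this in the real data

@dataclass
class Scene:
    table_id: int         # which of the 20 office table-tops
    day: int              # observation day (1..19)
    session: int          # up to 3 scans per day
    objects: list = field(default_factory=list)

def cooccurrence(scenes):
    """Count how often pairs of object classes appear in the same scene."""
    counts = Counter()
    for scene in scenes:
        labels = sorted({o.label for o in scene.objects})
        for i, a in enumerate(labels):
            for b in labels[i + 1:]:
                counts[(a, b)] += 1
    return counts

# Toy example with made-up labels
scenes = [
    Scene(1, 1, 1, [ObjectAnnotation("monitor"), ObjectAnnotation("keyboard")]),
    Scene(1, 1, 2, [ObjectAnnotation("monitor"), ObjectAnnotation("keyboard"),
                    ObjectAnnotation("mug")]),
]
print(cooccurrence(scenes)[("keyboard", "monitor")])  # 2
```

Co-occurrence counts of this kind are one simple way to surface the repeated spatial structure the paper aims to discover; per-table or per-time-of-day aggregation would follow the same pattern.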

Place, publisher, year, edition, pages
IEEE, 2014, pp. 1528-1535.
Keywords [en]
Robotics, Activity recognition, Autonomous learning, Environment models, Human interactions, Multiple instances, Spatial arrangements, Spatial structure, Typical structures, Computer vision
National Category
Robotics
Identifiers
URN: urn:nbn:se:kth:diva-166173
DOI: 10.1109/ICARCV.2014.7064543
Scopus ID: 2-s2.0-84927722286
ISBN: 9781479951994 (print)
OAI: oai:DiVA.org:kth-166173
DiVA: diva2:809433
Conference
2014 13th International Conference on Control Automation Robotics and Vision, ICARCV 2014, Singapore, 10-12 December 2014
Note

QC 20150504

Available from: 2015-05-04. Created: 2015-05-04. Last updated: 2015-05-04. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text | Scopus

Authority records

Folkesson, John; Jensfelt, Patric

Search in DiVA

By author/editor
Thippur, Akshaya; Ambrus, Rares; Del Burgo, Adria Gallart; Folkesson, John; Jensfelt, Patric
By organisation
Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS
Robotics
