Towards Grasp-Oriented Visual Perception for Humanoid Robots
Bohg, Jeannette; Barck-Holst, Carl; Hübner, Kai; Ralph, Maria; Rasolzadeh, Babak; Song, Dan; Kragic, Danica
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS.
2009 (English). In: International Journal of Humanoid Robotics, ISSN 0219-8436, Vol. 6, no. 3, pp. 387-434. Article in journal (Refereed). Published.
Abstract [en]

A distinct property of robot vision systems is that they are embodied: visual information is extracted for the purpose of moving in and interacting with the environment. Thus, different types of perception-action cycles need to be implemented and evaluated. In this paper, we study the problem of designing a vision system for object grasping in everyday environments. This vision system is targeted, firstly, at interacting with the world through recognition and grasping of objects and, secondly, at serving as an interface between the reasoning and planning module and the real world. The latter provides the vision system with a task that drives it and defines a specific context, i.e., to search for or identify a certain object and analyze it for potential later manipulation. We deal with three cases: (i) known objects, (ii) objects similar to already known objects, and (iii) unknown objects. The perception-action cycle is connected to the reasoning system through the idea of affordances. All three cases are also related to the state of the art and the terminology in neuroscience.

Place, publisher, year, edition, pages
2009. Vol. 6, no. 3, pp. 387-434.
Keyword [en]
Perception, attention, reasoning, neuroscience, grasping, affordances, cortical motor system, volume, bounding-box, frontal-cortex, object, detection, shape, parietal, connections, premotor areas
National Category
Computer and Information Science
Identifiers
URN: urn:nbn:se:kth:diva-18776
DOI: 10.1142/s0219843609001796
ISI: 000270041900004
Scopus ID: 2-s2.0-70349576912
OAI: oai:DiVA.org:kth-18776
DiVA: diva2:336823
Note
QC 20100525. Workshop on the Active Vision of Humanoids, held at the Conference on Humanoid Robotics, Pittsburgh, PA, November 2007.
Available from: 2010-08-05. Created: 2010-08-05. Last updated: 2012-01-28. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus
