Fluent Human-Robot Dialogues About Grounded Objects in Home Environments
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
2014 (English). In: Cognitive Computation, ISSN 1866-9956, E-ISSN 1866-9964, Vol. 6, no. 4, pp. 914-927. Article in journal (Refereed). Published.
Abstract [en]

To provide spoken interaction between robots and human users, an internal representation of the robot's sensory information must be available at a semantic level and accessible to a dialogue system so that it can be used in a human-like and intuitive manner. In this paper, we integrate perceptual anchoring in robotics (which creates and maintains the symbol-percept correspondence of objects) with multimodal dialogue in order to achieve fluent interaction between humans and robots when talking about objects. These everyday objects are located in a so-called symbiotic system in which humans, robots, and sensors cooperate in a home environment. The dialogue is orchestrated with the IrisTK dialogue platform, which models the interaction as events exchanged between different modules, e.g. a speech recognizer and a face tracker. The system runs on a mobile robot that is part of a distributed sensor network. A perceptual anchoring framework recognizes objects placed in the home and maintains a consistent identity for each object, consisting of its symbolic and perceptual data. Particular effort is placed on creating flexible dialogues in which requests about objects can be made in a variety of ways. Experimental validation consists of evaluating the system when many objects are possible candidates for satisfying these requests.
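The abstract describes an event-based architecture in which modules such as a speech recognizer and a perceptual anchoring framework exchange events, and in which anchors keep a consistent symbol-percept identity per object. The sketch below is a minimal, hypothetical illustration of that pattern in Python; the event names, module classes, and grounding logic are assumptions made for illustration and do not reflect the IrisTK API or the authors' implementation.

```python
# Minimal sketch (hypothetical names, NOT the IrisTK API): a publish/subscribe
# event bus connecting a perceptual-anchoring module and a dialogue module,
# with an anchor table that keeps a symbol-percept correspondence for objects.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Event:
    name: str                       # e.g. "sense.object" or "sense.speech.rec"
    params: dict = field(default_factory=dict)


class EventBus:
    """Routes events between loosely coupled modules."""
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[Event], None]]] = {}

    def subscribe(self, event_name: str, handler: Callable[[Event], None]) -> None:
        self._subscribers.setdefault(event_name, []).append(handler)

    def publish(self, event: Event) -> None:
        for handler in self._subscribers.get(event.name, []):
            handler(event)


class AnchoringModule:
    """Keeps a consistent identity (anchor) per object: symbol plus latest percept."""
    def __init__(self, bus: EventBus) -> None:
        self.bus = bus
        self.anchors: Dict[str, dict] = {}           # symbol -> perceptual data
        bus.subscribe("sense.object", self.on_object_percept)

    def on_object_percept(self, event: Event) -> None:
        symbol = event.params["symbol"]               # e.g. "mug-1"
        self.anchors[symbol] = event.params["percept"]
        self.bus.publish(Event("anchor.updated", {"symbol": symbol}))


class DialogueModule:
    """Answers spoken requests about objects by consulting the anchor table."""
    def __init__(self, bus: EventBus, anchoring: AnchoringModule) -> None:
        self.anchoring = anchoring
        bus.subscribe("sense.speech.rec", self.on_utterance)

    def on_utterance(self, event: Event) -> None:
        # Deliberately naive grounding: match any anchored symbol whose base
        # name (before the "-<id>" suffix) is mentioned in the utterance.
        text = event.params["text"].lower()
        matches = [s for s in self.anchoring.anchors if s.split("-")[0] in text]
        if matches:
            percept = self.anchoring.anchors[matches[0]]
            print(f"System: the {matches[0]} is at {percept['position']}")
        else:
            print("System: which object do you mean?")


if __name__ == "__main__":
    bus = EventBus()
    anchoring = AnchoringModule(bus)
    DialogueModule(bus, anchoring)
    # A sensor reports an object percept, then the recognizer reports an utterance.
    bus.publish(Event("sense.object",
                      {"symbol": "mug-1", "percept": {"position": "kitchen table"}}))
    bus.publish(Event("sense.speech.rec", {"text": "Where is the mug?"}))
```

Resolving a request against many candidate anchors (the situation evaluated in the paper) would replace the naive substring match above with a disambiguation strategy, e.g. asking follow-up questions when several anchors match.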

Place, publisher, year, edition, pages
2014. Vol. 6, no. 4, pp. 914-927.
Keyword [en]
Human-robot interaction, Perceptual anchoring, Symbol grounding, Spoken dialogue systems, Social robotics
National Category
Human Computer Interaction
URN: urn:nbn:se:kth:diva-158441
DOI: 10.1007/s12559-014-9291-y
ISI: 000345994900022
OAI: diva2:777267
Funder: Swedish Research Council

QC 20150108

Available from: 2015-01-08. Created: 2015-01-08. Last updated: 2015-01-08. Bibliographically approved.

By author/editor
Al Moubayed, Samer