Asking Follow-Up Clarifications to Resolve Ambiguities in Human-Robot Conversation
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-1733-7019
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-8601-1370
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-2212-4325
2022 (English). In: ACM/IEEE International Conference on Human-Robot Interaction, IEEE Computer Society, 2022, p. 461-469. Conference paper, Published paper (Refereed)
Abstract [en]

When a robot aims to comprehend its human partner's request by identifying the referenced objects in Human-Robot Conversation, ambiguities can occur because the environment might contain many similar objects or the objects described in the request might be unknown to the robot. In the case of ambiguities, most systems ask users to repeat their request, which assumes that the robot is familiar with all of the objects in the environment. This assumption might lead to task failure, especially in complex real-world environments. In this paper, we address this challenge by presenting an interactive system that asks follow-up clarification questions to disambiguate the described objects, using the pieces of information that the robot could understand from the request and the objects in the environment that are known to the robot. To evaluate our system while disambiguating the referenced objects, we conducted a user study with 63 participants. We analyzed the interactions when the robot asked for clarifications and when it asked users to redescribe the same object. Our results show that generating follow-up clarification questions helped the robot correctly identify the described objects with fewer attempts (i.e., conversational turns). Also, when people were asked clarification questions, they perceived the task as easier, and they evaluated the task understanding and competence of the robot as higher. Our code and anonymized dataset are publicly available at https://github.com/IrmakDogan/Resolving-Ambiguities.
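The interaction pattern the abstract describes, matching a request against the objects known to the robot and asking a follow-up question whenever more than one candidate remains, can be illustrated with a minimal sketch. This is not the authors' released implementation (see the linked repository); every function name and data structure below is a hypothetical stand-in.

```python
# Minimal illustrative sketch of a follow-up clarification loop.
# Hypothetical: not the paper's implementation; all names are invented.

def matches(obj, properties):
    """An object matches if it has every property understood from the request."""
    return all(obj.get(k) == v for k, v in properties.items())

def disambiguate(known_objects, understood, ask):
    """Narrow down candidates by asking follow-up questions until one remains.

    known_objects: list of dicts describing objects the robot knows.
    understood:    properties the robot could parse from the request.
    ask:           callback taking a question string and returning the
                   user's answer (here modeled as a property value).
    Returns the resolved object (or None) and the number of turns used.
    """
    candidates = [o for o in known_objects if matches(o, understood)]
    turns = 0
    while len(candidates) > 1:
        # Pick a feature that actually splits the remaining candidates.
        feature = next(
            f for f in ("color", "relation")
            if len({c.get(f) for c in candidates}) > 1
        )
        answer = ask(f"Which one do you mean? What is its {feature}?")
        candidates = [c for c in candidates if c.get(feature) == answer]
        turns += 1
    return (candidates[0] if candidates else None), turns
```

For example, with two known mugs differing in color, a request for "the mug" resolves after a single clarification turn once the user answers "blue".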

Place, publisher, year, edition, pages
IEEE Computer Society, 2022. p. 461-469
Keywords [en]
Follow-Up Clarifications, Referring Expressions, Resolving Ambiguities, Clarification, Robots, Follow up, Follow-up clarification, Human robots, Interactive system, Real world environments, Task failures, User study, Clarifiers
National Category
Robotics; Human Computer Interaction
Identifiers
URN: urn:nbn:se:kth:diva-322409
DOI: 10.1109/HRI53351.2022.9889368
ISI: 000869793600051
Scopus ID: 2-s2.0-85127064182
OAI: oai:DiVA.org:kth-322409
DiVA, id: diva2:1718902
Conference
17th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2022, 7 March 2022 through 10 March 2022
Note

QC 20221214

Available from: 2022-12-14. Created: 2022-12-14. Last updated: 2023-02-23. Bibliographically approved
In thesis
1. Robots That Understand Natural Language Instructions and Resolve Ambiguities
2023 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Verbal communication is a key challenge in human-robot interaction. For effective verbal interaction, understanding natural language instructions and clarifying ambiguous user requests are crucial for robots. In real-world environments, the instructions can be ambiguous for many reasons. For instance, when a user asks the robot to find and bring 'the porcelain mug', the mug might be located in the kitchen cabinet or on the dining room table, depending on whether it is clean or full (semantic ambiguities). Additionally, there can be multiple mugs in the same location, and the robot can disambiguate them by asking follow-up questions based on their distinguishing features, such as their color or spatial relations to other objects (visual ambiguities).

While resolving ambiguities, previous works have addressed this problem by disambiguating only the objects in the robot's current view, without considering ones outside the robot's point of view. To fill this gap and resolve semantic ambiguities caused by objects possibly being located at multiple places, we present a novel approach that reasons about their semantic properties. On the other hand, when dealing with ambiguous instructions caused by multiple similar objects in the same location, most existing systems ask users to repeat their requests, with the assumption that the robot is familiar with all of the objects in the environment. To address this limitation and resolve visual ambiguities, we present an interactive system that asks follow-up clarification questions to disambiguate the described objects, using the pieces of information that the robot could understand from the request and the objects in the environment that are known to the robot.

In summary, in this thesis we aim to resolve semantic and visual ambiguities to guide a robot's search for objects specified in user instructions. With semantic disambiguation, we aim to find the described objects' locations across an entire household by leveraging object semantics to form clarifying questions when there are ambiguities. After identifying object locations, with visual disambiguation, we aim to identify the specified object among multiple similar objects located in the same space. To achieve this, we suggest a multi-stage approach where the robot first identifies the objects that fit the user's description and, if there are multiple candidates, generates clarification questions by describing each potential target object through its spatial relations to other objects. Our results emphasize the significance of semantic and visual disambiguation for successful task completion and human-robot collaboration.
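The two-stage pipeline summarized above, semantic disambiguation to choose among possible locations followed by visual disambiguation among similar objects at the chosen location, might be sketched as follows. This is a hypothetical illustration under invented names and simplified data structures, not the thesis implementation.

```python
# Hypothetical two-stage disambiguation pipeline (semantic, then visual).
# Invented names; illustrates the described flow, not the thesis code.

def semantic_disambiguation(request, location_beliefs, ask):
    """Stage 1: decide where to search. If the requested object type could
    plausibly be in several places, ask a clarifying question."""
    plausible = [loc for loc, objs in location_beliefs.items()
                 if request["type"] in objs]
    if len(plausible) > 1:
        answer = ask(f"Should I look in the {' or the '.join(plausible)}?")
        plausible = [loc for loc in plausible if loc == answer]
    return plausible[0] if plausible else None

def visual_disambiguation(request, objects_at_location, ask):
    """Stage 2: among similar objects at that location, describe each
    candidate by its spatial relation and ask which one is meant."""
    candidates = [o for o in objects_at_location
                  if o["type"] == request["type"]]
    while len(candidates) > 1:
        descriptions = " or ".join(o["relation"] for o in candidates)
        answer = ask(f"Do you mean the one {descriptions}?")
        candidates = [o for o in candidates if o["relation"] == answer]
    return candidates[0] if candidates else None
```

In the porcelain-mug example, stage 1 would ask whether to search the kitchen cabinet or the dining table, and stage 2 would distinguish two mugs on the table by their spatial relations to nearby objects.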

Abstract [sv]

Verbal communication is a key challenge in human-robot interaction. For effective verbal interaction, it is crucial that a robot understands instructions in everyday language and can have ambiguous user requests clarified. In the real world, instructions can be ambiguous and hard to interpret for many reasons. For example, when a user asks a robot to find and fetch 'the porcelain mug', the mug might be either in the kitchen cabinet or on the dining table, depending on whether it is clean or full (semantic ambiguities). In addition, there may be several mugs in the same place, and the robot may need to disambiguate them by asking follow-up questions based on their distinguishing features, such as color or spatial relations to other objects (visual ambiguities).

When resolving ambiguities, previous works have tackled this problem by disambiguating only the objects in the robot's current view, without considering those outside the robot's point of view. To resolve semantic ambiguities caused by objects possibly being located in several places, we present a new approach that reasons about the objects' semantic properties. On the other hand, when handling ambiguous instructions caused by several similar objects in the same place, most existing systems ask users to repeat their requests, with the assumption that the robot is familiar with all objects in the environment. To address this limitation and resolve visual ambiguities, we present an interactive system that asks follow-up clarification questions to disambiguate the described objects, using the information the robot could understand from the request and the objects in the environment that are known to the robot.

To summarize, in this thesis we aim to resolve semantic and visual ambiguities to guide a robot's search for described objects specified in user instructions. With semantic disambiguation, we strive to find the described objects' locations across an entire household, using object semantics to form clarifying questions when there are ambiguities. After identifying object locations, with visual disambiguation, we strive to identify the specified object among several similar objects in the same space. To achieve this, we propose a multi-stage approach where the robot first identifies the objects that match the user's description and, if there are several, asks clarifying follow-up questions by describing each potential target object through its spatial relations to other objects. Our results emphasize the importance of semantic and visual disambiguation for successful task completion and human-robot collaboration.

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2023. p. 45
Series
TRITA-EECS-AVL ; 2023:16
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-324232 (URN)
978-91-8040-491-4 (ISBN)
Public defence
2023-03-17, Zoom: https://kth-se.zoom.us/j/66504888477, F3, Lindstedtsvägen 26, Stockholm, 14:00 (English)
Opponent
Supervisors
Note

QC 20230223

Available from: 2023-02-23. Created: 2023-02-23. Last updated: 2023-02-28. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Dogan, Fethiye Irmak; Torre, Ilaria; Leite, Iolanda

Search in DiVA

By author/editor
Dogan, Fethiye Irmak; Torre, Ilaria; Leite, Iolanda
By organisation
Robotics, Perception and Learning, RPL
Robotics; Human Computer Interaction
