Robots That Understand Natural Language Instructions and Resolve Ambiguities
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-1733-7019
2023 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Verbal communication is a key challenge in human-robot interaction. For effective verbal interaction, robots must understand natural language instructions and clarify ambiguous user requests. In real-world environments, instructions can be ambiguous for many reasons. For instance, when a user asks the robot to find and bring 'the porcelain mug', the mug might be located in the kitchen cabinet or on the dining room table, depending on whether it is clean or full (semantic ambiguities). Additionally, there can be multiple mugs in the same location, and the robot can disambiguate them by asking follow-up questions based on their distinguishing features, such as their color or spatial relations to other objects (visual ambiguities).

Previous work on resolving ambiguities has addressed this problem by only disambiguating the objects in the robot's current view, without considering objects outside the robot's field of view. To fill this gap and resolve semantic ambiguities caused by objects possibly being located in multiple places, we present a novel approach that reasons about the objects' semantic properties. On the other hand, when dealing with ambiguous instructions caused by multiple similar objects in the same location, most existing systems ask users to repeat their requests, assuming that the robot is familiar with all of the objects in the environment. To address this limitation and resolve visual ambiguities, we present an interactive system that asks follow-up clarification questions to disambiguate the described objects, using the information the robot could understand from the request and the objects in the environment that are known to the robot.

In summary, in this thesis we aim to resolve semantic and visual ambiguities to guide a robot's search for objects described in user instructions. With semantic disambiguation, we aim to find the described objects' locations across an entire household by leveraging object semantics to form clarifying questions when ambiguities arise. After identifying object locations, with visual disambiguation, we aim to identify the specified object among multiple similar objects located in the same space. To achieve this, we propose a multi-stage approach in which the robot first identifies the objects that fit the user's description, and if there are multiple objects, the robot generates clarification questions by describing each potential target object with its spatial relations to other objects. Our results emphasize the significance of semantic and visual disambiguation for successful task completion and human-robot collaboration.
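To make the multi-stage idea above concrete, here is a minimal, self-contained Python sketch of a semantic-then-visual disambiguation step. The scene contents, matching rules, and question templates are hypothetical illustrations chosen for this example; they are not the models or data used in the thesis.

# A minimal sketch of a semantic -> visual disambiguation flow.
# All scene data, keyword checks, and question templates are invented
# for illustration; the thesis uses learned object semantics instead.

from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str       # object category, e.g. "mug"
    location: str   # room or furniture it rests on
    color: str
    landmark: str   # a nearby object used for spatial descriptions

def resolve(request: str, scene: list[SceneObject]) -> str:
    # Stage 1: semantic disambiguation -- which locations could hold the object?
    matches = [o for o in scene if o.name in request]
    located = [o for o in matches if o.location in request] or matches
    locations = sorted({o.location for o in located})
    if len(locations) > 1:
        options = " or the ".join(locations)
        return f"Do you mean the one at the {options}?"

    # Stage 2: visual disambiguation -- several similar objects in one place.
    described = [o for o in located if o.color in request] or located
    if len(described) > 1:
        o = described[0]
        return f"Do you mean the {o.color} {o.name} next to the {o.landmark}?"
    if described:
        o = described[0]
        return f"Fetching the {o.color} {o.name} from the {o.location}."
    return "I could not find that object; could you describe it differently?"

scene = [
    SceneObject("mug", "kitchen cabinet", "white", "teapot"),
    SceneObject("mug", "dining table", "white", "laptop"),
    SceneObject("mug", "dining table", "blue", "fruit bowl"),
]
print(resolve("bring the porcelain mug", scene))            # asks about location
print(resolve("bring the blue mug on the dining table", scene))  # fully resolved

In this toy version the stage boundaries are plain keyword checks; the thesis instead leverages object semantics and spatial relations to other objects for the corresponding semantic and visual stages.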

Abstract [sv]

Verbal communication is a key challenge in human-robot interaction. To achieve effective verbal interaction, it is crucial that a robot understands instructions in everyday language and can have ambiguous user requests clarified. In the real world, instructions can be ambiguous and hard to interpret for many reasons. For example, when a user asks a robot to find and fetch 'the porcelain mug', the mug could be either in the kitchen cabinet or on the dining table, depending on whether it is clean or full (semantic ambiguities). In addition, there may be several mugs in the same place, and the robot may need to disambiguate them by asking follow-up questions based on their distinguishing features, such as color or spatial relations to other objects (visual ambiguities).

When resolving ambiguities, previous work has addressed this problem by only disambiguating the objects in the robot's current view, without considering objects outside the robot's field of view. To resolve semantic ambiguities caused by objects possibly being located in several places, we present a new approach in which we reason about the objects' semantic properties. On the other hand, when handling ambiguous instructions caused by several similar objects in the same place, most existing systems ask users to repeat their requests, assuming that the robot is familiar with all objects in the environment. To address this limitation and resolve visual ambiguities, we present an interactive system that asks follow-up clarifications to disambiguate the described objects using the information the robot could understand from the request and the objects in the environment that are known to the robot.

To summarize, in this thesis we aim to resolve semantic and visual ambiguities in order to guide a robot's search for described objects specified in user instructions. With semantic disambiguation, we aim to find the described objects' locations across an entire household by using object semantics to form clarifying questions when ambiguities arise. After identifying object locations, with visual disambiguation, we aim to identify the specified object among several similar objects located in the same space. To achieve this, we propose a multi-stage approach in which the robot first identifies the objects that fit the user's description, and if there are several objects, the robot asks follow-up questions to clarify, describing each potential target object with its spatial relations to other objects. Our results underline the importance of semantic and visual disambiguation for successful task completion and human-robot collaboration.

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2023, p. 45
Series
TRITA-EECS-AVL ; 2023:16
National Category
Robotics and automation
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-324232
ISBN: 978-91-8040-491-4 (print)
OAI: oai:DiVA.org:kth-324232
DiVA, id: diva2:1738972
Public defence
2023-03-17, Zoom: https://kth-se.zoom.us/j/66504888477, F3, Lindstedtsvägen 26, Stockholm, 14:00 (English)
Note

QC 20230223

Available from: 2023-02-23. Created: 2023-02-23. Last updated: 2025-02-09. Bibliographically approved.
List of papers
1. Semantically-Driven Disambiguation for Human-Robot Interaction
(English) Article in journal (Other academic), Submitted
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-324193 (URN)
Note

QC 20230227

Available from: 2023-02-22. Created: 2023-02-22. Last updated: 2025-02-07. Bibliographically approved.
2. Leveraging Explainability for Understanding Object Descriptions in Ambiguous 3D Environments
2023 (English). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 9. Article in journal (Refereed). Published.
Place, publisher, year, edition, pages
Frontiers Media SA, 2023
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-324198 (URN)
10.3389/frobt.2022.937772 (DOI)
000922060000001 ()
36704241 (PubMedID)
2-s2.0-85146984376 (Scopus ID)
Funder
Swedish Research Council, 2017-05189
NordForsk, S-FACTOR project
KTH Royal Institute of Technology, Digital Futures Research Center
Knut and Alice Wallenberg Foundation, Wallenberg AI, Autonomous Systems and Software Program (WASP)
Swedish Foundation for Strategic Research, SSF FFL18-019
KTH Royal Institute of Technology, Vinnova Competence Center for Trustworthy Edge Computing Systems and Applications
Note

QC 20230320

Available from: 2023-02-22. Created: 2023-02-22. Last updated: 2025-02-07. Bibliographically approved.
3. Learning to Generate Unambiguous Spatial Referring Expressions for Real-World Environments
2019 (English). In: IEEE International Conference on Intelligent Robots and Systems, Institute of Electrical and Electronics Engineers (IEEE), 2019, p. 4992-4999. Conference paper, Published paper (Refereed).
Abstract [en]

Referring to objects in a natural and unambiguous manner is crucial for effective human-robot interaction. Previous research on learning-based referring expressions has focused primarily on comprehension tasks, while generating referring expressions is still mostly limited to rule-based methods. In this work, we propose a two-stage approach that relies on deep learning for estimating spatial relations to describe an object naturally and unambiguously with a referring expression. We compare our method to the state-of-the-art algorithm in ambiguous environments (e.g., environments that include very similar objects with similar relationships). We show that our method generates referring expressions that people find to be more accurate (30% better) and would prefer to use (32% more often).
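As a toy illustration of the two-stage idea, the sketch below picks a landmark and a spatial relation that apply to the target but to no other object of the same category. The geometric rules are crude heuristics standing in for the learned spatial-relation estimation the paper describes, and all object data and names are invented for this example.

# Toy unambiguous referring-expression generator. The relation() heuristic
# is a stand-in for a learned spatial-relation model; scene data is made up.

from dataclasses import dataclass

@dataclass
class Box:
    name: str
    category: str
    x: float  # center x in image coordinates
    y: float  # center y (larger = lower in the image)

def relation(a: Box, b: Box) -> str:
    """Very rough 2D relation of a with respect to b."""
    dx, dy = a.x - b.x, a.y - b.y
    if abs(dx) > abs(dy):
        return "to the right of" if dx > 0 else "to the left of"
    return "below" if dy > 0 else "above"

def refer(target: Box, scene: list[Box]) -> str:
    distractors = [o for o in scene if o is not target and o.category == target.category]
    landmarks = [o for o in scene if o.category != target.category]
    for lm in landmarks:
        rel = relation(target, lm)
        # Keep the relation only if no same-category distractor shares it.
        if all(relation(d, lm) != rel for d in distractors):
            return f"the {target.category} {rel} the {lm.category}"
    return f"the {target.category}"  # fall back to an ambiguous expression

scene = [
    Box("mug_a", "mug", 100, 200),
    Box("mug_b", "mug", 300, 200),
    Box("laptop", "laptop", 250, 210),
]
print(refer(scene[0], scene))  # -> "the mug to the left of the laptop"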

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2019
Keywords
Deep learning, Intelligent robots, Comprehension tasks, Generating referring expressions, Real world environments, Referring expressions, Rule-based method, Spatial relations, State-of-the-art algorithms, Two stage approach, Human robot interaction
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-274739 (URN)
10.1109/IROS40897.2019.8968510 (DOI)
000544658404013 ()
2-s2.0-85081154190 (Scopus ID)
Conference
2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019, 3-8 November 2019, Macau, China
Note

QC 20200626

Part of ISBN 9781728140049

Available from: 2020-06-26. Created: 2020-06-26. Last updated: 2024-10-25. Bibliographically approved.
4. The impact of adding perspective-taking to spatial referencing during human-robot interaction
2020 (English). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 134, article id 103654. Article in journal (Refereed). Published.
Abstract [en]

For effective verbal communication in collaborative tasks, robots need to account for the different perspectives of their human partners when referring to objects in a shared space. For example, when a robot helps its partner find correct pieces while assembling furniture, it needs to understand how its collaborator perceives the world and refer to objects accordingly. In this work, we propose a method to endow robots with perspective-taking abilities while spatially referring to objects. To examine the impact of our proposed method, we report the results of a user study showing that when the objects are spatially described from the users' perspectives, participants take less time to find the referred objects, find the correct objects more often and consider the task easier.
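A minimal sketch of the geometric core of perspective-taking follows: an object's position is expressed in the partner's reference frame before the robot chooses between "left" and "right". The frame convention, function names, and coordinates are assumptions made for this illustration, not the paper's implementation.

# Express a world-frame point in the partner's frame (+x forward, +y left)
# before deciding which side to mention. Numbers below are illustrative.

import math

def to_partner_frame(obj_xy, partner_xy, partner_heading_rad):
    """Rotate/translate a world-frame point into the partner's frame."""
    dx = obj_xy[0] - partner_xy[0]
    dy = obj_xy[1] - partner_xy[1]
    c, s = math.cos(-partner_heading_rad), math.sin(-partner_heading_rad)
    return (c * dx - s * dy, s * dx + c * dy)

def side_for_partner(obj_xy, partner_xy, partner_heading_rad) -> str:
    _, y = to_partner_frame(obj_xy, partner_xy, partner_heading_rad)
    return "on your left" if y > 0 else "on your right"

# Robot and partner face each other across a table: the same piece sits on
# the robot's left but on the partner's right.
print(side_for_partner(obj_xy=(1.0, 0.5), partner_xy=(2.0, 0.0),
                       partner_heading_rad=math.pi))  # -> "on your right"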

Place, publisher, year, edition, pages
Elsevier, 2020
Keywords
Perspective-taking, Spatial referring expressions
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-287782 (URN)
10.1016/j.robot.2020.103654 (DOI)
000586017500010 ()
2-s2.0-85095450417 (Scopus ID)
Note

QC 20210126

Available from: 2021-01-26. Created: 2021-01-26. Last updated: 2023-02-23. Bibliographically approved.
5. Asking Follow-Up Clarifications to Resolve Ambiguities in Human-Robot Conversation
2022 (English). In: ACM/IEEE International Conference on Human-Robot Interaction, IEEE Computer Society, 2022, p. 461-469. Conference paper, Published paper (Refereed).
Abstract [en]

When a robot aims to comprehend its human partner's request by identifying the referenced objects in Human-Robot Conversation, ambiguities can occur because the environment might contain many similar objects or the objects described in the request might be unknown to the robot. In the case of ambiguities, most systems ask users to repeat their request, which assumes that the robot is familiar with all of the objects in the environment. This assumption might lead to task failure, especially in complex real-world environments. In this paper, we address this challenge by presenting an interactive system that asks for follow-up clarifications to disambiguate the described objects using the pieces of information that the robot could understand from the request and the objects in the environment that are known to the robot. To evaluate our system while disambiguating the referenced objects, we conducted a user study with 63 participants. We analyzed the interactions when the robot asked for clarifications and when it asked users to redescribe the same object. Our results show that generating follow-up clarification questions helped the robot correctly identify the described objects with fewer attempts (i.e., conversational turns). Also, when people were asked clarification questions, they perceived the task as easier, and they evaluated the task understanding and competence of the robot as higher. Our code and anonymized dataset are publicly available: https://github.com/IrmakDogan/Resolving-Ambiguities.
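The following sketch shows one simple way such a clarification loop could be structured: rather than asking the user to repeat the whole request, the robot asks about a single attribute that distinguishes the remaining candidates. The attribute names, scene contents, and question template are hypothetical and are not the system evaluated in the paper.

# Toy follow-up clarification step: filter candidates by what was understood
# from the request, then ask about the most discriminative unknown attribute.

SCENE = [
    {"category": "mug", "color": "white", "location": "shelf"},
    {"category": "mug", "color": "blue", "location": "shelf"},
    {"category": "mug", "color": "white", "location": "table"},
]

def clarify(understood: dict) -> str:
    """'understood' holds the attributes extracted from the user's request."""
    candidates = [o for o in SCENE if all(o[k] == v for k, v in understood.items())]
    if not candidates:
        return "I don't know that object; could you describe it differently?"
    if len(candidates) == 1:
        o = candidates[0]
        return f"Got it, the {o['color']} {o['category']} on the {o['location']}."
    # Ask about the unknown attribute whose values best split the candidates.
    unknown = [k for k in candidates[0] if k not in understood]
    if not unknown:
        return "I see several matching objects; could you point at the one you mean?"
    best = max(unknown, key=lambda k: len({o[k] for o in candidates}))
    options = " or ".join(sorted({o[best] for o in candidates}))
    return f"Which {best} do you mean: {options}?"

print(clarify({"category": "mug"}))                   # asks a follow-up question
print(clarify({"category": "mug", "color": "blue"}))  # request is resolved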

Place, publisher, year, edition, pages
IEEE Computer Society, 2022
Keywords
Follow-Up Clarifications, Referring Expressions, Resolving Ambiguities, Clarification, Robots, Follow up, Follow-up clarification, Human robots, Interactive system, Real world environments, Task failures, User study, Clarifiers
National Category
Robotics and automation; Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-322409 (URN)
10.1109/HRI53351.2022.9889368 (DOI)
000869793600051 ()
2-s2.0-85127064182 (Scopus ID)
Conference
17th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2022, 7 March 2022 through 10 March 2022
Note

QC 20221214

Available from: 2022-12-14. Created: 2022-12-14. Last updated: 2025-02-05. Bibliographically approved.

Open Access in DiVA

Kappa (8266 kB), 854 downloads
File information
File name: FULLTEXT01.pdf
File size: 8266 kB
Checksum SHA-512: 2ed6f1d219e9730c7a7a5bb2befd0a5c746b85ac325cd18ddb17da9c41e154b113176f63e9e11f2d59eb8f3abf2033af63e0d26b22fd2de6253b44fc85acbd6b
Type: fulltext
Mimetype: application/pdf


Authority records

Dogan, Fethiye Irmak
