Robots That Understand Natural Language Instructions and Resolve Ambiguities
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-1733-7019
2023 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Verbal communication is a key challenge in human-robot interaction. For effective verbal interaction, understanding natural language instructions and clarifying ambiguous user requests are crucial for robots. In real-world environments, the instructions can be ambiguous for many reasons. For instance, when a user asks the robot to find and bring 'the porcelain mug', the mug might be located in the kitchen cabinet or on the dining room table, depending on whether it is clean or full (semantic ambiguities). Additionally, there can be multiple mugs in the same location, and the robot can disambiguate them by asking follow-up questions based on their distinguishing features, such as their color or spatial relations to other objects (visual ambiguities).

Previous work on resolving ambiguities has addressed this problem by disambiguating only the objects in the robot's current view, without considering those outside the robot's field of view. To fill this gap and resolve semantic ambiguities caused by objects possibly being located in multiple places, we present a novel approach that reasons about their semantic properties. On the other hand, when dealing with ambiguous instructions caused by multiple similar objects in the same location, most existing systems ask users to repeat their requests, assuming that the robot is familiar with all of the objects in the environment. To address this limitation and resolve visual ambiguities, we present an interactive system that asks follow-up clarification questions to disambiguate the described objects, using the information the robot can extract from the request together with the objects in the environment that are known to it.

In summary, in this thesis we aim to resolve semantic and visual ambiguities to guide a robot's search for objects described in user instructions. With semantic disambiguation, we aim to find the described objects' locations across an entire household by leveraging object semantics to form clarifying questions when there are ambiguities. After identifying object locations, with visual disambiguation, we aim to identify the specified object among multiple similar objects located in the same space. To achieve this, we suggest a multi-stage approach in which the robot first identifies the objects that fit the user's description, and, if several remain, generates clarification questions by describing each potential target object through its spatial relations to other objects. Our results emphasize the significance of semantic and visual disambiguation for successful task completion and human-robot collaboration.
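The multi-stage idea summarized above can be illustrated with a toy loop. This is a hypothetical sketch, not the thesis implementation: the attribute dictionaries, the `ask` callback, and the filtering rule are all assumptions made for illustration.

```python
# Toy sketch of attribute-based disambiguation: filter candidate objects by
# the attributes mentioned in the request; while several candidates remain,
# ask a clarifying question about an attribute that still differs among them.

def disambiguate(description, candidates, ask):
    """Return the single candidate matching `description`, asking the
    `ask(attribute, options)` callback to resolve remaining ambiguity."""
    matching = [c for c in candidates
                if all(c.get(k) == v for k, v in description.items())]
    while len(matching) > 1:
        # pick an attribute, not already constrained, that splits the set
        attr = next(a for a in matching[0]
                    if a not in description
                    and len({c.get(a) for c in matching}) > 1)
        answer = ask(attr, sorted({c.get(attr) for c in matching}))
        matching = [c for c in matching if c.get(attr) == answer]
    return matching[0] if matching else None

# Example: two porcelain mugs in the same place, distinguished by color.
mugs = [
    {"type": "mug", "material": "porcelain", "color": "red",
     "relation": "next to the kettle"},
    {"type": "mug", "material": "porcelain", "color": "blue",
     "relation": "on the tray"},
]
target = disambiguate({"type": "mug", "material": "porcelain"},
                      mugs, ask=lambda attr, options: "blue")
print(target["color"])  # -> blue
```

The sketch only shows the control flow; the thesis grounds both the candidate set and the questions in learned semantic and visual models.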


Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2023. p. 45
Series
TRITA-EECS-AVL ; 2023:16
HSV category
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-324232
ISBN: 978-91-8040-491-4 (print)
OAI: oai:DiVA.org:kth-324232
DiVA, id: diva2:1738972
Public defence
2023-03-17, Zoom: https://kth-se.zoom.us/j/66504888477, F3, Lindstedtsvägen 26, Stockholm, 14:00 (English)
Opponent
Supervisor
Note

QC 20230223

Available from: 2023-02-23 Created: 2023-02-23 Last updated: 2025-02-09 Bibliographically approved
List of papers
1. Semantically-Driven Disambiguation for Human-Robot Interaction
(English) In: Article in journal (Other academic) Submitted
HSV category
Identifiers
urn:nbn:se:kth:diva-324193 (URN)
Note

QC 20230227

Available from: 2023-02-22 Created: 2023-02-22 Last updated: 2025-02-07 Bibliographically approved
2. Leveraging Explainability for Understanding Object Descriptions in Ambiguous 3D Environments
2023 (English) In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 9. Article in journal (Refereed) Published
Place, publisher, year, edition, pages
Frontiers Media SA, 2023
HSV category
Identifiers
urn:nbn:se:kth:diva-324198 (URN)
10.3389/frobt.2022.937772 (DOI)
000922060000001 ()
36704241 (PubMedID)
2-s2.0-85146984376 (Scopus ID)
Research funder
Swedish Research Council, 2017-05189
NordForsk, S-FACTOR project
KTH Royal Institute of Technology, Digital Futures Research Center
Knut and Alice Wallenberg Foundation, Wallenberg AI, Autonomous Systems and Software Program (WASP)
Swedish Foundation for Strategic Research, SSF FFL18-019
KTH Royal Institute of Technology, Vinnova Competence Center for Trustworthy Edge Computing Systems and Applications
Note

QC 20230320

Available from: 2023-02-22 Created: 2023-02-22 Last updated: 2025-02-07 Bibliographically approved
3. Learning to Generate Unambiguous Spatial Referring Expressions for Real-World Environments
2019 (English) In: IEEE International Conference on Intelligent Robots and Systems, Institute of Electrical and Electronics Engineers (IEEE), 2019, pp. 4992-4999. Conference paper, published paper (Refereed)
Abstract [en]

Referring to objects in a natural and unambiguous manner is crucial for effective human-robot interaction. Previous research on learning-based referring expressions has focused primarily on comprehension tasks, while generating referring expressions is still mostly limited to rule-based methods. In this work, we propose a two-stage approach that relies on deep learning for estimating spatial relations to describe an object naturally and unambiguously with a referring expression. We compare our method to the state-of-the-art algorithm in ambiguous environments (e.g., environments that include very similar objects with similar relationships). We show that our method generates referring expressions that people find to be more accurate (30% better) and would prefer to use (32% more often).
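The two-stage structure described in this abstract can be sketched with a toy example. This is an assumption-laden illustration, not the paper's method: the paper learns spatial relations with deep networks, whereas here a crude geometric rule stands in for stage one, and stage two picks a relation that applies to the target but to no distractor.

```python
# Toy two-stage referring-expression generation:
#   stage 1 - estimate a spatial relation between an object and a landmark
#   stage 2 - choose a relation that uniquely identifies the target

def spatial_relation(obj, landmark):
    # stage-1 stand-in: left/right decided purely from x-coordinates
    return "left of" if obj["x"] < landmark["x"] else "right of"

def refer(target, distractors, landmarks):
    # stage 2: return the first expression no distractor also satisfies
    for lm in landmarks:
        rel = spatial_relation(target, lm)
        if all(spatial_relation(d, lm) != rel for d in distractors):
            return f'the {target["name"]} {rel} the {lm["name"]}'
    return f'the {target["name"]}'  # no disambiguating relation found

# Two identical mugs; only their position relative to the laptop differs.
scene_landmarks = [{"name": "laptop", "x": 5}]
target = {"name": "mug", "x": 2}
other_mug = {"name": "mug", "x": 8}
print(refer(target, [other_mug], scene_landmarks))
# -> the mug left of the laptop
```

The point of the sketch is only the division of labor: relation estimation is separated from expression selection, so a learned relation model can replace the geometric rule without changing stage two.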

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2019
Keywords
Deep learning, Intelligent robots, Comprehension tasks, Generating referring expressions, Real world environments, Referring expressions, Rule-based method, Spatial relations, State-of-the-art algorithms, Two stage approach, Human robot interaction
HSV category
Identifiers
urn:nbn:se:kth:diva-274739 (URN)
10.1109/IROS40897.2019.8968510 (DOI)
000544658404013 ()
2-s2.0-85081154190 (Scopus ID)
Conference
2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019, 3-8 November 2019, Macau, China
Note

QC 20200626

Part of ISBN 9781728140049

Available from: 2020-06-26 Created: 2020-06-26 Last updated: 2024-10-25 Bibliographically approved
4. The impact of adding perspective-taking to spatial referencing during human-robot interaction
2020 (English) In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 134, article id 103654. Article in journal (Refereed) Published
Abstract [en]

For effective verbal communication in collaborative tasks, robots need to account for the different perspectives of their human partners when referring to objects in a shared space. For example, when a robot helps its partner find correct pieces while assembling furniture, it needs to understand how its collaborator perceives the world and refer to objects accordingly. In this work, we propose a method to endow robots with perspective-taking abilities while spatially referring to objects. To examine the impact of our proposed method, we report the results of a user study showing that when the objects are spatially described from the users' perspectives, participants take less time to find the referred objects, find the correct objects more often and consider the task easier.
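The core idea of this paper, describing an object from the partner's viewpoint rather than the robot's, can be sketched with a small coordinate transform. This is a hypothetical illustration under simplifying assumptions (a 2D world, a known partner pose, a left/right-only vocabulary), not the paper's implementation.

```python
# Toy perspective-taking: express an object's position in the human
# partner's reference frame before choosing the spatial wording.
import math

def from_partner_view(obj_xy, partner_xy, partner_heading):
    """Translate and rotate a world-frame point into the partner's frame
    (heading in radians, 0 = partner facing the +x axis)."""
    dx, dy = obj_xy[0] - partner_xy[0], obj_xy[1] - partner_xy[1]
    c, s = math.cos(-partner_heading), math.sin(-partner_heading)
    return (dx * c - dy * s, dx * s + dy * c)

def describe(obj_xy, partner_xy, partner_heading):
    # in the partner's frame, +y is to the partner's left
    _, lateral = from_partner_view(obj_xy, partner_xy, partner_heading)
    side = "your left" if lateral > 0 else "your right"
    return f"the piece on {side}"

# Partner stands at the origin facing +x; the piece lies at (1, -1),
# i.e. to the partner's right even if it is to the robot's left.
print(describe((1, -1), (0, 0), 0.0))  # -> the piece on your right
```

The design point the user study tests is exactly this frame choice: the same geometry yields different words depending on whose frame anchors "left" and "right".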

Place, publisher, year, edition, pages
Elsevier, 2020
Keywords
Perspective-taking, Spatial referring expressions
HSV category
Identifiers
urn:nbn:se:kth:diva-287782 (URN)
10.1016/j.robot.2020.103654 (DOI)
000586017500010 ()
2-s2.0-85095450417 (Scopus ID)
Note

QC 20210126

Available from: 2021-01-26 Created: 2021-01-26 Last updated: 2023-02-23 Bibliographically approved
5. Asking Follow-Up Clarifications to Resolve Ambiguities in Human-Robot Conversation
2022 (English) In: ACM/IEEE International Conference on Human-Robot Interaction, IEEE Computer Society, 2022, pp. 461-469. Conference paper, published paper (Refereed)
Abstract [en]

When a robot aims to comprehend its human partner's request by identifying the referenced objects in human-robot conversation, ambiguities can occur because the environment might contain many similar objects or the objects described in the request might be unknown to the robot. In the case of ambiguities, most systems ask users to repeat their request, which assumes that the robot is familiar with all of the objects in the environment. This assumption might lead to task failure, especially in complex real-world environments. In this paper, we address this challenge by presenting an interactive system that asks for follow-up clarifications to disambiguate the described objects, using the information that the robot could understand from the request and the objects in the environment that are known to the robot. To evaluate our system while disambiguating the referenced objects, we conducted a user study with 63 participants. We analyzed the interactions when the robot asked for clarifications and when it asked users to redescribe the same object. Our results show that generating follow-up clarification questions helped the robot correctly identify the described objects with fewer attempts (i.e., conversational turns). Also, when people were asked clarification questions, they perceived the task as easier, and they evaluated the task understanding and competence of the robot as higher. Our code and anonymized dataset are publicly available: https://github.com/IrmakDogan/Resolving-Ambiguities.

Place, publisher, year, edition, pages
IEEE Computer Society, 2022
Keywords
Follow-Up Clarifications, Referring Expressions, Resolving Ambiguities, Clarification, Robots, Follow up, Follow-up clarification, Human robots, Interactive system, Real world environments, Task failures, User study, Clarifiers
HSV category
Identifiers
urn:nbn:se:kth:diva-322409 (URN)
10.1109/HRI53351.2022.9889368 (DOI)
000869793600051 ()
2-s2.0-85127064182 (Scopus ID)
Conference
17th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2022, 7 March 2022 through 10 March 2022
Note

QC 20221214

Available from: 2022-12-14 Created: 2022-12-14 Last updated: 2025-02-05 Bibliographically approved

Open Access in DiVA

Kappa (8266 kB) 1082 downloads
File information
File FULLTEXT01.pdf File size 8266 kB Checksum SHA-512
2ed6f1d219e9730c7a7a5bb2befd0a5c746b85ac325cd18ddb17da9c41e154b113176f63e9e11f2d59eb8f3abf2033af63e0d26b22fd2de6253b44fc85acbd6b
Type fulltext Mimetype application/pdf

Other links

zoom link for online defense

Person

Dogan, Fethiye Irmak

Total: 1082 downloads
The number of downloads is the sum of all downloads of all full texts. It may, for example, include earlier versions that are no longer available.
