Dogan, Fethiye Irmak (ORCID iD: orcid.org/0000-0002-1733-7019)
Publications (10 of 16)
Yadollahi, E., Romeo, M., Dogan, F. I., Johal, W., De Graaf, M., Levy-Tzedek, S. & Leite, I. (2024). Explainability for Human-Robot Collaboration. In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 19th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2024, Boulder, United States of America, Mar 11 2024 - Mar 15 2024 (pp. 1364-1366). Association for Computing Machinery (ACM)
2024 (English). In: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2024, pp. 1364-1366. Conference paper, published paper (peer-reviewed)
Abstract [en]

In human-robot collaboration, explainability bridges the communication gap between complex machine functionalities and humans. An active area of investigation in robotics and AI is understanding and generating explanations that can enhance collaboration and mutual understanding between humans and machines. A key to achieving such seamless collaborations is understanding end-users, whether naive or expert, and tailoring explanation features that are intuitive, user-centred, and contextually relevant. Advancing this topic requires not only modelling humans' expectations for generating explanations but also developing metrics to evaluate generated explanations and assess how effectively autonomous systems communicate their intentions, actions, and decision-making rationale. This workshop is designed to tackle the nuanced role of explainability in enhancing efficiency, safety, and trust in human-robot collaboration. It aims to initiate discussions on the importance of generating and evaluating explainability features developed in autonomous agents. Simultaneously, it addresses various challenges, including bias in explainability, the downsides of explainability, and deception in human-robot interaction.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Keywords
Explainable Robotics, Human-Centered Robot Explanations, XAI
HSV category
Identifiers
urn:nbn:se:kth:diva-344807 (URN); 10.1145/3610978.3638154 (DOI); 001255070800301; 2-s2.0-85188063647 (Scopus ID)
Conference
19th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2024, Boulder, United States of America, Mar 11 2024 - Mar 15 2024
Note

QC 20240409

Part of ISBN 9798400703232

Available from: 2024-03-28. Created: 2024-03-28. Last updated: 2024-10-11. Bibliographically checked.
Hadjiantonis, G., Gillet, S., Vazquez, M., Leite, I. & Dogan, F. I. (2024). Let's move on: Topic Change in Robot-Facilitated Group Discussions. In: 2024 33rd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2024. Paper presented at 33rd IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN) - Embracing Human-Centered HRI, AUG 26-30, 2024, Pasadena, CA (pp. 2087-2094). Institute of Electrical and Electronics Engineers (IEEE)
2024 (English). In: 2024 33rd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024, pp. 2087-2094. Conference paper, published paper (peer-reviewed)
Abstract [en]

Robot-moderated group discussions have the potential to facilitate engaging and productive interactions among human participants. Previous work on topic management in conversational agents has predominantly focused on human engagement and topic personalization, with the agent having an active role in the discussion. Also, studies have shown the usefulness of including robots in groups, yet further exploration is still needed for robots to learn when to change the topic while facilitating discussions. Accordingly, our work investigates the suitability of machine-learning models and audiovisual non-verbal features in predicting appropriate topic changes. We utilized interactions between a robot moderator and human participants, which we annotated and used for extracting acoustic and body language-related features. We provide a detailed analysis of the performance of machine learning approaches using sequential and non-sequential data with different sets of features. The results indicate promising performance in classifying inappropriate topic changes, outperforming rule-based approaches. Additionally, acoustic features alone exhibited performance and robustness comparable to the complete set of multimodal features. Our annotated data is publicly available at https://github.com/ghadj/topic-change-robot-discussions-data-2024.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Series
IEEE RO-MAN, ISSN 1944-9445
HSV category
Identifiers
urn:nbn:se:kth:diva-358781 (URN); 10.1109/RO-MAN60168.2024.10731390 (DOI); 001348918600276; 2-s2.0-85209792264 (Scopus ID)
Conference
33rd IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN) - Embracing Human-Centered HRI, AUG 26-30, 2024, Pasadena, CA
Note

Part of ISBN 979-8-3503-7503-9; 979-8-3503-7502-2

QC 20250121

Available from: 2025-01-21. Created: 2025-01-21. Last updated: 2025-01-21. Bibliographically checked.
Bartoli, E., Dogan, F. I. & Leite, I. (2023). Contextualized Knowledge Graph Embeddings for Activity Prediction in Service Robotics. Paper presented at Workshop on Semantic Scene Understanding for Human-Robot Interaction, ACM/IEEE International Conference on Human Robot Interaction.
2023 (English). Conference paper, oral presentation only (peer-reviewed)
HSV category
Identifiers
urn:nbn:se:kth:diva-324724 (URN)
Conference
Workshop on Semantic Scene Understanding for Human-Robot Interaction, ACM/IEEE International Conference on Human Robot Interaction
Note

QC 20230314

Available from: 2023-03-13. Created: 2023-03-13. Last updated: 2025-02-09. Bibliographically checked.
Dogan, F. I., Melsión, G. I. & Leite, I. (2023). Leveraging Explainability for Understanding Object Descriptions in Ambiguous 3D Environments. Frontiers in Robotics and AI, 9
2023 (English). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 9. Journal article (peer-reviewed). Published
Place, publisher, year, edition, pages
Frontiers Media SA, 2023
HSV category
Identifiers
urn:nbn:se:kth:diva-324198 (URN); 10.3389/frobt.2022.937772 (DOI); 000922060000001; 36704241 (PubMedID); 2-s2.0-85146984376 (Scopus ID)
Research funders
Swedish Research Council, 2017–05189; NordForsk, S-FACTOR project; KTH Royal Institute of Technology, Digital Futures Research Center; Knut and Alice Wallenberg Foundation, Wallenberg AI, Autonomous Systems and Software Program (WASP); Swedish Foundation for Strategic Research, SSF FFL18-019; KTH Royal Institute of Technology, Vinnova Competence Center for Trustworthy Edge Computing Systems and Applications
Note

QC 20230320

Available from: 2023-02-22. Created: 2023-02-22. Last updated: 2025-02-07. Bibliographically checked.
Dogan, F. I. (2023). Robots That Understand Natural Language Instructions and Resolve Ambiguities. (Doctoral dissertation). KTH Royal Institute of Technology
2023 (English). Doctoral thesis, with papers (other academic)
Abstract [en]

Verbal communication is a key challenge in human-robot interaction. For effective verbal interaction, understanding natural language instructions and clarifying ambiguous user requests are crucial for robots. In real-world environments, the instructions can be ambiguous for many reasons. For instance, when a user asks the robot to find and bring 'the porcelain mug', the mug might be located in the kitchen cabinet or on the dining room table, depending on whether it is clean or full (semantic ambiguities). Additionally, there can be multiple mugs in the same location, and the robot can disambiguate them by asking follow-up questions based on their distinguishing features, such as their color or spatial relations to other objects (visual ambiguities).

Previous work has addressed ambiguity resolution by only disambiguating the objects in the robot's current view, without considering those outside the robot's field of view. To fill this gap and resolve semantic ambiguities caused by objects possibly being located in multiple places, we present a novel approach that reasons about their semantic properties. On the other hand, when dealing with ambiguous instructions caused by multiple similar objects in the same location, most existing systems ask users to repeat their requests, assuming that the robot is familiar with all of the objects in the environment. To address this limitation and resolve visual ambiguities, we present an interactive system that asks follow-up clarifications to disambiguate the described objects, using the information the robot could understand from the request and the objects in the environment known to the robot.

In summary, in this thesis, we aim to resolve semantic and visual ambiguities to guide a robot's search for described objects specified in user instructions. With semantic disambiguation, we aim to find described objects' locations across an entire household by leveraging object semantics to form clarifying questions when there are ambiguities. After identifying object locations, with visual disambiguation, we aim to identify the specified object among multiple similar objects located in the same space. To achieve this, we suggest a multi-stage approach where the robot first identifies the objects that are fitting to the user's description, and if there are multiple objects, the robot generates clarification questions by describing each potential target object with its spatial relations to other objects. Our results emphasize the significance of semantic and visual disambiguation for successful task completion and human-robot collaboration.

Abstract [sv]

Verbal communication is a key challenge in human-robot interaction. To achieve effective verbal interaction, it is crucial for a robot to understand instructions in everyday language and to obtain clarification of ambiguous user requests. In the real world, instructions can be ambiguous and hard to interpret for many reasons. For example, when a user asks a robot to find and fetch "the porcelain mug", the mug may be either in the kitchen cabinet or on the dining table, depending on whether it is clean or full (semantic ambiguities). In addition, there may be several mugs in the same place, and the robot may need to disambiguate them by asking follow-up questions based on their distinguishing features, such as colour or spatial relations to other objects (visual ambiguities).

When resolving ambiguities, previous work has addressed this problem by only disambiguating the objects in the robot's current view, without focusing on those outside the robot's field of view. To resolve semantic ambiguities caused by objects possibly being located in several places, we present a novel approach that reasons about the objects' semantic properties. On the other hand, when handling ambiguous instructions caused by several similar objects in the same place, most existing systems ask users to repeat their requests, assuming that the robot is familiar with all objects in the environment. To address this limitation and resolve visual ambiguities, we present an interactive system that poses follow-up clarifications to disambiguate the described objects, using the information the robot could understand from the request and the objects in the environment known to the robot.

To summarise, in this thesis we aim to resolve semantic and visual ambiguities in order to guide a robot's search for described objects specified in user instructions. With semantic disambiguation, we aim to find the described object's location across an entire household by using object semantics to form clarifying questions when ambiguities arise. After identifying object locations, with visual disambiguation we aim to identify the specified object among several similar objects placed in the same space. To achieve this, we propose a multi-stage approach in which the robot first identifies the objects that match the user's description and, if there are several objects, asks clarifying follow-up questions describing each potential target object by its spatial relations to other objects. Our results highlight the importance of semantic and visual disambiguation for achieving successful task completion and human-robot collaboration.

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2023. 45 pp.
Series
TRITA-EECS-AVL ; 2023:16
HSV category
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-324232 (URN); 978-91-8040-491-4 (ISBN)
Public defence
2023-03-17, Zoom: https://kth-se.zoom.us/j/66504888477, F3, Lindstedtsvägen 26, Stockholm, 14:00 (English)
Opponent
Supervisor
Note

QC 20230223

Available from: 2023-02-23. Created: 2023-02-23. Last updated: 2025-02-09. Bibliographically checked.
Patel, M., Dogan, F. I., Zeng, Z., Baraka, K. & Chernova, S. (2023). Semantic Scene Understanding for Human-Robot Interaction. In: HRI 2023: Companion of the ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, Sweden, Mar 13 2023 - Mar 16 2023 (pp. 941-943). Association for Computing Machinery (ACM)
2023 (English). In: HRI 2023: Companion of the ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2023, pp. 941-943. Conference paper, published paper (peer-reviewed)
Abstract [en]

Service robots will be co-located with human users in an unstructured human-centered environment and will benefit from understanding the user's daily activities, preferences, and needs towards fully assisting them. This workshop aims to explore how abstract semantic knowledge of the user's environment can be used as a context in understanding and grounding information regarding the user's instructions, preferences, habits, and needs. While object semantics have primarily been investigated for robotics in the perception and manipulation domain, recent works have shown the benefits of semantic modeling in a Human-Robot Interaction (HRI) context toward understanding and assisting human users. This workshop focuses on semantic information that can be useful in generalizing and interpreting user instructions, modeling user activities, anticipating user needs, and making the internal reasoning processes of a robot more interpretable to a user. Therefore, the workshop builds on topics from prior workshops such as Learning in HRI, behavior adaptation for assistance, and learning from humans, and aims at facilitating cross-pollination across these domains through a common thread of utilizing abstract semantics of the physical world towards robot autonomy in assistive applications. We envision the workshop to touch on research areas such as unobtrusive learning from observations, preference learning, continual learning, enhancing the transparency of autonomous robot behavior, and user adaptation. The workshop aims to gather researchers working on these areas and provide fruitful discussions towards autonomous assistive robots that can learn and ground scene semantics for enhancing HRI.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
human-centered autonomy, robot learning, scene semantics
HSV category
Identifiers
urn:nbn:se:kth:diva-333369 (URN); 10.1145/3568294.3579960 (DOI); 001054975700211; 2-s2.0-85150420639 (Scopus ID)
Conference
18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, Sweden, Mar 13 2023 - Mar 16 2023
Note

Part of ISBN 9781450399708

QC 20230801

Available from: 2023-08-01. Created: 2023-08-01. Last updated: 2025-02-05. Bibliographically checked.
Dogan, F. I., Torre, I. & Leite, I. (2022). Asking Follow-Up Clarifications to Resolve Ambiguities in Human-Robot Conversation. In: ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 17th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2022, 7 March 2022 through 10 March 2022 (pp. 461-469). IEEE Computer Society
2022 (English). In: ACM/IEEE International Conference on Human-Robot Interaction, IEEE Computer Society, 2022, pp. 461-469. Conference paper, published paper (peer-reviewed)
Abstract [en]

When a robot aims to comprehend its human partner's request by identifying the referenced objects in Human-Robot Conversation, ambiguities can occur because the environment might contain many similar objects or the objects described in the request might be unknown to the robot. In the case of ambiguities, most of the systems ask users to repeat their request, which assumes that the robot is familiar with all of the objects in the environment. This assumption might lead to task failure, especially in complex real-world environments. In this paper, we address this challenge by presenting an interactive system that asks for follow-up clarifications to disambiguate the described objects using the pieces of information that the robot could understand from the request and the objects in the environment that are known to the robot. To evaluate our system while disambiguating the referenced objects, we conducted a user study with 63 participants. We analyzed the interactions when the robot asked for clarifications and when it asked users to redescribe the same object. Our results show that generating follow-up clarification questions helped the robot correctly identify the described objects with fewer attempts (i.e., conversational turns). Also, when people were asked clarification questions, they perceived the task as easier, and they evaluated the task understanding and competence of the robot as higher. Our code and anonymized dataset are publicly available: https://github.com/IrmakDogan/Resolving-Ambiguities.

Place, publisher, year, edition, pages
IEEE Computer Society, 2022
Keywords
Follow-Up Clarifications, Referring Expressions, Resolving Ambiguities, Clarification, Robots, Follow up, Follow-up clarification, Human robots, Interactive system, Real world environments, Task failures, User study, Clarifiers
HSV category
Identifiers
urn:nbn:se:kth:diva-322409 (URN); 10.1109/HRI53351.2022.9889368 (DOI); 000869793600051; 2-s2.0-85127064182 (Scopus ID)
Conference
17th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2022, 7 March 2022 through 10 March 2022
Note

QC 20221214

Available from: 2022-12-14. Created: 2022-12-14. Last updated: 2025-02-05. Bibliographically checked.
Panesar, A., Dogan, F. I. & Leite, I. (2022). Improving Visual Question Answering by Leveraging Depth and Adapting Explainability. In: 2022 31st IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN 2022). Paper presented at 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) - Social, Asocial, and Antisocial Robots, AUG 29-SEP 02, 2022, Napoli, ITALY (pp. 252-259). Institute of Electrical and Electronics Engineers (IEEE)
2022 (English). In: 2022 31st IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN 2022), Institute of Electrical and Electronics Engineers (IEEE), 2022, pp. 252-259. Conference paper, published paper (peer-reviewed)
Abstract [en]

During human-robot conversation, it is critical for robots to be able to answer users' questions accurately and provide a suitable explanation for why they arrive at the answer they provide. Depth is a crucial component in producing more intelligent robots that can respond correctly, as some questions might rely on spatial relations within the scene, for which 2D RGB data alone would be insufficient. Due to the lack of existing depth datasets for the task of VQA, we introduce a new dataset, VQA-SUNRGBD. When we compare our proposed model on this RGB-D dataset against the baseline VQN network on RGB data alone, we show that ours outperforms it, particularly on questions relating to depth, such as the proximity of objects and the relative positions of objects to one another. We also provide Grad-CAM activations to gain insight regarding the predictions on depth-related questions and find that our method produces better visual explanations compared to Grad-CAM on RGB data. To our knowledge, this work is the first of its kind to leverage depth and an explainability module to produce an explainable Visual Question Answering (VQA) system.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Keywords
Visual Question Answering, Leveraging Depth, Explainability
HSV category
Identifiers
urn:nbn:se:kth:diva-322304 (URN); 10.1109/RO-MAN53752.2022.9900586 (DOI); 000885903300037; 2-s2.0-85140744461 (Scopus ID)
Conference
31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) - Social, Asocial, and Antisocial Robots, AUG 29-SEP 02, 2022, Napoli, ITALY
Note

QC 20221212

Part of proceedings: ISBN 978-1-7281-8859-1

Available from: 2022-12-12. Created: 2022-12-12. Last updated: 2022-12-15. Bibliographically checked.
Iovino, M., Dogan, F. I., Leite, I. & Smith, C. (2022). Interactive Disambiguation for Behavior Tree Execution. In: 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids). Paper presented at 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids). Institute of Electrical and Electronics Engineers (IEEE)
2022 (English). In: 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), Institute of Electrical and Electronics Engineers (IEEE), 2022. Conference paper, published paper (peer-reviewed)
Abstract [en]

In recent years, robots have been used in an increasing variety of tasks, especially by small- and medium-sized enterprises. These tasks are usually fast-changing, collaborative, and take place in unpredictable environments with possible ambiguities. It is important to have methods capable of generating robot programs easily, made as general as possible by handling uncertainties. We present a system that integrates a method to learn Behavior Trees (BTs) from demonstration for pick and place tasks with a framework that uses verbal interaction to ask follow-up clarification questions to resolve ambiguities. During the execution of a task, the system asks for user input when it needs to disambiguate an object in the scene, i.e., when the targets of the task are objects of the same type that are present in multiple instances. The integrated system is demonstrated on different scenarios of a pick and place task with increasing levels of ambiguity. The code used for this paper is publicly available: https://github.com/matiov/disambiguate-BT-execution.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
HSV category
Identifiers
urn:nbn:se:kth:diva-323057 (URN); 10.1109/Humanoids53995.2022.10000088 (DOI); 000925894300011; 2-s2.0-85146320020 (Scopus ID)
Conference
2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids)
Note

QC 20230116

Available from: 2023-01-12. Created: 2023-01-12. Last updated: 2025-02-07. Bibliographically checked.
Dogan, F. I. (2021). Social Robots That Understand Natural Language Instructions and Resolve Ambiguities. In: RSS Pioneers 2021 - Held in conjunction with the main Robotics: Science and Systems (RSS) Conference, 2021. Paper presented at Robotics: Science and Systems (RSS) Pioneers Workshop 2021.
2021 (English). In: RSS Pioneers 2021 - Held in conjunction with the main Robotics: Science and Systems (RSS) Conference, 2021, 2021. Conference paper, published paper (peer-reviewed)
HSV category
Identifiers
urn:nbn:se:kth:diva-299444 (URN)
Conference
Robotics: Science and Systems (RSS) Pioneers Workshop 2021
Note

QC 20210811

Available from: 2021-08-09. Created: 2021-08-09. Last updated: 2025-02-05. Bibliographically checked.
Organisations
Identifiers
ORCID iD: orcid.org/0000-0002-1733-7019