Scalable Methods for Developing Interlocutor-aware Embodied Conversational Agents: Data Collection, Behavior Modeling, and Evaluation Methods
Jonell, Patrik
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. ORCID iD: 0000-0003-3687-6189
2022 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This work presents several methods, tools, and experiments that contribute to the development of interlocutor-aware Embodied Conversational Agents (ECAs). Interlocutor-aware ECAs take the interlocutor's behavior into consideration when generating their own non-verbal behaviors. This thesis targets the development of such adaptive ECAs by identifying and contributing to three important and related topics:

1) Data collection methods are presented, both for large-scale crowdsourced data collection and for in-lab data collection with a large number of sensors in a clinical setting. Experiments show that experts judged dialog data collected with a crowdsourcing method to be better suited for dialog generation than dialog data from other commonly used sources. 2) Methods for behavior modeling are presented, in which machine learning models generate facial gestures for ECAs; both single-speaker and interlocutor-aware generation are covered. 3) Evaluation methods are explored, and both third-party evaluation of generated gestures and interaction experiments with interlocutor-aware gesture generation are discussed. For example, an experiment investigates the social influence of a mimicking social robot. Furthermore, a method for more efficient perceptual experiments is presented. It is validated by replicating a previously conducted perceptual experiment on virtual agents, and the results show that the new method provides similar (in fact, more) insights into the data while requiring less of the evaluators' time. A second study compared subjective evaluations of generated gestures performed in the lab with evaluations performed via crowdsourcing, and found no difference between the two settings. A special focus in this thesis is on scalable methods, which make it possible to efficiently and rapidly collect interaction data from a broad range of people and to efficiently evaluate the results produced by the machine learning methods. This in turn allows fast iteration when developing the behaviors of interlocutor-aware ECAs.

Abstract [sv]

This work presents a number of methods, tools, and experiments that all contribute to the development of interlocutor-aware embodied conversational agents, i.e., agents that communicate through language, have a bodily representation (an avatar or a robot), and take the interlocutor's behaviors into account when generating their own non-verbal behaviors. This thesis aims to contribute to the development of such agents by identifying and contributing to three important areas:

1) Data collection methods, both for large-scale data collection with the help of so-called crowdworkers (a large number of people on the internet who are recruited to solve a task) and in a laboratory environment with a large number of sensors. Experiments are presented showing, for example, that dialog data collected with the help of crowdworkers was judged by a group of experts to be better from a dialog-generation perspective than other commonly used datasets for dialog generation. 2) Methods for behavior modeling, where machine learning models are used to generate facial gestures. Methods for generating facial gestures both for a single agent and for interlocutor-aware agents are presented, together with experiments validating their functionality. Furthermore, an experiment is presented that investigates an agent's social influence on its interlocutor when it imitates the interlocutor's facial gestures during conversation. 3) Evaluation methods are explored, and a method for more efficient perceptual experiments is presented. The method is evaluated by recreating a previously conducted experiment with virtual agents, and shows that the results obtained with the new method give similar insights (in fact, more insights), while being more efficient in terms of the time the evaluators needed to spend. A second study examines the difference between performing subjective evaluations of generated gestures in a laboratory environment and using crowdworkers, and showed that no difference could be measured. A special focus is on using scalable methods, since this enables efficient and rapid collection of rich interaction data from many different people, as well as evaluation of the behaviors generated by the machine learning models, which in turn enables fast iteration in development.

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2022, p. 77
Series
TRITA-EECS-AVL ; 2022:15
Keywords [en]
non-verbal behavior generation, interlocutor-aware, data collection, behavior modeling, evaluation methods
National Category
Computer Systems
Research subject
Speech and Music Communication
Identifiers
URN: urn:nbn:se:kth:diva-309467
ISBN: 978-91-8040-151-7 (print)
OAI: oai:DiVA.org:kth-309467
DiVA, id: diva2:1642118
Public defence
2022-03-25, U1, https://kth-se.zoom.us/j/62813774919, Brinellvägen 26, Stockholm, 14:00 (English)
Note

QC 20220307

Available from: 2022-03-07 Created: 2022-03-03 Last updated: 2022-06-25. Bibliographically approved
List of papers
1. Crowdsourced Multimodal Corpora Collection Tool
2018 (English) In: Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, 2018, p. 728-734. Conference paper, Published paper (Refereed)
Abstract [en]

In recent years, more and more multimodal corpora have been created. To our knowledge, there is no publicly available tool that allows for acquiring controlled multimodal data from people in a rapid and scalable fashion. We therefore propose (1) a novel tool which enables researchers to rapidly gather large amounts of multimodal data spanning a wide demographic range, and (2) an example of how we used this tool for the collection of our "Attentive listener" multimodal corpus. The code is released under an Apache License 2.0 and available as an open-source repository, which can be found at https://github.com/kth-social-robotics/multimodal-crowdsourcing-tool. This tool will allow researchers to set up their own multimodal data collection system quickly and create their own multimodal corpora. Finally, this paper provides a discussion of the advantages and disadvantages of a crowdsourced data collection tool, especially in comparison to lab-recorded corpora.
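
The actual tool lives in the repository linked above. As a purely illustrative sketch (not the repository's code), a minimal server-side endpoint for receiving crowdworkers' webcam recordings could look like the following; the route name, form fields, and directory layout are assumptions for the example.

```python
# Hypothetical sketch of a server endpoint for crowdsourced multimodal uploads.
# This is NOT the code from the linked repository; names and layout are made up
# to illustrate the general idea of collecting per-worker recordings at scale.
import os
import time

from flask import Flask, request, jsonify

app = Flask(__name__)
DATA_DIR = "collected_corpus"  # one subdirectory per crowdworker and task


@app.route("/upload", methods=["POST"])
def upload():
    worker_id = request.form.get("worker_id", "anonymous")
    task_id = request.form.get("task_id", "unknown_task")
    video = request.files.get("recording")  # webcam capture sent by the browser
    if video is None:
        return jsonify({"status": "error", "reason": "no recording attached"}), 400

    session_dir = os.path.join(DATA_DIR, worker_id, task_id)
    os.makedirs(session_dir, exist_ok=True)

    # Timestamped filename so repeated uploads from one worker never collide.
    filename = f"{int(time.time() * 1000)}.webm"
    video.save(os.path.join(session_dir, filename))
    return jsonify({"status": "ok", "stored_as": filename})


if __name__ == "__main__":
    app.run(port=5000)
```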

Place, publisher, year, edition, pages
Paris, 2018
National Category
Engineering and Technology
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-230236 (URN)
000725545000117 ()
2-s2.0-85059908776 (Scopus ID)
Conference
The Eleventh International Conference on Language Resources and Evaluation (LREC 2018)
Note

Part of proceedings ISBN 979-10-95546-00-9

QC 20180618

Available from: 2018-06-13 Created: 2018-06-13 Last updated: 2022-11-09. Bibliographically approved
2. Crowdsourcing a self-evolving dialog graph
2019 (English) In: CUI '19: Proceedings of the 1st International Conference on Conversational User Interfaces, Association for Computing Machinery (ACM), 2019, article id 14. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we present a crowdsourcing-based approach for collecting dialog data for a social chat dialog system, which gradually builds a dialog graph from actual user responses and crowd-sourced system answers, conditioned by a given persona and other instructions. This approach was tested during the second instalment of the Amazon Alexa Prize 2018 (AP2018), both for the data collection and to feed a simple dialog system which would use the graph to provide answers. As users interacted with the system, a graph which maintained the structure of the dialogs was built, identifying parts where more coverage was needed. In an offline evaluation, we compared the corpus collected during the competition with other potential corpora for training chatbots, including movie subtitles, online chat forums and conversational data. The results show that the proposed methodology creates data that is more representative of actual user utterances, and leads to more coherent and engaging answers from the agent. An implementation of the proposed method is available as open-source code.
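
As an illustration of the general idea only (not the paper's actual implementation), a dialog graph can grow by matching each incoming user utterance against the responses already seen at the current node and queueing unmatched utterances for crowdworkers to supply system answers. The class and method names below are hypothetical.

```python
# Hypothetical sketch of a self-evolving dialog graph: user turns that match an
# existing edge follow the graph, unmatched turns create new nodes that are
# queued for crowdworkers to write a system answer for. Not the paper's code.
from dataclasses import dataclass, field
from difflib import SequenceMatcher
from typing import Dict, List, Optional


@dataclass
class Node:
    system_answer: Optional[str]          # None until crowdworkers provide one
    edges: Dict[str, "Node"] = field(default_factory=dict)  # user utterance -> next node


class DialogGraph:
    def __init__(self, opening_line: str, match_threshold: float = 0.8):
        self.root = Node(system_answer=opening_line)
        self.match_threshold = match_threshold
        self.needs_answer: List[Node] = []  # coverage gaps to send to crowdworkers

    def step(self, node: Node, user_utterance: str) -> Node:
        """Advance one turn; create a new (unanswered) node if nothing matches."""
        best, best_score = None, 0.0
        for known, child in node.edges.items():
            score = SequenceMatcher(None, known.lower(), user_utterance.lower()).ratio()
            if score > best_score:
                best, best_score = child, score
        if best is not None and best_score >= self.match_threshold:
            return best
        new_node = Node(system_answer=None)
        node.edges[user_utterance] = new_node
        self.needs_answer.append(new_node)   # flag this branch for crowdsourcing
        return new_node


# Usage: walk the graph with live users, then ship graph.needs_answer to a
# crowdsourcing task that fills in system_answer for each new node.
graph = DialogGraph("Hi! What did you do this weekend?")
current = graph.step(graph.root, "I went hiking with my dog")
print(current.system_answer)  # None: this branch still needs a crowdsourced answer
```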

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2019
Series
ACM International Conference Proceeding Series
Keywords
Crowdsourcing, Datasets, Dialog systems, Human-computer interaction
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-266061 (URN)
10.1145/3342775.3342790 (DOI)
000525446900014 ()
2-s2.0-85075882531 (Scopus ID)
Conference
1st International Conference on Conversational User Interfaces, CUI 2019; Dublin; Ireland; 22 August 2019 through 23 August 2019
Note

QC 20200114

Part of ISBN 9781450371872

Available from: 2020-01-14 Created: 2020-01-14 Last updated: 2024-10-18. Bibliographically approved
3. Multimodal Capture of Patient Behaviour for Improved Detection of Early Dementia: Clinical Feasibility and Preliminary Results
2021 (English) In: Frontiers in Computer Science, E-ISSN 2624-9898, Vol. 3, article id 642633. Article in journal (Refereed), Published
Abstract [en]

Non-invasive automatic screening for Alzheimer's disease has the potential to improve diagnostic accuracy while lowering healthcare costs. Previous research has shown that patterns in speech, language, gaze, and drawing can help detect early signs of cognitive decline. In this paper, we describe a highly multimodal system for unobtrusively capturing data during real clinical interviews conducted as part of cognitive assessments for Alzheimer's disease. The system uses nine different sensor devices (smartphones, a tablet, an eye tracker, a microphone array, and a wristband) to record interaction data during a specialist's first clinical interview with a patient, and is currently in use at Karolinska University Hospital in Stockholm, Sweden. Furthermore, complementary information in the form of brain imaging, psychological tests, speech therapist assessment, and clinical meta-data is also available for each patient. We detail our data-collection and analysis procedure and present preliminary findings that relate measures extracted from the multimodal recordings to clinical assessments and established biomarkers, based on data from 25 patients gathered thus far. Our findings demonstrate feasibility for our proposed methodology and indicate that the collected data can be used to improve clinical assessments of early dementia.
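
The recording setup itself is hardware-specific, but one recurring software concern in such a multi-sensor system is aligning streams that arrive at different rates. The sketch below is a generic, hypothetical illustration of logging every sample with a shared wall-clock timestamp so streams can be synchronized afterwards; it is not the system described in the paper.

```python
# Generic sketch (not the paper's system): tag every sensor sample with a shared
# wall-clock timestamp so heterogeneous streams can be aligned offline.
import json
import queue
import threading
import time


def sensor_worker(name, read_sample, out_queue, period_s):
    """Poll one sensor and push timestamped samples to a shared queue."""
    while True:
        sample = read_sample()
        out_queue.put({"sensor": name, "t": time.time(), "value": sample})
        time.sleep(period_s)


def writer(out_queue, path):
    """Append every timestamped sample as one JSON line."""
    with open(path, "a", encoding="utf-8") as f:
        while True:
            item = out_queue.get()
            f.write(json.dumps(item) + "\n")
            f.flush()


if __name__ == "__main__":
    q = queue.Queue()
    # Dummy read functions standing in for e.g. an eye tracker and a wristband.
    threading.Thread(target=sensor_worker, args=("gaze", lambda: [0.1, 0.2], q, 0.02),
                     daemon=True).start()
    threading.Thread(target=sensor_worker, args=("heart_rate", lambda: 72, q, 1.0),
                     daemon=True).start()
    threading.Thread(target=writer, args=(q, "session_log.jsonl"), daemon=True).start()
    time.sleep(3)  # record for a few seconds in this toy example
```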

Place, publisher, year, edition, pages
Frontiers Media SA, 2021
Keywords
Alzheimer, mild cognitive impairment, multimodal prediction, speech, gaze, pupil dilation, thermal camera, pen motion
National Category
Natural Language Processing
Identifiers
urn:nbn:se:kth:diva-303883 (URN)
10.3389/fcomp.2021.642633 (DOI)
000705498300001 ()
2-s2.0-85115692731 (Scopus ID)
Note

QC 20211022

Available from: 2021-10-22 Created: 2021-10-22 Last updated: 2025-02-07. Bibliographically approved
4. Learning Non-verbal Behavior for a Social Robot from YouTube Videos
2019 (English) Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

Non-verbal behavior is crucial for positive perception of humanoid robots. If modeled well, it can improve the interaction and leave the user with a positive experience; if modeled poorly, it may impede the interaction and become a source of distraction. Most of the existing work on modeling non-verbal behavior shows limited variability because the models employed are deterministic, so the generated motion can be perceived as repetitive and predictable. In this paper, we present a novel method for generating a limited set of facial expressions and head movements, based on a probabilistic generative deep learning architecture called Glow. We have implemented a workflow which takes videos directly from YouTube, extracts relevant features, and trains a model that generates gestures that can be realized in a robot without any post-processing. A user study was conducted and illustrated the importance of having some kind of non-verbal behavior; most differences between the ground truth, the proposed method, and a random control were not significant, but the differences that were significant were in favor of the proposed method.
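
As a hedged illustration of one step in such a workflow (not the authors' code), per-frame facial features extracted from the videos are typically cut into fixed-length windows and standardized before a sequence model such as a normalizing flow is trained on them.

```python
# Hypothetical preprocessing sketch: slice per-frame facial features (e.g. head
# rotation and expression parameters extracted from YouTube videos) into fixed
# length windows suitable for training a sequence model such as a Glow-style flow.
import numpy as np


def make_windows(features: np.ndarray, window: int = 60, stride: int = 30) -> np.ndarray:
    """features: (n_frames, n_dims) -> (n_windows, window, n_dims)."""
    chunks = [features[s:s + window]
              for s in range(0, len(features) - window + 1, stride)]
    return np.stack(chunks) if chunks else np.empty((0, window, features.shape[1]))


# Toy example: 10 seconds of 30 fps features with 10 dimensions per frame.
fake_features = np.random.randn(300, 10).astype(np.float32)
windows = make_windows(fake_features)          # shape (9, 60, 10)
mean, std = windows.mean(axis=(0, 1)), windows.std(axis=(0, 1)) + 1e-6
normalized = (windows - mean) / std            # standardize before training
print(windows.shape, normalized.shape)
```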

Keywords
Facial expressions, non-verbal behavior, generative models, neural network, head movement, social robotics
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-261242 (URN)
Conference
ICDL-EpiRob Workshop on Naturalistic Non-Verbal and Affective Human-Robot Interactions, Oslo, Norway, August 19, 2019
Funder
Swedish Foundation for Strategic Research, RIT15-0107
Note

QC 20191007

Available from: 2019-10-03 Created: 2019-10-03 Last updated: 2024-03-18. Bibliographically approved
5. Let’s face it: Probabilistic multi-modal interlocutor-aware generation of facial gestures in dyadic settings
2020 (English) In: IVA '20: Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents, Association for Computing Machinery (ACM), 2020. Conference paper, Published paper (Refereed)
Abstract [en]

To enable more natural face-to-face interactions, conversational agents need to adapt their behavior to their interlocutors. One key aspect of this is generation of appropriate non-verbal behavior for the agent, for example, facial gestures, here defined as facial expressions and head movements. Most existing gesture-generating systems do not utilize multi-modal cues from the interlocutor when synthesizing non-verbal behavior. Those that do, typically use deterministic methods that risk producing repetitive and non-vivid motions. In this paper, we introduce a probabilistic method to synthesize interlocutor-aware facial gestures, represented by highly expressive FLAME parameters, in dyadic conversations. Our contributions are: a) a method for feature extraction from multi-party video and speech recordings, resulting in a representation that allows for independent control and manipulation of expression and speech articulation in a 3D avatar; b) an extension to MoGlow, a recent motion-synthesis method based on normalizing flows, to also take multi-modal signals from the interlocutor as input and subsequently output interlocutor-aware facial gestures; and c) a subjective evaluation assessing the use and relative importance of the different modalities in the synthesized output. The results show that the model successfully leverages the input from the interlocutor to generate more appropriate behavior. Videos, data, and code are available at: https://jonepatr.github.io/lets_face_it/
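
As a minimal sketch of the general flavor of such conditioning (an illustration, not the paper's MoGlow extension), an affine coupling layer in a normalizing flow can take a conditioning vector that concatenates the agent's speech features with the interlocutor's speech and facial features; all dimensions and names below are assumptions.

```python
# Minimal, hypothetical sketch of flow-based generation with interlocutor
# conditioning: an affine coupling layer whose scale/shift network also sees a
# context vector built from agent speech plus interlocutor speech and face
# features. This illustrates the idea only; it is not the paper's model.
import torch
import torch.nn as nn


class ConditionalAffineCoupling(nn.Module):
    def __init__(self, x_dim: int, cond_dim: int, hidden: int = 128):
        super().__init__()
        self.half = x_dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (x_dim - self.half)),
        )

    def forward(self, x, cond):
        xa, xb = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(torch.cat([xa, cond], dim=-1)).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)                     # keep scaling well-behaved
        yb = xb * torch.exp(log_s) + t
        return torch.cat([xa, yb], dim=-1), log_s.sum(dim=-1)

    def inverse(self, y, cond):
        ya, yb = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(torch.cat([ya, cond], dim=-1)).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)
        xb = (yb - t) * torch.exp(-log_s)
        return torch.cat([ya, xb], dim=-1)


# Toy usage: x = agent facial-gesture parameters for one frame, cond = agent
# speech features concatenated with interlocutor speech + facial features.
x = torch.randn(8, 50)                      # e.g. 50 expression/head-pose dims
cond = torch.randn(8, 96)                   # multimodal context (hypothetical size)
layer = ConditionalAffineCoupling(x_dim=50, cond_dim=96)
z, log_det = layer(x, cond)
x_rec = layer.inverse(z, cond)
print(torch.allclose(x, x_rec, atol=1e-5))  # the transform is invertible
```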

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2020
Keywords
non-verbal behavior, machine learning, facial expressions, adaptive agents
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-290561 (URN)
10.1145/3383652.3423911 (DOI)
000728153600051 ()
2-s2.0-85096990068 (Scopus ID)
Conference
IVA '20: ACM International Conference on Intelligent Virtual Agents, Virtual Event, Scotland, UK, October 20-22, 2020
Funder
Swedish Foundation for Strategic Research, RIT15-0107; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

QC 20210222

Available from: 2021-02-18 Created: 2021-02-18 Last updated: 2022-09-23. Bibliographically approved
6. Mechanical Chameleons: Evaluating the effects of a social robot's non-verbal behavior on social influence
2021 (English) In: Proceedings of SCRITA 2021, a workshop at IEEE RO-MAN 2021, 2021. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we present a pilot study which investigates how non-verbal behavior affects social influence in social robots. We also present a modular system capable of controlling the non-verbal behavior based on the interlocutor's facial gestures (head movements and facial expressions) in real time, and a study investigating whether three different strategies for facial gestures ("still"; "natural movement", i.e. movements recorded from another conversation; and "copy", i.e. mimicking the user with a four-second delay) have any effect on social influence and decision making in a "survival task". Our preliminary results show no significant difference between the three conditions, but this might be due to, among other things, the low number of study participants (12).
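
To make the "copy" condition described above concrete (a hypothetical sketch, not the actual robot control system), the interlocutor's tracked facial features can be buffered and replayed to the robot with a four-second delay, while the other two conditions fall back to a neutral pose or a pre-recorded sequence.

```python
# Hypothetical sketch of the three gesture strategies: "still" (neutral pose),
# "natural" (playback of a pre-recorded sequence) and "copy" (mimic the user's
# features with a 4-second delay). Not the system described in the paper.
import time
from collections import deque


class GestureStrategy:
    def __init__(self, mode: str, natural_sequence=None, delay_s: float = 4.0):
        self.mode = mode
        self.delay_s = delay_s
        self.buffer = deque()                     # (timestamp, features) pairs
        self.natural = list(natural_sequence or [])
        self.natural_idx = 0
        self.neutral = {"pitch": 0.0, "yaw": 0.0, "smile": 0.0}
        self.last_output = dict(self.neutral)

    def update(self, user_features: dict) -> dict:
        """Called every frame with the user's tracked features; returns a robot pose."""
        now = time.time()
        if self.mode == "still":
            return self.neutral
        if self.mode == "natural":
            if not self.natural:
                return self.neutral
            pose = self.natural[self.natural_idx % len(self.natural)]
            self.natural_idx += 1
            return pose
        # "copy": store the current features, emit the ones recorded delay_s ago.
        self.buffer.append((now, dict(user_features)))
        while self.buffer and now - self.buffer[0][0] >= self.delay_s:
            _, self.last_output = self.buffer.popleft()
        return self.last_output


copycat = GestureStrategy("copy")
pose = copycat.update({"pitch": 0.1, "yaw": -0.2, "smile": 0.7})
print(pose)  # still neutral: the mimicked pose only appears after four seconds
```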

National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-309464 (URN)
Conference
Trust, Acceptance and Social Cues in Human-Robot Interaction - SCRITA, 12 August, 2021
Funder
Swedish Foundation for Strategic Research, RIT15-0107; Swedish Research Council, 2018-05409
Note

QC 20220308

Available from: 2022-03-03 Created: 2022-03-03 Last updated: 2022-06-25. Bibliographically approved
7. HEMVIP: Human Evaluation of Multiple Videos in Parallel
2021 (English) In: ICMI '21: Proceedings of the 2021 International Conference on Multimodal Interaction, New York, NY, United States: Association for Computing Machinery (ACM), 2021, p. 707-711. Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

In many research areas, for example motion and gesture generation, objective measures alone do not provide an accurate impression of key stimulus traits such as perceived quality or appropriateness. The gold standard is instead to evaluate these aspects through user studies, especially subjective evaluations of video stimuli. Common evaluation paradigms either present individual stimuli to be scored on Likert-type scales, or ask users to compare and rate videos in a pairwise fashion. However, the time and resources required for such evaluations scale poorly as the number of conditions to be compared increases. Building on standards used for evaluating the quality of multimedia codecs, this paper instead introduces a framework for granular rating of multiple comparable videos in parallel. This methodology essentially analyses all condition pairs at once. Our contributions are 1) a proposed framework, called HEMVIP, for parallel and granular evaluation of multiple video stimuli and 2) a validation study confirming that results obtained using the tool are in close agreement with results of prior studies using conventional multiple pairwise comparisons.
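
To make the "all condition pairs at once" point concrete (a simplified, hypothetical sketch of the analysis side, not the HEMVIP implementation), the parallel slider ratings from one evaluation page can be unrolled into implicit pairwise preferences.

```python
# Simplified, hypothetical sketch of the HEMVIP idea on the analysis side:
# each page yields one rating per condition, and every pair of ratings on that
# page can be read as an implicit pairwise comparison.
from collections import Counter
from itertools import combinations


def pairwise_preferences(pages):
    """pages: list of dicts mapping condition name -> slider rating (0-100)."""
    wins = Counter()
    ties = Counter()
    for ratings in pages:
        for a, b in combinations(sorted(ratings), 2):
            if ratings[a] > ratings[b]:
                wins[(a, b)] += 1
            elif ratings[b] > ratings[a]:
                wins[(b, a)] += 1
            else:
                ties[(a, b)] += 1
    return wins, ties


# Example: three conditions rated in parallel on two pages.
pages = [
    {"ground_truth": 80, "proposed": 74, "baseline": 40},
    {"ground_truth": 70, "proposed": 75, "baseline": 35},
]
wins, ties = pairwise_preferences(pages)
print(wins)  # e.g. "proposed" beats "baseline" on both pages
```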

Place, publisher, year, edition, pages
New York, NY, United States: Association for Computing Machinery (ACM), 2021
Keywords
evaluation paradigms, video evaluation, conversational agents, gesture generation
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-309462 (URN)
10.1145/3462244.3479957 (DOI)
2-s2.0-85113672097 (Scopus ID)
Conference
International Conference on Multimodal Interaction, Montreal, Canada, October 18-22, 2021
Funder
Swedish Foundation for Strategic Research, RIT15-0107; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

Part of proceedings: ISBN 978-1-4503-8481-0

QC 20220309

Available from: 2022-03-03 Created: 2022-03-03 Last updated: 2023-01-18. Bibliographically approved
8. Can we trust online crowdworkers? Comparing online and offline participants in a preference test of virtual agents
2020 (English) In: IVA '20: Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents, Association for Computing Machinery (ACM), 2020. Conference paper, Published paper (Refereed)
Abstract [en]

Conducting user studies is a crucial component in many scientific fields. While some studies require participants to be physically present, other studies can be conducted both physically (e.g. in-lab) and online (e.g. via crowdsourcing). Inviting participants to the lab can be a time-consuming and logistically difficult endeavor, not to mention that sometimes research groups might not be able to run in-lab experiments at all, because of, for example, a pandemic. Crowdsourcing platforms such as Amazon Mechanical Turk (AMT) or Prolific can therefore be a suitable alternative for running certain experiments, such as evaluating virtual agents. Although previous studies have investigated the use of crowdsourcing platforms for running experiments, there is still uncertainty as to whether the results are reliable for perceptual studies. Here we replicate a previous experiment where participants evaluated a gesture generation model for virtual agents. The experiment is conducted across three participant pools (in-lab, Prolific, and AMT), with similar demographics across the in-lab participants and the Prolific platform. Our results show no difference between the three participant pools with regard to their evaluations of the gesture generation models and their reliability scores. The results indicate that online platforms can successfully be used for perceptual evaluations of this kind.
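
One plausible way to test for differences between the three participant pools (a hedged sketch of a standard analysis, not necessarily the exact test used in the paper) is a Kruskal-Wallis test on the ratings collected from each pool.

```python
# Hedged sketch of a standard analysis for comparing rating distributions from
# three participant pools; the paper's actual statistical procedure may differ.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
# Placeholder ratings on a 0-100 scale, one array per participant pool.
in_lab = rng.normal(62, 15, size=30).clip(0, 100)
prolific = rng.normal(60, 15, size=30).clip(0, 100)
amt = rng.normal(61, 15, size=30).clip(0, 100)

statistic, p_value = kruskal(in_lab, prolific, amt)
print(f"H = {statistic:.2f}, p = {p_value:.3f}")
# A large p-value means no detectable difference between the pools' ratings.
```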

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2020
Keywords
user studies, online participants, attentiveness
National Category
Human Computer Interaction
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-290562 (URN)
10.1145/3383652.3423860 (DOI)
000728153600002 ()
2-s2.0-85096979963 (Scopus ID)
Conference
IVA '20: ACM International Conference on Intelligent Virtual Agents, Virtual Event, Scotland, UK, October 20-22, 2020
Funder
Swedish Foundation for Strategic Research, RIT15-0107; Wallenberg AI, Autonomous Systems and Software Program (WASP), CorSA
Note

QC 20211109

Part of Proceedings: ISBN 978-145037586-3

Taras Kucherenko and Patrik Jonell contributed equally to this research.

Available from: 2021-02-18 Created: 2021-02-18 Last updated: 2022-06-25. Bibliographically approved

Open Access in DiVA

fulltext: FULLTEXT01.pdf (application/pdf, 10344 kB)
Checksum SHA-512: 91f3d8a856e2fe4fe88e428a01127cd386ebac433f1dc333ce6ba89ce4feefeda7801b3d83bddfccb1a36a4e5a3a61d4b15cf293befe695af04ac1460129a3a7

Authority records

Jonell, Patrik
