Biography [eng]

I am a Ph.D. student in Machine Learning for Social Robotics at KTH Royal Institute of Technology in Stockholm. My main supervisor is Hedvig Kjellström, and my co-supervisors are Gustav Eje Henter and Iolanda Leite.

My research is on generative models of non-verbal behavior, such as hand gestures and facial expressions. You can watch me explain it here. My latest project can be found on this page.

I am working in the HealthTech project EACare, where we aim to develop a robot system that detects early signs of dementia from communicative behavior. My part is non-verbal behavior generation.

Publications (10 of 27)
Kucherenko, T., Wolfert, P., Yoon, Y., Viegas, C., Nikolov, T., Tsakov, M. & Henter, G. E. (2024). Evaluating Gesture Generation in a Large-scale Open Challenge: The GENEA Challenge 2022. ACM Transactions on Graphics, 43(3), Article ID 32.
Evaluating Gesture Generation in a Large-scale Open Challenge: The GENEA Challenge 2022
2024 (English) In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 43, no. 3, article id 32. Article in journal (Refereed), Published
Abstract [en]

This article reports on the second GENEA Challenge to benchmark data-driven automatic co-speech gesture generation. Participating teams used the same speech and motion dataset to build gesture-generation systems. Motion generated by all these systems was rendered to video using a standardised visualisation pipeline and evaluated in several large, crowdsourced user studies. Unlike when comparing different research articles, differences in results are here only due to differences between methods, enabling direct comparison between systems. The dataset was based on 18 hours of full-body motion capture, including fingers, of different persons engaging in a dyadic conversation. Ten teams participated in the challenge across two tiers: full-body and upper-body gesticulation. For each tier, we evaluated both the human-likeness of the gesture motion and its appropriateness for the specific speech signal. Our evaluations decouple human-likeness from gesture appropriateness, which has been a difficult problem in the field. The evaluation results show some synthetic gesture conditions being rated as significantly more human-like than 3D human motion capture. To the best of our knowledge, this has not been demonstrated before. On the other hand, all synthetic motion is found to be vastly less appropriate for the speech than the original motion-capture recordings. We also find that conventional objective metrics do not correlate well with subjective human-likeness ratings in this large evaluation. The one exception is the Fréchet gesture distance (FGD), which achieves a Kendall's tau rank correlation of around -0.5. Based on the challenge results we formulate numerous recommendations for system building and evaluation.
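For context on the one metric that did correlate: the Fréchet gesture distance is computed like the Fréchet inception distance used for images, as the 2-Wasserstein distance between Gaussians fitted to feature embeddings of natural and of synthetic motion. Below is a minimal sketch of that computation, assuming gesture-feature embeddings have already been extracted by some pretrained encoder; it is an illustration, not the challenge's evaluation code.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Frechet distance between Gaussians fitted to two feature sets.

    Each input is an (n_samples, n_dims) array of gesture-feature
    embeddings, e.g. from a pretrained motion autoencoder (assumed here).
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Matrix square root of the covariance product; numerical error can
    # introduce a tiny imaginary component, which we discard.
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

Lower FGD means the synthetic feature distribution sits closer to the natural one, while human-likeness ratings are better when higher, which is why a well-behaved metric shows up as a negative rank correlation in the study above.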

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Keywords
Animation, gesture generation, embodied conversational agents, evaluation paradigms
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-352263 (URN), 10.1145/3656374 (DOI), 001265558400008 (ISI), 2-s2.0-85192703805 (Scopus ID)
Note

QC 20240827

Available from: 2024-08-27. Created: 2024-08-27. Last updated: 2024-09-06. Bibliographically approved
Yoon, Y., Kucherenko, T., Delbosc, A., Nagy, R., Nikolov, T. & Henter, G. E. (2024). GENEA Workshop 2024: The 5th Workshop on Generation and Evaluation of Non-verbal Behaviour for Embodied Agents. In: Proceedings of the 26th International Conference on Multimodal Interaction, ICMI 2024. Paper presented at the Companion International Conference on Multimodal Interaction, November 4-8, 2024, San Jose, Costa Rica (pp. 694-695). Association for Computing Machinery (ACM)
GENEA Workshop 2024: The 5th Workshop on Generation and Evaluation of Non-verbal Behaviour for Embodied Agents
2024 (English) In: Proceedings of the 26th International Conference on Multimodal Interaction, ICMI 2024, Association for Computing Machinery (ACM), 2024, p. 694-695. Conference paper, Published paper (Refereed)
Abstract [en]

Non-verbal behavior offers significant benefits for embodied agents in human interactions. Despite extensive research on the creation of non-verbal behaviors, the field lacks a standardized benchmarking practice. Researchers rarely compare their findings with previous studies, and when they do, the comparisons are often not aligned with other methodologies. The GENEA Workshop 2024 aims to unite the community to discuss major challenges and solutions, and to determine the most effective ways to advance the field.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Keywords
behavior synthesis, gesture generation, datasets, evaluation
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-362970 (URN), 10.1145/3678957.3688818 (DOI), 001433669800083 (ISI), 2-s2.0-85212592892 (Scopus ID)
Conference
Companion International Conference on Multimodal Interaction, November 4-8, 2024, San Jose, Costa Rica
Note

Part of ISBN 979-8-4007-0462-8

QC 20250430

Available from: 2025-04-30. Created: 2025-04-30. Last updated: 2025-06-03. Bibliographically approved
Kucherenko, T., Nagy, R., Yoon, Y., Woo, J., Nikolov, T., Tsakov, M. & Henter, G. E. (2023). The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings. In: Proceedings of the 25th International Conference on Multimodal Interaction, ICMI 2023. Paper presented at the 25th International Conference on Multimodal Interaction (ICMI), October 9-13, 2023, Sorbonne University, Paris, France (pp. 792-801). Association for Computing Machinery (ACM)
The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings
2023 (English) In: Proceedings of the 25th International Conference on Multimodal Interaction, ICMI 2023, Association for Computing Machinery (ACM), 2023, p. 792-801. Conference paper, Published paper (Refereed)
Abstract [en]

This paper reports on the GENEA Challenge 2023, in which participating teams built speech-driven gesture-generation systems using the same speech and motion dataset, followed by a joint evaluation. This year's challenge provided data on both sides of a dyadic interaction, allowing teams to generate full-body motion for an agent given its speech (text and audio) and the speech and motion of the interlocutor. We evaluated 12 submissions and 2 baselines together with held-out motion-capture data in several large-scale user studies. The studies focused on three aspects: 1) the human-likeness of the motion, 2) the appropriateness of the motion for the agent's own speech whilst controlling for the human-likeness of the motion, and 3) the appropriateness of the motion for the behaviour of the interlocutor in the interaction, using a setup that controls for both the human-likeness of the motion and the agent's own speech. We found a large span in human-likeness between challenge submissions, with a few systems rated close to human mocap. Appropriateness seems far from being solved, with most submissions performing in a narrow range slightly above chance, far behind natural motion. The effect of the interlocutor is even more subtle, with submitted systems at best performing barely above chance. Interestingly, a dyadic system being highly appropriate for agent speech does not necessarily imply high appropriateness for the interlocutor. Additional material is available via the project website at svito-zar.github.io/GENEAchallenge2023/.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
gesture generation, embodied conversational agents, evaluation paradigms, dyadic interaction, interlocutor awareness
National Category
Natural Language Processing
Identifiers
urn:nbn:se:kth:diva-343599 (URN), 10.1145/3577190.3616120 (DOI), 001147764700098 (ISI), 2-s2.0-85170511127 (Scopus ID)
Conference
25th International Conference on Multimodal Interaction (ICMI), October 9-13, 2023, Sorbonne University, Paris, France
Note

Part of ISBN: 979-840070055-2

QC 20240223

Available from: 2024-02-23. Created: 2024-02-23. Last updated: 2025-02-07. Bibliographically approved
Kucherenko, T., Nagy, R., Neff, M., Kjellström, H. & Henter, G. E. (2022). Multimodal analysis of the predictability of hand-gesture properties. In: AAMAS '22: Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems. Paper presented at the 21st International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2022, Auckland, New Zealand, May 9-13, 2022 (pp. 770-779). ACM Press
Multimodal analysis of the predictability of hand-gesture properties
2022 (English) In: AAMAS '22: Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems, ACM Press, 2022, p. 770-779. Conference paper, Published paper (Refereed)
Abstract [en]

Embodied conversational agents benefit from being able to accompany their speech with gestures. Although many data-driven approaches to gesture generation have been proposed in recent years, it is still unclear whether such systems can consistently generate gestures that convey meaning. We investigate which gesture properties (phase, category, and semantics) can be predicted from speech text and/or audio using contemporary deep learning. In extensive experiments, we show that gesture properties related to gesture meaning (semantics and category) are predictable from text features (time-aligned FastText embeddings) alone, but not from prosodic audio features, while rhythm-related gesture properties (phase) on the other hand can be predicted from audio features better than from text. These results are encouraging as they indicate that it is possible to equip an embodied agent with content-wise meaningful co-speech gestures using a machine-learning model.
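To make the prediction task concrete: each frame (or gesture segment) is given a feature vector from one modality plus a gesture-property label, and a classifier is trained to map one to the other. The sketch below uses random stand-in data and a simple linear classifier; the paper's actual models are deep networks, and all names and shapes here are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Stand-ins for time-aligned 300-d FastText word embeddings and binary
# frame-level labels for one gesture property (e.g. "carries semantics").
text_feats = rng.normal(size=(5000, 300))
labels = rng.integers(0, 2, size=5000)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, text_feats, labels, cv=5, scoring="f1")
print(f"cross-validated F1: {scores.mean():.3f}")
```

Running the same recipe with prosodic audio features in place of the word embeddings is what allows the modality comparison the abstract describes: which input predicts which gesture property.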

Place, publisher, year, edition, pages
ACM Press, 2022
Keywords
embodied conversational agents, gesture generation, gesture analysis, gesture property
National Category
Computer Sciences; Human Computer Interaction
Research subject
Computer Science; Human-computer Interaction
Identifiers
urn:nbn:se:kth:diva-312470 (URN), 10.5555/3535850.3535937 (DOI), 2-s2.0-85134341889 (Scopus ID)
Conference
21st International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2022, Auckland, New Zealand, May 9-13, 2022
Funder
Swedish Foundation for Strategic Research; Wallenberg AI, Autonomous Systems and Software Program (WASP); Knut and Alice Wallenberg Foundation
Note

Part of proceedings ISBN: 9781450392136

QC 20220621

Available from: 2022-05-19. Created: 2022-05-19. Last updated: 2023-04-26. Bibliographically approved
Nagy, R., Kucherenko, T., Moell, B., Abelho Pereira, A. T., Kjellström, H. & Bernardet, U. (2021). A Framework for Integrating Gesture Generation Models into Interactive Conversational Agents. Paper presented at the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
A Framework for Integrating Gesture Generation Models into Interactive Conversational Agents
2021 (English). Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

Embodied conversational agents (ECAs) benefit from non-verbal behavior for natural and efficient interaction with users. Gesticulation – hand and arm movements accompanying speech – is an essential part of non-verbal behavior. Gesture generation models have been developed for several decades: starting with rule-based and ending with mainly data-driven methods. To date, recent end-to-end gesture generation methods have not been evaluated in a real-time interaction with users. We present a proof-of-concept framework, which is intended to facilitate evaluation of modern gesture generation models in interaction. We demonstrate an extensible open-source framework that contains three components: 1) a 3D interactive agent; 2) a chatbot back-end; 3) a gesticulating system. Each component can be replaced, making the proposed framework applicable for investigating the effect of different gesturing models in real-time interactions with different communication modalities, chatbot backends, or different agent appearances. The code and video are available at the project page https://nagyrajmund.github.io/project/gesturebot.
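The three-component split can be read as three narrow interfaces with swappable implementations. A minimal sketch of what such interfaces might look like follows; the class and method names are hypothetical, not taken from the framework's actual code.

```python
from abc import ABC, abstractmethod

class Chatbot(ABC):
    """Back-end that turns a user utterance into a text reply."""
    @abstractmethod
    def respond(self, user_utterance: str) -> str: ...

class GestureGenerator(ABC):
    """Model mapping speech (text and/or audio) to a motion sequence."""
    @abstractmethod
    def generate(self, text: str, audio: bytes) -> list: ...

class EmbodiedAgent(ABC):
    """3D agent that plays back speech audio with the generated motion."""
    @abstractmethod
    def perform(self, audio: bytes, motion: list) -> None: ...

def interaction_turn(user_utterance: str, chatbot: Chatbot,
                     gesturer: GestureGenerator, agent: EmbodiedAgent,
                     tts) -> None:
    """One conversational turn: reply, synthesise speech, gesture, render."""
    reply = chatbot.respond(user_utterance)
    speech_audio = tts(reply)  # any text-to-speech callable
    motion = gesturer.generate(reply, speech_audio)
    agent.perform(speech_audio, motion)
```

Because the loop touches each component only through its small interface, swapping in a different chatbot backend or gesturing model leaves the rest of the pipeline unchanged, which is the replaceability the abstract emphasises.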

Keywords
conversational embodied agents; non-verbal behavior synthesis
National Category
Human Computer Interaction
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-304616 (URN)
Conference
20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
Funder
Swedish Foundation for Strategic Research, RIT15-0107
Note

QC 20211130

Not duplicate with DiVA 1653872

Available from: 2021-11-08. Created: 2021-11-08. Last updated: 2022-06-25. Bibliographically approved
Nagy, R., Kucherenko, T., Moell, B., Abelho Pereira, A. T., Kjellström, H. & Bernardet, U. (2021). A framework for integrating gesture generation models into interactive conversational agents. In: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS. Paper presented at the 20th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2021, 3 May 2021 through 7 May 2021 (pp. 1767-1769). International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
A framework for integrating gesture generation models into interactive conversational agents
2021 (English) In: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS, International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), 2021, p. 1767-1769. Conference paper, Published paper (Refereed)
Abstract [en]

Embodied conversational agents (ECAs) benefit from non-verbal behavior for natural and efficient interaction with users. Gesticulation - hand and arm movements accompanying speech - is an essential part of non-verbal behavior. Gesture generation models have been developed for several decades: starting with rule-based and ending with mainly data-driven methods. To date, recent end-to-end gesture generation methods have not been evaluated in a real-time interaction with users. We present a proof-of-concept framework, which is intended to facilitate evaluation of modern gesture generation models in interaction. We demonstrate an extensible open-source framework that contains three components: 1) a 3D interactive agent; 2) a chatbot backend; 3) a gesticulating system. Each component can be replaced, making the proposed framework applicable for investigating the effect of different gesturing models in real-time interactions with different communication modalities, chatbot backends, or different agent appearances. The code and video are available at the project page https://nagyrajmund.github.io/project/gesturebot.

Place, publisher, year, edition, pages
International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), 2021
Keywords
Conversational embodied agents, Non-verbal behavior synthesis, Multi agent systems, Open systems, Speech, Communication modalities, Conversational agents, Data-driven methods, Efficient interaction, Embodied conversational agent, Interactive agents, Open source frameworks, Real time interactions, Autonomous agents
National Category
Human Computer Interaction; Computer Sciences
Identifiers
urn:nbn:se:kth:diva-311130 (URN), 2-s2.0-85112311041 (Scopus ID)
Conference
20th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2021, 3 May 2021 through 7 May 2021
Note

Part of proceedings: ISBN 978-1-7138-3262-1

QC 20220425

Available from: 2022-04-25. Created: 2022-04-25. Last updated: 2023-01-17. Bibliographically approved
Kucherenko, T., Jonell, P., Yoon, Y., Wolfert, P. & Henter, G. E. (2021). A large, crowdsourced evaluation of gesture generation systems on common data: The GENEA Challenge 2020. In: Proceedings IUI '21: 26th International Conference on Intelligent User Interfaces. Paper presented at IUI '21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021 (pp. 11-21). Association for Computing Machinery (ACM)
A large, crowdsourced evaluation of gesture generation systems on common data: The GENEA Challenge 2020
2021 (English) In: Proceedings IUI '21: 26th International Conference on Intelligent User Interfaces, Association for Computing Machinery (ACM), 2021, p. 11-21. Conference paper, Published paper (Refereed)
Abstract [en]

Co-speech gestures, gestures that accompany speech, play an important role in human communication. Automatic co-speech gesture generation is thus a key enabling technology for embodied conversational agents (ECAs), since humans expect ECAs to be capable of multi-modal communication. Research into gesture generation is rapidly gravitating towards data-driven methods. Unfortunately, individual research efforts in the field are difficult to compare: there are no established benchmarks, and each study tends to use its own dataset, motion visualisation, and evaluation methodology. To address this situation, we launched the GENEA Challenge, a gesture-generation challenge wherein participating teams built automatic gesture-generation systems on a common dataset, and the resulting systems were evaluated in parallel in a large, crowdsourced user study using the same motion-rendering pipeline. Since differences in evaluation outcomes between systems now are solely attributable to differences between the motion-generation methods, this enables benchmarking recent approaches against one another in order to get a better impression of the state of the art in the field. This paper reports on the purpose, design, results, and implications of our challenge.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2021
Keywords
gesture generation, conversational agents, evaluation paradigms
National Category
Human Computer Interaction
Research subject
Human-computer Interaction
Identifiers
urn:nbn:se:kth:diva-296490 (URN), 10.1145/3397481.3450692 (DOI), 000747690200006 (ISI), 2-s2.0-85102546745 (Scopus ID)
Conference
IUI '21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021
Funder
Swedish Foundation for Strategic Research, RIT15-0107; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

Part of Proceedings: ISBN 978-145038017-1

QC 20220303

Available from: 2021-06-05. Created: 2021-06-05. Last updated: 2022-06-25. Bibliographically approved
Kucherenko, T. (2021). Developing and evaluating co-speech gesture-synthesis models for embodied conversational agents. (Doctoral dissertation). KTH Royal Institute of Technology
Developing and evaluating co-speech gesture-synthesis models for embodied conversational agents
2021 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

A large part of our communication is non-verbal: humans use non-verbal behaviors to express various aspects of our state or intent. Embodied artificial agents, such as virtual avatars or robots, should also use non-verbal behavior for efficient and pleasant interaction. A core part of non-verbal communication is gesticulation: gestures communicate a large share of non-verbal content. For example, around 90% of spoken utterances in descriptive discourse are accompanied by gestures. Since gestures are important, generating co-speech gestures has been an essential task in the Human-Agent Interaction (HAI) and Computer Graphics communities for several decades. Evaluating the gesture-generating methods has been an equally important and equally challenging part of field development. Consequently, this thesis contributes to both the development and evaluation of gesture-generation models.

This thesis proposes three deep-learning-based gesture-generation models. The first model is deterministic and uses only audio and generates only beat gestures. The second model is deterministic and uses both audio and text, aiming to generate meaningful gestures. A final model uses both audio and text and is probabilistic to learn the stochastic character of human gesticulation. The methods have applications to both virtual agents and social robots. Individual research efforts in the field of gesture generation are difficult to compare, as there are no established benchmarks. To address this situation, my colleagues and I launched the first-ever gesture-generation challenge, which we called the GENEA Challenge. We have also investigated if online participants are as attentive as offline participants and found that they are both equally attentive provided that they are well paid. Finally, we developed a system that integrates co-speech gesture-generation models into a real-time interactive embodied conversational agent. This system is intended to facilitate the evaluation of modern gesture generation models in interaction.

To further advance the development of capable gesture-generation methods, we need to advance their evaluation, and the research in this thesis supports the interpretation that evaluation is the main bottleneck limiting the field. The field currently lacks comprehensive co-speech gesture datasets, which would need to be large, high-quality, and diverse. In addition, no strong objective metrics are yet available. Creating speech-gesture datasets and developing objective metrics are highlighted as essential next steps for further field development.
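As a rough illustration of the first, deterministic model family described above (audio features in, pose sequence out), here is a minimal PyTorch sketch. Layer types, sizes, and feature choices are placeholders, not the architecture from the thesis.

```python
import torch
from torch import nn

class AudioToGesture(nn.Module):
    """Deterministic regression from per-frame audio features to poses."""

    def __init__(self, n_audio_feats: int = 26, n_pose_dims: int = 45,
                 hidden: int = 256):
        super().__init__()
        self.encoder = nn.GRU(n_audio_feats, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, n_pose_dims)

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (batch, time, n_audio_feats), e.g. MFCC frames.
        h, _ = self.encoder(audio_feats)
        return self.decoder(h)  # (batch, time, n_pose_dims)

model = AudioToGesture()
mfccs = torch.randn(8, 100, 26)  # dummy batch: 8 clips, 100 frames each
poses = model(mfccs)             # predicted pose sequence per clip
```

A probabilistic variant, in the spirit of the third model, would replace the point-estimate decoder with a model of a distribution over poses, which is what lets it capture that the same speech can plausibly be accompanied by many different gestures.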

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2021. p. 47
Series
TRITA-EECS-AVL ; 2021:75
Keywords
Human-agent interaction, gesture generation, social robotics, conversational agents, non-verbal behavior, deep learning, machine learning
National Category
Human Computer Interaction
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-304618 (URN), 978-91-8040-058-9 (ISBN)
Public defence
2021-12-07, Sal Kollegiesalen, Stockholm, 13:00 (English)
Funder
Swedish Foundation for Strategic Research, RIT15-0107
Note

QC 20211109

Available from: 2021-11-10. Created: 2021-11-08. Last updated: 2022-06-25. Bibliographically approved
Kucherenko, T., Jonell, P., Yoon, Y., Wolfert, P., Yumak, Z. & Henter, G. E. (2021). GENEA Workshop 2021: The 2nd Workshop on Generation and Evaluation of Non-verbal Behaviour for Embodied Agents. In: Proceedings of ICMI '21: International Conference on Multimodal Interaction. Paper presented at ICMI '21: International Conference on Multimodal Interaction, Montréal, QC, Canada, October 18-22, 2021 (pp. 872-873). Association for Computing Machinery (ACM)
GENEA Workshop 2021: The 2nd Workshop on Generation and Evaluation of Non-verbal Behaviour for Embodied Agents
2021 (English) In: Proceedings of ICMI '21: International Conference on Multimodal Interaction, Association for Computing Machinery (ACM), 2021, p. 872-873. Conference paper, Published paper (Refereed)
Abstract [en]

Embodied agents benefit from using non-verbal behavior when communicating with humans. Despite several decades of non-verbal behavior-generation research, there is currently no well-developed benchmarking culture in the field. For example, most researchers do not compare their outcomes with previous work, and if they do, they often do so in their own way, which is frequently incompatible with others. With the GENEA Workshop 2021, we aim to bring the community together to discuss key challenges and solutions, and find the most appropriate ways to move the field forward.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2021
Keywords
behavior synthesis, datasets, evaluation, gesture generation, Behavior generation, Dataset, Embodied agent, Non-verbal behaviours, Behavioral research
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-313185 (URN), 10.1145/3462244.3480983 (DOI), 2-s2.0-85118969127 (Scopus ID)
Conference
ICMI '21: International Conference on Multimodal Interaction, Montréal, QC, Canada, October 18-22, 2021
Note

Part of proceedings ISBN 9781450384810

QC 20220602

Available from: 2022-06-02. Created: 2022-06-02. Last updated: 2022-06-25. Bibliographically approved
Jonell, P., Yoon, Y., Wolfert, P., Kucherenko, T. & Henter, G. E. (2021). HEMVIP: Human Evaluation of Multiple Videos in Parallel. In: ICMI '21: Proceedings of the 2021 International Conference on Multimodal Interaction. Paper presented at the International Conference on Multimodal Interaction, Montreal, Canada, October 18-22, 2021 (pp. 707-711). New York, NY, United States: Association for Computing Machinery (ACM)
HEMVIP: Human Evaluation of Multiple Videos in Parallel
2021 (English) In: ICMI '21: Proceedings of the 2021 International Conference on Multimodal Interaction, New York, NY, United States: Association for Computing Machinery (ACM), 2021, p. 707-711. Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

In many research areas, for example motion and gesture generation, objective measures alone do not provide an accurate impression of key stimulus traits such as perceived quality or appropriateness. The gold standard is instead to evaluate these aspects through user studies, especially subjective evaluations of video stimuli. Common evaluation paradigms either present individual stimuli to be scored on Likert-type scales, or ask users to compare and rate videos in a pairwise fashion. However, the time and resources required for such evaluations scale poorly as the number of conditions to be compared increases. Building on standards used for evaluating the quality of multimedia codecs, this paper instead introduces a framework for granular rating of multiple comparable videos in parallel. This methodology essentially analyses all condition pairs at once. Our contributions are 1) a proposed framework, called HEMVIP, for parallel and granular evaluation of multiple video stimuli and 2) a validation study confirming that results obtained using the tool are in close agreement with results of prior studies using conventional multiple pairwise comparisons.
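The scaling advantage comes from each parallel page implicitly producing an outcome for every pair of the conditions rated on it. A toy illustration of that bookkeeping, with made-up condition names and slider scores:

```python
from itertools import combinations

# One HEMVIP-style page: the same segment rendered under several
# conditions, each rated 0-100 on its own slider (values made up).
page_ratings = {"natural_mocap": 82, "system_A": 64, "system_B": 71}

# Rating k conditions once yields k*(k-1)/2 pairwise outcomes,
# versus a single outcome per page in a classic pairwise study.
for a, b in combinations(page_ratings, 2):
    diff = page_ratings[a] - page_ratings[b]
    outcome = a if diff > 0 else b if diff < 0 else "tie"
    print(f"{a} vs {b}: {outcome}")
```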

Place, publisher, year, edition, pages
New York, NY, United States: Association for Computing Machinery (ACM), 2021
Keywords
evaluation paradigms, video evaluation, conversational agents, gesture generation
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-309462 (URN), 10.1145/3462244.3479957 (DOI), 2-s2.0-85113672097 (Scopus ID)
Conference
International Conference on Multimodal Interaction, Montreal, Canada, October 18-22, 2021
Funder
Swedish Foundation for Strategic Research, RIT15-0107; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

Part of proceedings: ISBN 978-1-4503-8481-0

QC 20220309

Available from: 2022-03-03. Created: 2022-03-03. Last updated: 2023-01-18. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0001-9838-8848
