kth.se Publications
Publications (10 of 12)
Inoue, K., Jiang, B., Ekstedt, E., Kawahara, T. & Skantze, G. (2024). Multilingual Turn-taking Prediction Using Voice Activity Projection. In: 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC-COLING 2024 - Main Conference Proceedings. Paper presented at the Joint 30th International Conference on Computational Linguistics and 14th International Conference on Language Resources and Evaluation, LREC-COLING 2024, Hybrid, May 20-25, 2024, Torino, Italy (pp. 11873-11883). European Language Resources Association (ELRA)
Multilingual Turn-taking Prediction Using Voice Activity Projection
2024 (English). In: 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC-COLING 2024 - Main Conference Proceedings, European Language Resources Association (ELRA), 2024, p. 11873-11883. Conference paper, Published paper (Refereed)
Abstract [en]

This paper investigates the application of voice activity projection (VAP), a predictive turn-taking model for spoken dialogue, to multilingual data encompassing English, Mandarin, and Japanese. The VAP model continuously predicts the upcoming voice activities of participants in dyadic dialogue, leveraging a cross-attention Transformer to capture the dynamic interplay between participants. The results show that a monolingual VAP model trained on one language does not make good predictions when applied to other languages. However, a multilingual model, trained on all three languages, demonstrates predictive performance on par with monolingual models across all languages. Further analyses show that the multilingual model has learned to discern the language of the input signal. We also analyze its sensitivity to pitch, a prosodic cue that is thought to be important for turn-taking. Finally, we compare two different audio encoders: contrastive predictive coding (CPC) pre-trained on English, and a recent model based on multilingual wav2vec 2.0 (MMS).
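The cross-attention mechanism mentioned above can be sketched in plain Python. This is an illustrative single-head scaled dot-product attention, not the authors' implementation: frame representations from one speaker's channel act as queries that attend over the other speaker's frames, so each channel's encoding is conditioned on the other participant's dynamics.

```python
import math

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product attention over plain Python lists.

    queries: list of d-dim vectors (speaker A frames)
    keys, values: lists of d-dim vectors (speaker B frames)
    """
    d = len(queries[0])
    out = []
    for q in queries:
        # similarity of this speaker-A frame to every speaker-B frame
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        m = max(scores)  # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]  # softmax over speaker B frames
        # weighted mixture of speaker B values
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

A query that aligns strongly with one key pulls out (almost entirely) that key's value vector, which is how one channel can "read" the relevant moments of the other.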

Place, publisher, year, edition, pages
European Language Resources Association (ELRA), 2024
Keywords
Multilingual, Spoken Dialogue System, Turn-taking, Voice Activity Projection
National Category
Natural Language Processing; General Language Studies and Linguistics; Computer Sciences
Identifiers
urn:nbn:se:kth:diva-348790 (URN); 2-s2.0-85195914079 (Scopus ID)
Conference
Joint 30th International Conference on Computational Linguistics and 14th International Conference on Language Resources and Evaluation, LREC-COLING 2024, Hybrid, May 20-25, 2024, Torino, Italy
Projects
tmh_turntaking
Note

Part of ISBN 978-249381410-4

QC 20241028

Available from: 2024-06-27. Created: 2024-06-27. Last updated: 2025-02-01. Bibliographically approved.
Inoue, K., Jiang, B., Ekstedt, E., Kawahara, T. & Skantze, G. (2024). Real-time and Continuous Turn-taking Prediction Using Voice Activity Projection. Paper presented at the 14th International Workshop on Spoken Dialogue Systems Technology (IWSDS), Sapporo, Japan, March 4-6, 2024.
Real-time and Continuous Turn-taking Prediction Using Voice Activity Projection
2024 (English). Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

We present a demonstration of a real-time, continuous turn-taking prediction system. The system is based on a voice activity projection (VAP) model, which directly maps stereo dialogue audio to future voice activities. The VAP model includes contrastive predictive coding (CPC) and self-attention Transformers, followed by a cross-attention Transformer. We examine the effect of the input audio context length and demonstrate that the proposed system can operate in real time on a CPU with minimal performance degradation.
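The streaming side of such a system can be sketched as a bounded sliding window over the incoming stereo audio. This is my assumption of the mechanism, not the released demo code: each incremental model call sees at most `context_sec` seconds of context, which is the knob whose effect on accuracy and CPU latency the paper examines.

```python
from collections import deque

class StereoContext:
    """Bounded sliding window over incoming stereo dialogue audio."""

    def __init__(self, sample_rate=16000, context_sec=20.0):
        maxlen = int(sample_rate * context_sec)
        self.left = deque(maxlen=maxlen)    # e.g. user microphone channel
        self.right = deque(maxlen=maxlen)   # e.g. system/agent channel

    def push(self, left_chunk, right_chunk):
        # old samples fall off the front automatically once the deque is full
        self.left.extend(left_chunk)
        self.right.extend(right_chunk)

    def window(self):
        # the bounded stereo context handed to the predictor each step
        return list(self.left), list(self.right)
```

Keeping the context bounded makes each inference step constant-cost, which is what allows real-time operation on a CPU.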

National Category
Natural Language Processing
Identifiers
urn:nbn:se:kth:diva-359141 (URN); 10.48550/arXiv.2401.04868 (DOI)
Conference
The 14th International Workshop on Spoken Dialogue Systems Technology (IWSDS), Sapporo, Japan, March 4-6, 2024
Projects
tmh_turntaking
Note

QC 20250325

Available from: 2025-01-27. Created: 2025-01-27. Last updated: 2025-03-25. Bibliographically approved.
Ekstedt, E., Wang, S., Székely, É., Gustafsson, J. & Skantze, G. (2023). Automatic Evaluation of Turn-taking Cues in Conversational Speech Synthesis. In: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH 2023. Paper presented at the 24th Annual Conference of the International Speech Communication Association, Interspeech 2023, August 20-24, 2023, Dublin, Ireland (pp. 5481-5485). International Speech Communication Association
Automatic Evaluation of Turn-taking Cues in Conversational Speech Synthesis
2023 (English). In: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH 2023, International Speech Communication Association, 2023, p. 5481-5485. Conference paper, Published paper (Refereed)
Abstract [en]

Turn-taking is a fundamental aspect of human communication whereby speakers convey their intention to either hold or yield their turn through prosodic cues. Using the recently proposed Voice Activity Projection model, we propose an automatic evaluation approach to measure these aspects for conversational speech synthesis. We investigate the ability of three commercial and two open-source text-to-speech (TTS) systems to generate turn-taking cues over simulated turns. By varying the stimuli or controlling the prosody, we analyze the models' performance. We show that while commercial TTS systems largely provide appropriate cues, they often produce ambiguous signals, and that further improvements are possible. TTS systems trained on read or spontaneous speech produce strong turn-hold but weak turn-yield cues. We argue that this approach, which focuses on functional aspects of interaction, provides a useful addition to other important speech metrics, such as intelligibility and naturalness.
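The evaluation idea can be sketched as follows. Here `p_shift_frames` is a hedged stand-in for the per-frame turn-shift probabilities a trained predictor (such as VAP) would output near the end of a synthesized utterance; the model itself is not reproduced:

```python
def classify_turn_cue(p_shift_frames, margin=0.1):
    """Label a synthesized turn ending by how decisive the predicted cue is.

    p_shift_frames: P(turn shift) per frame over the utterance-final region.
    Returns "yield", "hold", or "ambiguous" (the weak signals reported for
    some systems).
    """
    p = sum(p_shift_frames) / len(p_shift_frames)
    if p >= 0.5 + margin:
        return "yield"
    if p <= 0.5 - margin:
        return "hold"
    return "ambiguous"
```

Averaging over the utterance-final region and requiring a margin around 0.5 is one simple way to operationalize "appropriate but often ambiguous" cues; the threshold and margin are illustrative choices, not the paper's.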

Place, publisher, year, edition, pages
International Speech Communication Association, 2023
Keywords
human-computer interaction, text-to-speech, turn-taking
National Category
Natural Language Processing; Computer Sciences; General Language Studies and Linguistics
Identifiers
urn:nbn:se:kth:diva-337873 (URN); 10.21437/Interspeech.2023-2064 (DOI); 001186650305133 (); 2-s2.0-85171597862 (Scopus ID)
Conference
The 24th Annual Conference of the International Speech Communication Association, Interspeech 2023, August 20-24, 2023, Dublin, Ireland
Projects
tmh_turntaking
Note

QC 20241024

Available from: 2023-10-10. Created: 2023-10-10. Last updated: 2025-02-01. Bibliographically approved.
Ekstedt, E. (2023). Predictive Modeling of Turn-Taking in Spoken Dialogue: Computational Approaches for the Analysis of Turn-Taking in Humans and Spoken Dialogue Systems. (Doctoral dissertation). Sweden: KTH Royal Institute of Technology
Predictive Modeling of Turn-Taking in Spoken Dialogue: Computational Approaches for the Analysis of Turn-Taking in Humans and Spoken Dialogue Systems
2023 (English). Doctoral thesis, monograph (Other academic)
Abstract [en]

Turn-taking in spoken dialogue represents a complex cooperative process wherein participants use verbal and non-verbal cues to coordinate who speaks and who listens, to anticipate speaker transitions, and to produce backchannels (e.g., "mhm", "uh-huh") at the right places. This thesis frames turn-taking as the modeling of the voice activity dynamics of dialogue interlocutors, with a focus on predictive modeling of these dynamics using both text- and audio-based deep learning models. Crucially, the models operate incrementally, estimating the activity dynamics across all potential dialogue states and interlocutors throughout a conversation. The aim of these models is to increase the responsiveness of Spoken Dialogue Systems (SDS) while minimizing interruptions. However, considerable focus is also put on the analytical capabilities of these models, which can serve as data-driven, model-based tools for analyzing human conversational patterns in general.

This thesis focuses on the development and analysis of two distinct models of turn-taking: TurnGPT, operating in the verbal domain, and the Voice Activity Projection (VAP) model, operating in the acoustic domain. Trained with general prediction objectives, these models offer versatility beyond turn-taking, enabling novel analyses of spoken dialogue. Utilizing attention- and gradient-based techniques, this thesis sheds light on the crucial role of context in estimating speaker transitions within the verbal domain. The potential of incorporating TurnGPT into SDSs (employing a sampling-based strategy to predict upcoming speaker transitions from incomplete text, namely words yet to be transcribed by the ASR) is investigated as a way to enhance system responsiveness. The VAP model, which predicts the joint voice activity of both dialogue interlocutors, is introduced and adapted to handle stereo-channel audio. The model's prosodic sensitivity is examined both in targeted utterances and in extended spoken dialogues. This analysis reveals that while intonation is crucial for distinguishing syntactically ambiguous events, it plays a less important role in general turn-taking within long-form dialogues. The VAP model's analytical capabilities are also highlighted: assessing the impact of filled pauses and serving as an evaluation tool for conversational TTS, determining its ability to produce prosodically relevant turn-taking cues.

Abstract [sv] (English translation)

Turn-taking in spoken dialogue involves a complex cooperative process in which the speakers use prosodic and semantic cues to coordinate who speaks and who listens, to anticipate turn shifts, and to produce backchannel signals (e.g., "mhm", "uh-huh") at the right places. This thesis models turn-taking in terms of the voice activity dynamics of the speakers, with a focus on predictive modeling of these dynamics using both text- and audio-based machine learning models. These models operate incrementally, estimating the activity dynamics across all potential dialogue states and interlocutors during a conversation. The aim is for these models to increase the responsiveness of speech-based dialogue systems while minimizing how often the system interrupts the user. Beyond these applications, considerable focus is also placed on exploring how these models can be used as data-driven, model-based tools for analyzing general human conversational patterns.

This thesis focuses on the implementation and analysis of two distinct models of turn-taking: TurnGPT, which processes verbal information (text), and Voice Activity Projection (VAP), which processes acoustic information (speech). The models are trained by optimizing general prediction objectives, which enables uses beyond turn-taking alone, e.g., for novel analyses of spoken dialogue. Using attention- and gradient-based techniques, this thesis highlights the crucial role of context in classifying speaker transitions within the verbal domain. The possibility of integrating TurnGPT into dialogue systems (using a sampling-based strategy to predict upcoming turn shifts from incomplete text, i.e., words not yet transcribed by the speech recognizer) is investigated as a way to improve system responsiveness. The VAP model, which models the joint voice activity of both dialogue participants, is introduced and adapted to handle stereo audio. The model's prosodic sensitivity is examined both in specifically selected utterances and in longer dialogues. This analysis shows that while intonation is crucial for distinguishing syntactically ambiguous utterances, it plays a less important role in general turn-taking in longer dialogues. The VAP model's analytical capabilities are highlighted for assessing the effect of filled pauses and as an evaluation tool for conversational speech synthesis, determining its ability to produce prosodically relevant turn-taking cues.

Place, publisher, year, edition, pages
Sweden: KTH Royal Institute of Technology, 2023. p. ix, 183
Series
TRITA-EECS-AVL ; 2023:81
Keywords
turn-taking, spoken dialog system, human-computer interaction, turtagning, talad dialog, människa-datorinteraktion
National Category
Natural Language Processing; Computer Sciences
Research subject
Computer Science; Human-computer Interaction; Speech and Music Communication
Identifiers
urn:nbn:se:kth:diva-339630 (URN); 978-91-8040-756-4 (ISBN)
Public defence
2023-12-08, F3, Lindstedtsvägen 26, Stockholm, 10:00 (English)
Funder
Riksbankens Jubileumsfond, P20-0484; Swedish Research Council, 2020-03812
Note

QC 20231115

Available from: 2023-11-15. Created: 2023-11-15. Last updated: 2025-02-01. Bibliographically approved.
Jiang, B., Ekstedt, E. & Skantze, G. (2023). Response-conditioned Turn-taking Prediction. In: Findings of the Association for Computational Linguistics, ACL 2023. Paper presented at the 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023, July 9-14, 2023, Toronto, Canada (pp. 12241-12248). Association for Computational Linguistics (ACL)
Response-conditioned Turn-taking Prediction
2023 (English). In: Findings of the Association for Computational Linguistics, ACL 2023, Association for Computational Linguistics (ACL), 2023, p. 12241-12248. Conference paper, Published paper (Refereed)
Abstract [en]

Previous approaches to turn-taking and response generation in conversational systems have treated them as a two-stage process: first, the end of a turn is detected (based on conversation history), and then the system generates an appropriate response. Humans, however, do not take the turn simply because it is likely, but also consider whether what they want to say fits the position. In this paper, we present a model (an extension of TurnGPT) that conditions the end-of-turn prediction on both the conversation history and what the next speaker wants to say. We find that our model consistently outperforms the baseline model on a variety of metrics. The improvement is most prominent in two scenarios where turn predictions can be ambiguous from the conversation history alone: 1) when the current utterance contains a statement followed by a question; and 2) when the end of the current utterance semantically matches the response. By treating turn-prediction and response-ranking as a one-stage process, our model can also be used as an incremental response ranker, applicable in various settings.
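The one-stage use as a response ranker can be sketched as below. `eot_prob` is a hypothetical stand-in for the extended TurnGPT: a callable that scores how probable a speaker transition is when a candidate response is considered together with the conversation history.

```python
def rank_responses(history, candidates, eot_prob):
    """Order candidate responses by P(speaker transition | history, response)."""
    return sorted(candidates, key=lambda c: eot_prob(history, c), reverse=True)

def toy_eot_prob(history, response):
    # Toy heuristic purely for illustration (NOT the paper's model):
    # responses that echo words from the history are treated as better fits.
    hist_words = set(history.lower().split())
    overlap = len(hist_words & set(response.lower().split()))
    return overlap / (len(response.split()) + 1)
```

In a real system the scorer would be the response-conditioned language model; the ranking wrapper itself stays the same.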

Place, publisher, year, edition, pages
Association for Computational Linguistics (ACL), 2023
National Category
Natural Language Processing
Identifiers
urn:nbn:se:kth:diva-350243 (URN); 10.18653/v1/2023.findings-acl.776 (DOI); 2-s2.0-85175451617 (Scopus ID)
Conference
61st Annual Meeting of the Association for Computational Linguistics, ACL 2023, July 9-14, 2023, Toronto, Canada
Projects
tmh_turntaking
Note

Part of ISBN 9781959429623

QC 20241028

Available from: 2024-07-11. Created: 2024-07-11. Last updated: 2025-02-07. Bibliographically approved.
Ekstedt, E. & Skantze, G. (2023). Show & Tell: Voice Activity Projection and Turn-taking. In: Interspeech 2023. Paper presented at the 24th Annual Conference of the International Speech Communication Association, Interspeech 2023, August 20-24, 2023, Dublin, Ireland (pp. 2020-2021). International Speech Communication Association
Show & Tell: Voice Activity Projection and Turn-taking
2023 (English). In: Interspeech 2023, International Speech Communication Association, 2023, p. 2020-2021. Conference paper, Published paper (Refereed)
Abstract [en]

We present Voice Activity Projection (VAP), a model trained on spontaneous spoken dialog with the objective of incrementally predicting future voice activity. Similar to a language model, it is trained through self-supervised learning and outputs a probability distribution over discrete states that correspond to the joint future voice activity of the dialog interlocutors. The model is well-defined over overlapping speech regions, resilient to microphone "bleed-over", and considers the speech of both speakers (e.g., a user and an agent) to provide the most likely next speaker. VAP is a general turn-taking model that can serve as the basis for turn-taking decisions in spoken dialog systems, an automatic tool useful for linguistics and conversation analysis, an automatic evaluation metric for conversational text-to-speech models, and possibly many other tasks related to spoken dialog interaction.
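The discrete state space can be sketched as follows. This is a minimal reading of the idea, an assumption rather than the released code: each speaker's future window is split into bins, each bin is thresholded to a binary active/inactive flag, and the concatenated bits index one joint state (e.g. 4 bins per speaker gives 2**8 = 256 states for the output distribution).

```python
def window_to_state(bins_a, bins_b, threshold=0.5):
    """Map two speakers' per-bin voice-activity ratios (in [0, 1]) to an int.

    bins_a / bins_b: binned future activity ratios for speaker A / speaker B.
    Returns an index in [0, 2**(len(bins_a) + len(bins_b)) - 1].
    """
    bits = [1 if r >= threshold else 0 for r in list(bins_a) + list(bins_b)]
    state = 0
    for b in bits:
        state = (state << 1) | b  # pack bits into one integer index
    return state

N_STATES = 2 ** (4 + 4)  # 256 joint states with 4 projection bins per speaker
```

A softmax over these joint states (rather than an independent prediction per bin) is what lets the model represent how the two speakers' future activity depends on each other.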

Place, publisher, year, edition, pages
International Speech Communication Association, 2023
Keywords
spoken dialog, text-to-speech, turn-taking
National Category
Natural Language Processing; Computer Sciences
Identifiers
urn:nbn:se:kth:diva-337875 (URN); 001186650302038 (); 2-s2.0-85171575920 (Scopus ID)
Conference
The 24th Annual Conference of the International Speech Communication Association, Interspeech 2023, August 20-24, 2023, Dublin, Ireland
Note

QC 20241014

Available from: 2023-10-10. Created: 2023-10-10. Last updated: 2025-02-01. Bibliographically approved.
Jiang, B., Ekstedt, E. & Skantze, G. (2023). What makes a good pause? Investigating the turn-holding effects of fillers. In: Proceedings of the 20th International Congress of Phonetic Sciences (ICPhS). Paper presented at the 20th International Congress of Phonetic Sciences (ICPhS), August 7-11, 2023, Prague, Czech Republic (pp. 3512-3516). Prague: International Phonetic Association, Article ID 828.
What makes a good pause? Investigating the turn-holding effects of fillers
2023 (English). In: Proceedings of the 20th International Congress of Phonetic Sciences (ICPhS), Prague: International Phonetic Association, 2023, p. 3512-3516, article id 828. Conference paper, Published paper (Refereed)
Abstract [en]

Filled pauses (or fillers), such as uh and um, are frequent in spontaneous speech and can serve as a turn-holding cue for the listener, indicating that the current speaker is not done yet. In this paper, we use the recently proposed Voice Activity Projection (VAP) model, a deep learning model trained to predict the dynamics of conversation, to analyse the effects of filled pauses on the expected turn-hold probability. The results show that, while filled pauses do indeed have a turn-holding effect, it is perhaps not as strong as might be expected, probably due to the redundancy of other cues. We also find that the prosodic properties and position of the filler have a significant effect on the turn-hold probability. However, contrary to what has been suggested in previous work, there is no difference between uh and um in this regard.

Place, publisher, year, edition, pages
Prague: International Phonetic Association, 2023
Series
ICPhS Proceedings, ISSN 2412-0669
Keywords
Hesitation, fillers, turn-taking, spoken dialog, computational modelling
National Category
Natural Language Processing
Identifiers
urn:nbn:se:kth:diva-341383 (URN)
Conference
The 20th International Congress of Phonetic Sciences (ICPhS), August 7-11, 2023, Prague, Czech Republic
Projects
tmh_turntaking
Note

Part of ISBN 978-80-908114-2-3

QC 20241028

Available from: 2023-12-19. Created: 2023-12-19. Last updated: 2025-02-07. Bibliographically approved.
Ekstedt, E. & Skantze, G. (2022). How Much Does Prosody Help Turn-taking? Investigations using Voice Activity Projection Models. In: Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue. Paper presented at the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), Edinburgh, UK (pp. 541-551). Edinburgh, UK: Association for Computational Linguistics, Vol. 23, Article ID 2022.sigdial-1.51.
How Much Does Prosody Help Turn-taking? Investigations using Voice Activity Projection Models
2022 (English). In: Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue / [ed] Association for Computational Linguistics, Edinburgh, UK: Association for Computational Linguistics, 2022, Vol. 23, p. 541-551, article id 2022.sigdial-1.51. Conference paper, Published paper (Refereed)
Abstract [en]

Turn-taking is a fundamental aspect of human communication and can be described as the ability to take turns, project upcoming turn shifts, and supply backchannels at appropriate locations throughout a conversation. In this work, we investigate the role of prosody in turn-taking using the recently proposed Voice Activity Projection model, which incrementally models the upcoming speech activity of the interlocutors in a self-supervised manner, without relying on explicit annotation of turn-taking events, or the explicit modeling of prosodic features. Through manipulation of the speech signal, we investigate how these models implicitly utilize prosodic information. We show that these systems learn to utilize various prosodic aspects of speech both on aggregate quantitative metrics of long-form conversations and on single utterances specifically designed to depend on prosody.

Place, publisher, year, edition, pages
Edinburgh UK: Association for Computational Linguistics, 2022
Keywords
turn-taking, spoken dialog, voice activity projection, prosody
National Category
Natural Language Processing
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-322532 (URN); 10.18653/v1/2022.sigdial-1.51 (DOI); 2-s2.0-85161066300 (Scopus ID)
Conference
The 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), Edinburgh, UK
Projects
tmh_turntaking
Funder
Riksbankens Jubileumsfond, P20-0484; Swedish Research Council, 2020-03812
Note

Won the best paper award.

QC 20221221

Part of ISBN 978-195591766-7

Available from: 2022-12-19. Created: 2022-12-19. Last updated: 2025-02-07. Bibliographically approved.
Ekstedt, E. & Skantze, G. (2022). Voice Activity Projection: Self-supervised Learning of Turn-taking Events. In: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH 2022. Paper presented at the 23rd Annual Conference of the International Speech Communication Association, INTERSPEECH 2022, Incheon, South Korea, September 18-22, 2022 (pp. 5190-5194). International Speech Communication Association, Article ID 10955.
Voice Activity Projection: Self-supervised Learning of Turn-taking Events
2022 (English). In: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH 2022, International Speech Communication Association, 2022, p. 5190-5194, article id 10955. Conference paper, Published paper (Refereed)
Abstract [en]

The modeling of turn-taking in dialog can be viewed as the modeling of the dynamics of the interlocutors' voice activity. We extend prior work and define the predictive task of Voice Activity Projection, a general, self-supervised objective, as a way to train turn-taking models without the need for labeled data. We highlight a theoretical weakness of prior approaches, arguing for the need to model the dependency of voice activity events in the projection window. We propose four zero-shot tasks, related to the prediction of upcoming turn-shifts and backchannels, and show that the proposed model outperforms prior work.

Place, publisher, year, edition, pages
International Speech Communication Association, 2022
Keywords
turn-taking, spoken dialog, voice activity projection, transformer
National Category
Natural Language Processing
Identifiers
urn:nbn:se:kth:diva-322531 (URN); 10.21437/Interspeech.2022-10955 (DOI); 000900724505074 (); 2-s2.0-85138623131 (Scopus ID)
Conference
The 23rd Annual Conference of the International Speech Communication Association, INTERSPEECH 2022, Incheon, South Korea, September 18-22, 2022
Projects
tmh_turntaking
Funder
Riksbankens Jubileumsfond, P20-0484; Swedish Research Council, 2020-03812
Note

QC 20241024

Available from: 2022-12-19. Created: 2022-12-19. Last updated: 2025-02-07. Bibliographically approved.
Ekstedt, E. & Skantze, G. (2021). Projection of Turn Completion in Incremental Spoken Dialogue Systems. In: SIGDIAL 2021 - 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, Proceedings of the Conference, Virtual, Singapore, July 29-31, 2021. Paper presented at the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), July 29-31, 2021, Singapore (pp. 431-437). Association for Computational Linguistics
Projection of Turn Completion in Incremental Spoken Dialogue Systems
2021 (English). In: SIGDIAL 2021 - 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, Proceedings of the Conference (Virtual, Singapore, July 29-31, 2021), Association for Computational Linguistics, 2021, p. 431-437. Conference paper, Published paper (Refereed)
Abstract [en]

The ability to take turns in a fluent way (i.e., without long response delays or frequent interruptions) is a fundamental aspect of any spoken dialog system. However, practical speech recognition services typically induce a long response delay, as it takes time before the processing of the user's utterance is complete. There is a considerable amount of research indicating that humans achieve fast response times by projecting what the interlocutor will say and estimating upcoming turn completions. In this work, we implement this mechanism in an incremental spoken dialog system, by using a language model that generates possible futures to project upcoming completion points. In theory, this could make the system more responsive, while still having access to semantic information not yet processed by the speech recognizer. We conduct a small study which indicates that this is a viable approach for practical dialog systems, and that this is a promising direction for future research.
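The sampling-based projection can be sketched as below. `sample_continuation` is a hypothetical stand-in for the language model used in the paper: the turn is judged complete in proportion to how often a sampled continuation ends the turn immediately (represented here by an empty continuation).

```python
import random

def p_turn_complete(prefix, sample_continuation, n_samples=50, seed=0):
    """Estimate P(turn is complete | prefix) by sampling possible futures.

    sample_continuation(prefix, rng) -> list of continuation tokens;
    an empty list means the model predicts the turn ends here.
    """
    rng = random.Random(seed)
    ends = sum(1 for _ in range(n_samples)
               if not sample_continuation(prefix, rng))
    return ends / n_samples

def toy_model(prefix, rng):
    # Toy stand-in (NOT the paper's language model): after a longer prefix,
    # usually stop; after a short one, usually keep talking.
    p_stop = 0.9 if len(prefix.split()) >= 4 else 0.1
    return [] if rng.random() < p_stop else ["<more>", "<words>"]
```

A dialogue system could trigger its response preparation once this estimate crosses a threshold, instead of waiting for the speech recognizer's final result.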

Place, publisher, year, edition, pages
Association for Computational Linguistics, 2021
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-304761 (URN); 10.18653/v1/2021.sigdial-1.45 (DOI); 000707001800045 (); 2-s2.0-85136067428 (Scopus ID)
Conference
The 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), July 29-31, 2021, Singapore
Projects
tmh_turntaking
Note

Part of proceedings: ISBN 978-1-954085-81-7, QC 20230117

Available from: 2021-11-12. Created: 2021-11-12. Last updated: 2025-05-27. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-3513-4132
