kth.se Publications
1 - 6 of 6
  • 1.
    Axelsson, Agnes
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Buschmeier, Hendrik
    Skantze, Gabriel
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Modeling Feedback in Interaction With Conversational Agents—A Review. 2022. In: Frontiers in Computer Science, E-ISSN 2624-9898, Vol. 4, article id 744574. Article, review/survey (Refereed)
    Abstract [en]

    Intelligent agents that interact with humans through conversation (such as a robot, embodied conversational agent, or chatbot) need to receive feedback from the human to make sure that their communicative acts have the intended consequences. At the same time, the human interacting with the agent will also seek feedback, in order to ensure that their own communicative acts have the intended consequences. In this review article, we give an overview of past and current research on how intelligent agents can both give meaningful feedback to humans and understand feedback given by users. The review covers feedback across different modalities (e.g., speech, head gestures, gaze, and facial expression), different forms of feedback (e.g., backchannels, clarification requests), and models that allow the agent to assess the user's level of understanding and adapt its behavior accordingly. Finally, we analyse some shortcomings of current approaches to modeling feedback and identify important directions for future research.

  • 2.
    Axelsson, Agnes
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Skantze, Gabriel
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Multimodal User Feedback During Adaptive Robot-Human Presentations. 2022. In: Frontiers in Computer Science, E-ISSN 2624-9898, Vol. 3. Article in journal (Refereed)
    Abstract [en]

    Feedback is an essential part of all communication, and agents communicating with humans must be able to both give and receive feedback in order to ensure mutual understanding. In this paper, we analyse multimodal feedback given by humans towards a robot that is presenting a piece of art in a shared environment, similar to a museum setting. The data analysed contain both video and audio recordings of 28 participants, richly annotated both in terms of multimodal cues (speech, gaze, head gestures, facial expressions, and body pose) and the polarity of any feedback (negative, positive, or neutral). We train statistical and machine learning models on the dataset and find that random forest models and multinomial regression models perform well at predicting the polarity of the participants' reactions. An analysis of the different modalities shows that most information is found in the participants' speech and head gestures, while much less information is found in their facial expressions, body pose, and gaze. An analysis of the timing of the feedback shows that most feedback is given when the robot pauses (and thereby invites feedback), but that the exact timing of the feedback does not affect its meaning.

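The polarity-prediction setup in the abstract above can be sketched with a random forest classifier. This is an illustrative sketch only: the feature names, synthetic data, and label thresholds below are hypothetical and not taken from the paper's annotated dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
# Toy per-response features standing in for annotated multimodal cues
speech_score = rng.normal(0.0, 1.0, n)    # e.g. lexical polarity of the utterance
head_gesture = rng.normal(0.0, 1.0, n)    # e.g. nod (+) vs. shake (-) intensity
gaze_on_robot = rng.uniform(0.0, 1.0, n)  # fraction of time gazing at the robot
X = np.column_stack([speech_score, head_gesture, gaze_on_robot])

# Toy polarity labels (0 = negative, 1 = neutral, 2 = positive), driven mainly
# by speech and head gestures, mirroring the paper's finding that those
# modalities carry most of the information
signal = speech_score + 0.8 * head_gesture
y = np.where(signal > 0.5, 2, np.where(signal < -0.5, 0, 1))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

accuracy = clf.score(X_test, y_test)       # held-out polarity accuracy
importances = clf.feature_importances_     # per-modality feature importance
```

Inspecting `feature_importances_` after training is one simple way to ask which modalities carry the most information, analogous to the per-modality analysis the abstract describes.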
  • 3.
    Baum, Kevin
    et al.
    Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI), Kaiserslautern, Germany.
    Bryson, Joanna
    Hertie School, Berlin, Germany.
    Dignum, Frank
    Umeå University, Umeå, Sweden.
    Dignum, Virginia
    Umeå University, Umeå, Sweden.
    Grobelnik, Marko
    OECD, Paris, France.
    Hoos, Holger
    RWTH Aachen University, Aachen, Germany.
    Irgens, Morten
    Oslo Metropolitan University, CLAIRE, Oslo, Norway.
    Lukowicz, Paul
    Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI), Kaiserslautern, Germany.
    Muller, Catelijne
    ALLAI, Amsterdam, Netherlands.
    Rossi, Francesca
    IBM Corporation, Yorktown Heights, NY, USA.
    Shawe-Taylor, John
    International Research Centre on Artificial Intelligence (IRCAI), Ljubljana, Slovenia.
    Theodorou, Andreas
    VerAI, Umeå, Sweden.
    Vinuesa, Ricardo
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Fluid Mechanics and Engineering Acoustics.
    From fear to action: AI governance and opportunities for all. 2023. In: Frontiers in Computer Science, E-ISSN 2624-9898, Vol. 5, article id 1210421. Article in journal (Other academic)
  • 4.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. Institut de Recherche et Coordination Acoustique/Musique (IRCAM), Sciences et Technologies de la Musique et du Son (STMS), UMR 9912, Paris, France.
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Agres, Kat
    National University of Singapore, Centre for Music and Health, Yong Siew Toh Conservatory of Music, Singapore.
    Lucas, Alex
    Queen's University Belfast, Sonic Arts Research Centre, Belfast, Northern Ireland.
    Editorial: New advances and novel applications of music technologies for health, well-being, and inclusion. 2024. In: Frontiers in Computer Science, E-ISSN 2624-9898, Vol. 6, article id 1358454. Article in journal (Refereed)
  • 5.
    Jonell, Patrik
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Moell, Birger
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Håkansson, Krister
    Karolinska Institutet, Department of Neurobiology, Care Sciences and Society, Stockholm, Sweden; Karolinska University Hospital, Stockholm, Sweden.
    Henter, Gustav Eje
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Kucherenko, Taras
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Mikheeva, Olga
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Hagman, Göran
    Karolinska Institutet, Department of Neurobiology, Care Sciences and Society, Stockholm, Sweden; Karolinska University Hospital, Stockholm, Sweden.
    Holleman, Jasper
    Karolinska Institutet, Department of Neurobiology, Care Sciences and Society, Stockholm, Sweden; Karolinska University Hospital, Stockholm, Sweden.
    Kivipelto, Miia
    Karolinska Institutet, Department of Neurobiology, Care Sciences and Society, Stockholm, Sweden; Karolinska University Hospital, Stockholm, Sweden.
    Kjellström, Hedvig
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Gustafson, Joakim
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Beskow, Jonas
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Multimodal Capture of Patient Behaviour for Improved Detection of Early Dementia: Clinical Feasibility and Preliminary Results. 2021. In: Frontiers in Computer Science, E-ISSN 2624-9898, Vol. 3, article id 642633. Article in journal (Refereed)
    Abstract [en]

    Non-invasive automatic screening for Alzheimer's disease has the potential to improve diagnostic accuracy while lowering healthcare costs. Previous research has shown that patterns in speech, language, gaze, and drawing can help detect early signs of cognitive decline. In this paper, we describe a highly multimodal system for unobtrusively capturing data during real clinical interviews conducted as part of cognitive assessments for Alzheimer's disease. The system uses nine different sensor devices (smartphones, a tablet, an eye tracker, a microphone array, and a wristband) to record interaction data during a specialist's first clinical interview with a patient, and is currently in use at Karolinska University Hospital in Stockholm, Sweden. Furthermore, complementary information in the form of brain imaging, psychological tests, speech therapist assessment, and clinical meta-data is also available for each patient. We detail our data-collection and analysis procedure and present preliminary findings that relate measures extracted from the multimodal recordings to clinical assessments and established biomarkers, based on data from the 25 patients gathered thus far. Our findings demonstrate the feasibility of our proposed methodology and indicate that the collected data can be used to improve clinical assessments of early dementia.
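A typical preprocessing step for a multi-device capture system like the one described above is aligning sensor streams recorded at different rates onto a shared timeline. The sketch below is illustrative only — the device names, sampling rates, and values are hypothetical, not the authors' pipeline — using pandas `merge_asof` for nearest-earlier-timestamp matching.

```python
import pandas as pd

# Toy streams: an eye tracker sampled every 10 ms and a wristband
# sampled every 40 ms (hypothetical rates)
gaze = pd.DataFrame({
    "t_ms": range(0, 200, 10),
    "pupil_mm": [3.0 + 0.01 * i for i in range(20)],
})
wrist = pd.DataFrame({
    "t_ms": range(0, 200, 40),
    "heart_rate": [70, 71, 72, 71, 70],
})

# merge_asof pairs each gaze sample with the latest wristband sample
# at or before it, within a 40 ms tolerance; both inputs must be
# sorted on the key column
aligned = pd.merge_asof(gaze, wrist, on="t_ms",
                        direction="backward", tolerance=40)
```

After this step, every row carries one synchronized observation per modality, which is the shape downstream feature extraction and statistical analysis usually expect.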

  • 6.
    Lindetorp, Hans
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. KMH Royal College of Music, Department of Music and Media Production, Stockholm, Sweden.
    Svahn, Maria
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Hölling, Josefine
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. Institut de Recherche et Coordination Acoustique/Musique (IRCAM), Sciences et Technologies de la Musique et du Son (STMS), Paris, France.
    Collaborative music-making: special educational needs school assistants as facilitators in performances with accessible digital musical instruments. 2023. In: Frontiers in Computer Science, E-ISSN 2624-9898, Vol. 5, article id 1165442. Article in journal (Refereed)
    Abstract [en]

    The field of research dedicated to Accessible Digital Musical Instruments (ADMIs) is growing, and there is increased interest in promoting diversity and inclusion in music-making. We have designed a novel system, built into previously tested ADMIs, that aims to involve assistants, students with Profound and Multiple Learning Disabilities (PMLD), and a professional musician in playing music together. In this study, the system is evaluated in a workshop setting using quantitative as well as qualitative methods. One of the main findings was that the sounds from the ADMIs added to the musical context without introducing errors that impacted the music negatively, even though the assistants mentioned experiencing a split between attending to different tasks and a feeling of insecurity about their musical contribution. We discuss the results in terms of how we perceive them as drivers of, or barriers to, reaching our overarching goal of organizing a joint concert that brings together students from the special educational needs (SEN) school with students from a music school with a specific focus on traditional orchestral instruments. Our study highlights how a system of networked and synchronized ADMIs could be conceptualized to include assistants more actively in collaborative music-making, as well as design considerations that support them as facilitators.
