From acoustic cues to an expressive agent
Bresin, Roberto. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-3086-0322
2006 (English). In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) / [ed] Gibet, S; Courty, N; Kamp, JF, 2006, Vol. 3881, pp. 280-291. Conference paper, Published paper (Refereed)
Abstract [en]

This work proposes a new way of providing feedback on expressivity in music performance. Starting from studies of expressive music performance, we developed a system in which visual feedback is given to the user through a graphical representation of a human face. The first part of the system, previously developed by researchers at KTH Stockholm and at Uppsala University, performs real-time extraction and analysis of acoustic cues from the music performance. The extracted cues are sound level, tempo, articulation, attack time, and spectral energy. From these cues the system derives a high-level interpretation of the performer's emotional intention, which is classified into one basic emotion, such as happiness, sadness, or anger. We have implemented an interface between that system and the embodied conversational agent Greta, developed at the University of Rome "La Sapienza" and the University of Paris 8. We model the expressivity of the agent's facial animation with a set of six dimensions that characterize the manner of behavior execution. In this paper we first describe a mapping between the acoustic cues and the expressivity dimensions of the face. We then show how to determine the facial expression corresponding to the emotional intention resulting from the acoustic analysis, using the sound level and tempo of the music to control the intensity and the temporal variation of muscular activation.
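As a rough illustration of the pipeline the abstract describes, the sketch below is a minimal, hypothetical Python rendering, not the authors' code: the class and function names, the normalization, the rule-based classifier and its thresholds, and the six dimension labels (borrowed from the commonly cited Greta expressivity parameters) are all assumptions, since the record does not specify the actual mapping.

```python
from dataclasses import dataclass

@dataclass
class AcousticCues:
    """The five cues named in the abstract, assumed normalized to 0..1."""
    sound_level: float      # loudness of the performance
    tempo: float            # 0 = slow, 1 = fast
    articulation: float     # 0 = legato, 1 = staccato
    attack_time: float      # 0 = soft attack, 1 = sharp attack
    spectral_energy: float  # relative high-frequency energy

def classify_emotion(c: AcousticCues) -> str:
    """Toy rule-based stand-in for the system's high-level interpretation
    of emotional intention (the real classifier is not described here)."""
    if c.sound_level > 0.6 and c.tempo > 0.6:
        return "anger" if c.articulation > 0.5 else "happiness"
    if c.sound_level < 0.4 and c.tempo < 0.4:
        return "sadness"
    return "neutral"

def expressivity(c: AcousticCues) -> dict[str, float]:
    """Hypothetical cue-to-dimension mapping; per the abstract, sound level
    controls the intensity of muscular activation and tempo controls its
    temporal variation."""
    return {
        "overall_activation": c.sound_level,
        "temporal_extent":    1.0 - c.tempo,  # faster music -> quicker movements
        "spatial_extent":     c.spectral_energy,
        "fluidity":           1.0 - c.articulation,
        "power":              c.attack_time,
        "repetitivity":       c.tempo,
    }

if __name__ == "__main__":
    cues = AcousticCues(0.8, 0.7, 0.2, 0.6, 0.5)
    print(classify_emotion(cues))  # loud, fast, legato -> "happiness"
    print(expressivity(cues))
```

The sign conventions and weights are placeholders chosen only to make the direction of each mapping plausible; the paper itself defines the actual cue-to-dimension correspondence.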

Place, publisher, year, edition, pages
2006. Vol. 3881, pp. 280-291
Series
Lecture Notes in Artificial Intelligence, ISSN 0302-9743; 3881
National subject category
Computer Sciences; Human-Computer Interaction (Interaction Design); Computer Vision and Robotics (Autonomous Systems); Psychology; Music
Identifiers
URN: urn:nbn:se:kth:diva-42029
DOI: 10.1007/11678816_31
ISI: 000237042600031
Scopus ID: 2-s2.0-33745549699
ISBN: 3-540-32624-3 (print)
OAI: oai:DiVA.org:kth-42029
DiVA id: diva2:446085
Conference
6th International Workshop on Gesture in Human-Computer Interaction and Simulation, Berder Island, France, May 18-20, 2005
Note

QC 20150708

Available from: 2011-10-06. Created: 2011-10-05. Last updated: 2018-01-12. Bibliographically reviewed.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus
