Crowd-Sourced Design of Artificial Attentive Listeners
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH. ORCID iD: 0000-0003-3687-6189
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-8874-6629
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH. ORCID iD: 0000-0001-5620-377X
2017 (English). In: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, International Speech Communication Association, 2017, Vol. 2017, p. 854-858. Conference paper, Published paper (Refereed)
Abstract [en]

Feedback generation is an important component of human-human communication. Humans can choose to signal support, understanding, agreement or scepticism by means of feedback tokens. Many studies have focused on the timing of feedback behaviours. In the current study, however, we keep the timing constant and instead focus on the lexical form and prosody of feedback tokens, as well as on their sequential patterns. To this end, we crowdsourced participants' feedback behaviour in identical interactional contexts in order to model a virtual agent that can provide feedback as an attentive/supportive as well as an attentive/sceptical listener. The resulting models were realised in a robot, which was evaluated by third-party observers.
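The full text is not available in this record, but the core idea the abstract describes — choosing the lexical form of a feedback token conditioned on listener stance (attentive/supportive vs. attentive/sceptical), based on crowd-sourced behaviour — can be illustrated with a minimal hypothetical sketch. The token sets and frequency counts below are invented for illustration and are not taken from the paper:

```python
import random

# Hypothetical crowd-sourced counts of feedback tokens per listener stance.
# In the study's terms, each stance would have its own distribution over
# lexical forms; the numbers here are made up for illustration.
CROWD_COUNTS = {
    "supportive": {"mm-hm": 40, "yeah": 30, "right": 20, "okay": 10},
    "sceptical":  {"hmm": 45, "really?": 30, "okay...": 15, "mm": 10},
}

def sample_feedback_token(stance, rng=random):
    """Sample a feedback token proportionally to its crowd-sourced count."""
    counts = CROWD_COUNTS[stance]
    tokens = list(counts)
    weights = [counts[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# A supportive listener tends to pick affirming tokens; a sceptical one
# tends to pick doubting tokens, with timing held constant elsewhere.
print(sample_feedback_token("supportive"))
print(sample_feedback_token("sceptical"))
```

This sketch covers only lexical choice; the study additionally models prosody and the sequential patterning of tokens, which a real implementation would condition on as well.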

Place, publisher, year, edition, pages
International Speech Communication Association, 2017. Vol. 2017, p. 854-858
Series
Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, ISSN 2308-457X ; 2017
National Category
Natural Language Processing
Identifiers
URN: urn:nbn:se:kth:diva-268357
DOI: 10.21437/Interspeech.2017-926
ISI: 000457505000181
Scopus ID: 2-s2.0-85028998444
ISBN: 978-1-5108-4876-4 (print)
OAI: oai:DiVA.org:kth-268357
DiVA id: diva2:1394170
Conference
18th Annual Conference of the International Speech Communication Association, INTERSPEECH 2017; Stockholm; Sweden; 20 August 2017 through 24 August 2017
Note

QC 20200703

Available from: 2020-02-18. Created: 2020-02-18. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Oertel, Catharine; Jonell, Patrik; Kontogiorgos, Dimosthenis; Mendelson, Joseph; Beskow, Jonas; Gustafson, Joakim
