The MonAMI Reminder: a spoken dialogue system for face-to-face interaction
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology. ORCID iD: 0000-0003-1399-6604
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology. ORCID iD: 0000-0001-9327-9482
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology. ORCID iD: 0000-0002-0397-6442
2009 (English). In: Proceedings of the 10th Annual Conference of the International Speech Communication Association, INTERSPEECH 2009, Brighton, U.K., 2009, pp. 300-303. Conference paper, published paper (refereed).
Abstract [en]

We describe the MonAMI Reminder, a multimodal spoken dialogue system which can assist elderly and disabled people in organising and initiating their daily activities. Based on deep interviews with potential users, we have designed a calendar and reminder application which uses an innovative mix of an embodied conversational agent, digital pen and paper, and the web to meet the needs of those users as well as the current constraints of speech technology. We also explore the use of head pose tracking for interaction and attention control in human-computer face-to-face interaction.

Place, publisher, year, edition, pages
Brighton, U.K., 2009, pp. 300-303.
Keyword [en]
Attention control, Daily activity, Digital pen and paper, Disabled people, Embodied conversational agent, Face-to-face interaction, Head-pose tracking, Human-computer, Multi-modal, Potential users, Speech technology, Spoken dialogue system
National Category
Computer Science; Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:kth:diva-52082
ISI: 000276842800073
Scopus ID: 2-s2.0-70450202579
OAI: oai:DiVA.org:kth-52082
DiVA: diva2:465376
Conference
10th Annual Conference of the International Speech Communication Association, INTERSPEECH 2009, Brighton, 6 September 2009 through 10 September 2009
Note
tmh_import_11_12_14. QC 20120207. Available from: 2011-12-14. Created: 2011-12-14. Last updated: 2012-02-07. Bibliographically approved.

Open Access in DiVA

No full text
Search in DiVA

By author/editor
Beskow, Jonas; Edlund, Jens; Granström, Björn; Gustafson, Joakim; Skantze, Gabriel; Tobiasson, Helena
By organisation
Speech Communication and Technology; Human-Computer Interaction, MDI (closed 20111231)
Computer Science; Language Technology (Computational Linguistics)
