Modality Convergence in a Multimodal Dialogue System
Bell, Linda
KTH, Former Departments (before 2005), Speech, Music and Hearing.
Boye, Johan
KTH, Former Departments (before 2005), Speech, Music and Hearing. ORCID iD: 0000-0003-2600-7668
Gustafson, Joakim
KTH, Former Departments (before 2005), Speech, Music and Hearing. ORCID iD: 0000-0002-0397-6442
2000 (English) In: Proceedings of Götalog, 2000, pp. 29-34. Conference paper, Published paper (Refereed)
Abstract [en]

When designing multimodal dialogue systems allowing speech as well as graphical operations, it is important to understand not only how people make use of the different modalities in their utterances, but also how the system might influence a user’s choice of modality by its own behavior. This paper describes an experiment in which subjects interacted with two versions of a simulated multimodal dialogue system. One version used predominantly graphical means when referring to specific objects; the other used predominantly verbal referential expressions. The purpose of the study was to find out what effect, if any, the system’s referential strategy had on the user’s behavior. The results provided limited support for the hypothesis that the system can influence users to adopt another modality for the purpose of referring.

Place, publisher, year, edition, pages
2000. pp. 29-34
National subject category
Engineering and Technology
Identifiers
URN: urn:nbn:se:kth:diva-13328
OAI: oai:DiVA.org:kth-13328
DiVA, id: diva2:323663
Conference
Fourth Workshop on the Semantics and Pragmatics of Dialogue
Note

QC 20100611

Available from: 2010-06-11 Created: 2010-06-11 Last updated: 2018-05-21 Bibliographically approved
Part of thesis
1. Developing Multimodal Spoken Dialogue Systems: Empirical Studies of Spoken Human–Computer Interaction
2002 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This thesis presents work done during the last ten years on developing five multimodal spoken dialogue systems, and the empirical user studies that have been conducted with them. The dialogue systems have been multimodal, giving information both verbally with animated talking characters and graphically on maps and in text tables. To be able to study a wider range of user behaviour, each new system has been in a new domain and with a new set of interactional abilities. The five systems presented in this thesis are: the Waxholm system, where users could ask about the boat traffic in the Stockholm archipelago; the Gulan system, where people could retrieve information from the Yellow Pages of Stockholm; the August system, which was a publicly available system where people could get information about the author Strindberg, KTH and Stockholm; the AdApt system, which allowed users to browse apartments for sale in Stockholm; and the Pixie system, where users could help an animated agent to fix things in a visionary apartment publicly available at the Telecom museum in Stockholm. Some of the dialogue systems have been used in controlled experiments in laboratory environments, while others have been placed in public environments where members of the general public have interacted with them. All spoken human-computer interactions have been transcribed and analyzed to increase our understanding of how people interact verbally with computers, and to obtain knowledge on how spoken dialogue systems can utilize the regularities found in these interactions. This thesis summarizes the experiences from building these five dialogue systems and presents some of the findings from the analyses of the collected dialogue corpora.

Place, publisher, year, edition, pages
Stockholm: KTH, 2002. pp. x, 96
Series
Trita-TMH ; 2002:8
Keywords
Spoken dialogue system, multimodal, speech, GUI, animated agents, embodied conversational characters, talking heads, empirical user studies, speech corpora, system evaluation, system development, Wizard of Oz simulations, system architecture, linguis
National subject category
Engineering and Technology
Identifiers
urn:nbn:se:kth:diva-3460 (URN)
Public defence
2002-12-20, 00:00
Note
QC 20100611. Available from: 2002-12-11 Created: 2002-12-11 Last updated: 2010-06-11 Bibliographically approved

Open Access in DiVA

fulltext (974 kB)
File information
File name: FULLTEXT01.pdf
File size: 974 kB
Checksum (SHA-512):
7e8141a603dbb60fcd7d113ba0e3823b893deefebae18ecf30155ac816cd10125046b2a52de6dc156bc5ae41de8c37730f3ffa71820ebd9ec3a9f57f3fb22446
Type: fulltext
Mimetype: application/pdf
