Constraint Manipulation and Visualization in a Multimodal Dialogue System
KTH, Superseded Departments, Speech, Music and Hearing. ORCID iD: 0000-0002-0397-6442
KTH, Superseded Departments, Speech, Music and Hearing.
KTH, Superseded Departments, Speech, Music and Hearing. ORCID iD: 0000-0003-2600-7668
KTH, Superseded Departments, Speech, Music and Hearing. ORCID iD: 0000-0001-9327-9482
2002 (English). In: Proceedings of MultiModal Dialogue in Mobile Environments, 2002. Conference paper, Published paper (Other academic).
Place, publisher, year, edition, pages
2002.
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:kth:diva-13333
OAI: oai:DiVA.org:kth-13333
DiVA: diva2:323750
Conference
MultiModal Dialogue in Mobile Environments
Note
QC 20100611. Available from: 2010-06-11. Created: 2010-06-11. Last updated: 2010-06-11. Bibliographically approved.
In thesis
1. Developing Multimodal Spoken Dialogue Systems: Empirical Studies of Spoken Human–Computer Interaction
2002 (English). Doctoral thesis, comprehensive summary (Other scientific).
Abstract [en]

This thesis presents work done during the last ten years on developing five multimodal spoken dialogue systems, and the empirical user studies that have been conducted with them. The dialogue systems have been multimodal, giving information both verbally with animated talking characters and graphically on maps and in text tables. To be able to study a wider range of user behaviour, each new system has been in a new domain and with a new set of interactional abilities. The five systems presented in this thesis are: the Waxholm system, where users could ask about the boat traffic in the Stockholm archipelago; the Gulan system, where people could retrieve information from the Yellow Pages of Stockholm; the August system, which was a publicly available system where people could get information about the author Strindberg, KTH and Stockholm; the AdApt system, which allowed users to browse apartments for sale in Stockholm; and the Pixie system, where users could help an animated agent to fix things in a visionary apartment, publicly available at the Telecom museum in Stockholm. Some of the dialogue systems have been used in controlled experiments in laboratory environments, while others have been placed in public environments where members of the general public have interacted with them. All spoken human-computer interactions have been transcribed and analyzed to increase our understanding of how people interact verbally with computers, and to obtain knowledge on how spoken dialogue systems can utilize the regularities found in these interactions. This thesis summarizes the experiences from building these five dialogue systems and presents some of the findings from the analyses of the collected dialogue corpora.

Place, publisher, year, edition, pages
Stockholm: KTH, 2002. x, 96 p.
Series
Trita-TMH, 2002:8
Keyword
Spoken dialogue system, multimodal, speech, GUI, animated agents, embodied conversational characters, talking heads, empirical user studies, speech corpora, system evaluation, system development, Wizard of Oz simulations, system architecture, linguis
National Category
Engineering and Technology
Identifiers
urn:nbn:se:kth:diva-3460 (URN)
Public defence
2002-12-20, 00:00
Note
QC 20100611. Available from: 2002-12-11. Created: 2002-12-11. Last updated: 2010-06-11. Bibliographically approved.

Open Access in DiVA

No full text

Authority records

Gustafson, Joakim; Boye, Johan; Edlund, Jens

Search in DiVA

By author/editor
Gustafson, Joakim; Bell, Linda; Boye, Johan; Edlund, Jens
By organisation
Speech, Music and Hearing
Engineering and Technology
