1 - 3 of 3
  • 1.
    Edlund, Jens
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Beskow, Jonas
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    MushyPeek: A Framework for Online Investigation of Audiovisual Dialogue Phenomena. 2009. In: Language and Speech, ISSN 0023-8309, E-ISSN 1756-6053, Vol. 52, p. 351-367. Article in journal (Refereed)
    Abstract [en]

    Evaluation of methods and techniques for conversational and multimodal spoken dialogue systems is complex, as is gathering data for the modeling and tuning of such techniques. This article describes MushyPeek, an experiment framework that allows us to manipulate the audiovisual behavior of interlocutors in a setting similar to face-to-face human-human dialogue. The setup connects two subjects to each other over a Voice over Internet Protocol (VoIP) telephone connection and simultaneously provides each of them with an avatar representing the other. We present a first experiment which inaugurates, exemplifies, and validates the framework. The experiment corroborates earlier findings on the use of gaze and head pose gestures in turn-taking.

  • 2.
    Zellers, Margaret
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology. University of Stuttgart, Germany.
    Prosodic Variation and Segmental Reduction and Their Roles in Cuing Turn Transition in Swedish. 2017. In: Language and Speech, ISSN 0023-8309, E-ISSN 1756-6053, Vol. 60, no. 3, p. 454-478. Article in journal (Refereed)
    Abstract [en]

    Prosody has often been identified alongside syntax as a cue to turn hold or turn transition in conversational interaction. However, evidence for which prosodic cues are most relevant, and how strong those cues are, has been somewhat scattered. The current study addresses prosodic cues to turn transition in Swedish. A perception study looking closely at turn changes and holds in cases where the syntax does not lead inevitably to a particular outcome shows that Swedish listeners are sensitive to duration variations, even in the very short space of the final unstressed syllable of a turn, and that they may use pitch cues to a lesser extent. An investigation of production data indicates that duration, and to some extent segmental reduction, demonstrate consistent variation in relation to the types of turn boundaries they accompany, while fundamental frequency and glottalization do not. Taken together, these data suggest that duration may be the primary cue to turn transition in Swedish conversation, rather than fundamental frequency, as some other studies have suggested.

  • 3.
    Zellers, Margaret
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Ogden, Richard
    Exploring Interactional Features with Prosodic Patterns. 2014. In: Language and Speech, ISSN 0023-8309, E-ISSN 1756-6053, Vol. 57, no. 3, p. 285-309. Article in journal (Refereed)
    Abstract [en]

    This study adopts a multiple-methods approach to the investigation of prosody, drawing on insights from a quantitative methodology (experimental prosody research) as well as a qualitative one (conversation analysis). We use a k-means cluster analysis to investigate prosodic patterns in conversational sequences involving lexico-semantic contrastive structures. This combined methodology demonstrates that quantitative/statistical methods are a valuable tool for making relatively objective characterizations of acoustic features of speech, while qualitative methods are essential for interpreting the quantitative results. We find that in sequences that maintain global prosodic characteristics across contrastive structures, participants orient to interactional problems, such as determining who has the right to the floor, or avoiding disruption of an ongoing interaction. On the other hand, in sequences in which the global prosody is different across contrastive structures, participants do not generally appear to be orienting to such problems of alignment. Our findings expand the interpretation of "contrastive prosody" that is commonly used in experimental prosody approaches, while providing a way for conversation-analytic research to improve quantification and generalizability of findings.
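    The k-means cluster analysis mentioned in this abstract can be sketched as follows. This is a minimal illustration of the general technique, not the authors' actual pipeline: the feature choice (mean F0 and final-syllable duration) and the data points are hypothetical.

    ```python
    import numpy as np

    def kmeans(X, k, iters=50, seed=0):
        """Plain k-means: assign each point to its nearest centroid, then
        recompute each centroid as the mean of its assigned points."""
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            # Squared distance of every point to every centroid -> nearest-centroid label.
            dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
            labels = np.argmin(dists, axis=1)
            new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centroids[j] for j in range(k)])
            if np.allclose(new, centroids):
                break
            centroids = new
        return labels, centroids

    # Hypothetical prosodic features per utterance: [mean F0 (semitones), final-syllable duration (ms)]
    X = np.array([
        [2.0, 120.0], [2.5, 130.0], [1.8, 110.0],   # compressed prosodic pattern
        [8.0, 250.0], [7.5, 240.0], [8.2, 260.0],   # expanded prosodic pattern
    ])
    labels, centroids = kmeans(X, k=2)
    ```

    With well-separated feature clusters like these, the two prosodic patterns fall into distinct clusters regardless of the random initialization; in practice, features would be normalized first so that no single dimension dominates the distance metric.
    
    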
