kth.se Publications
1 - 2 of 2
  • 1.
    Larson Holmgren, David
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Särnell, Adam
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Facilitating reflection on climate change using interactive sonification2022In: Proceedings of ISon 2022, 7th Interactive Sonification Workshop, BSCC, University of Bremen, Germany, September 22–23, 2022, 2022Conference paper (Refereed)
    Abstract [en]

    This study explores the possibility of using musical soundscapes to facilitate reflection on the impacts of climate change. By sonifying historical and future climate data, an interactive timeline was created in which the user can explore a soundscape that changes over time. A prototype was developed and tested in a user study with 15 participants. Results indicate that the prototype successfully elicits the emotions it was designed to communicate and that it influences the participants' reflections. However, it remains uncertain how much the prototype actually helped them while reflecting.

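The abstract above describes sonifying a climate timeline but does not publish the data-to-sound mapping. As a rough illustration of the general idea, the sketch below maps a made-up temperature-anomaly timeline to two example synthesis parameters; all names, value ranges, and data points are assumptions, not the authors' design.

```python
# Hypothetical sketch: the paper does not publish its mapping, so the data,
# parameter ranges, and function names below are illustrative assumptions.

def map_anomaly_to_sound(anomaly_c, lo=-0.5, hi=3.0):
    """Map a temperature anomaly (degrees C) to soundscape parameters.

    Normalizes the anomaly into [0, 1] and derives two example parameters:
    a pitch offset in semitones and a 'tension' level that could drive
    timbre or dissonance in a synthesizer.
    """
    t = min(max((anomaly_c - lo) / (hi - lo), 0.0), 1.0)
    pitch_semitones = 12 * t   # up to one octave above the base pitch
    tension = t ** 2           # nonlinear ramp: late changes feel stronger
    return pitch_semitones, tension

# Illustrative timeline of historical and projected anomalies (values made up).
timeline = [(1900, -0.2), (1960, 0.0), (2020, 1.1), (2100, 2.8)]
for year, anomaly in timeline:
    pitch, tension = map_anomaly_to_sound(anomaly)
    print(f"{year}: +{pitch:.1f} semitones, tension {tension:.2f}")
```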
  • 2.
    Myresten, Emil
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Larson Holmgren, David
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Sonification of Twitter Hashtags Using Earcons Based on the Sound of Vowels2021In: Proceedigns of the 2nd Nordic Sound and Music Computing Conference, Zenodo , 2021Conference paper (Refereed)
    Abstract [en]

    The number of notifications we receive from our digital devices is higher today than ever, often causing distress as users must constantly move their devices into the center of attention and digest the incoming information visually. In this study we tested the use of short sound messages, earcons, to notify users of the arrival of different Twitter messages. The idea was to keep the arrival of new messages in the periphery of attention while the user monitors other activities. Using Twitter hashtags as the underlying data, a sonic abstraction was made by mapping the vowels present in a hashtag to a melody and by enhancing the formant frequencies of those vowels. This raises the question of whether enhancing vowel presence through formant synthesis aids the implicit learning of earcons, with Twitter hashtags as the underlying text data. A methodology is described in which each phonetic vowel is mapped to its fundamental frequency, f0, and its first two formant frequencies, f1 and f2, together with a rhythmic mapping based on the hashtag's syllables and on where the emphasis lies. An application was developed that receives tweets in real time and plays the earcons associated with the hashtags of actual Twitter messages. Results of a user test show that participants were able to recognize, to a certain degree, the earcons related to each of the hashtags used in the experiment.

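This second abstract names the concrete ingredients of the mapping: per-vowel f0, f1, and f2, one note per vowel, and a rhythm driven by syllables. A minimal sketch of that idea follows, assuming textbook formant values and simple additive synthesis; it is not the authors' implementation, and rhythm and emphasis handling are omitted.

```python
# Hypothetical sketch of the vowel-to-earcon idea: the formant table and
# synthesis method are illustrative, not the authors' implementation.
import numpy as np

SR = 44100  # sample rate in Hz

# Approximate first and second formant frequencies (f1, f2, in Hz) for a few
# vowels; f0 is chosen per note to form the melody. Textbook approximations.
FORMANTS = {"a": (700, 1200), "e": (400, 2200), "i": (300, 2700),
            "o": (450, 800), "u": (325, 700)}

def vowel_tone(vowel, f0, dur=0.25, amp=0.2):
    """Synthesize one earcon note: a tone at f0 with the vowel's two
    formant frequencies mixed in to 'enhance' the vowel quality."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    f1, f2 = FORMANTS[vowel]
    wave = (np.sin(2 * np.pi * f0 * t)
            + 0.5 * np.sin(2 * np.pi * f1 * t)
            + 0.25 * np.sin(2 * np.pi * f2 * t))
    env = np.minimum(1.0, 10 * (dur - t) / dur)  # quick fade-out, avoids clicks
    return amp * env * wave

def hashtag_earcon(vowels, base_f0=220.0):
    """Concatenate one note per vowel; a rising semitone per note makes
    a simple melody out of the hashtag's vowel sequence."""
    notes = [vowel_tone(v, base_f0 * 2 ** (i / 12)) for i, v in enumerate(vowels)]
    return np.concatenate(notes)

# e.g. the vowels of a hashtag like #climate -> "i", "a", "e"
earcon = hashtag_earcon(["i", "a", "e"])
print(f"earcon: {len(earcon) / SR:.2f} s, peak {np.abs(earcon).max():.2f}")
```

Mixing raw sinusoids at the formant frequencies is the crudest way to suggest a vowel; a closer match to the abstract's "formant synthesis" would filter a harmonically rich source through resonant band-pass filters at f1 and f2.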