The number of notifications we receive from our digital devices is higher today than ever, often causing distress as users must repeatedly bring their devices into the center of attention to digest the incoming information visually. In this study we tested the use of short sound messages, earcons, to notify users of the arrival of different Twitter messages. The idea was to keep the arrival of new messages in the periphery of attention while users simultaneously monitor other activities. Using Twitter hashtags as the underlying data, a sonic abstraction was created by mapping the vowels present in a hashtag to a melody and by enhancing the formant frequencies of these vowels. This raises the question of whether enhancing vowel presence through formant synthesis aids the implicit learning of earcons. A methodology is described in which each phonetic vowel is mapped to a fundamental frequency, f0, and its first two formant frequencies, f1 and f2, combined with a rhythmic mapping based on the hashtag's syllables and where the emphasis lies. An application was developed to receive tweets in real time and play the earcons associated with the hashtags of actual Twitter messages. Results of a user test show that participants were able, to a certain degree, to recognize the earcons related to each of the hashtags used in the experiment.
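
As a rough illustration of the kind of vowel-to-sound mapping the abstract describes, the Python sketch below pairs each vowel with a fundamental frequency, f0, and two formant frequencies, f1 and f2, and boosts the harmonic partials of a simple additive tone near those formants. All names, pitch choices, formant values, and bandwidths here are illustrative assumptions rather than the study's actual table or synthesis method, and the rhythmic mapping based on syllables and stress is omitted.

    import numpy as np

    SR = 44100  # sample rate in Hz (assumed)

    # Hypothetical vowel lookup: vowel -> (f0, f1, f2) in Hz. The pitches and
    # formant values are rough textbook averages, not the study's actual table.
    VOWEL_MAP = {
        "a": (220.0, 730.0, 1090.0),
        "e": (247.0, 530.0, 1840.0),
        "i": (262.0, 270.0, 2290.0),
        "o": (294.0, 570.0, 840.0),
        "u": (330.0, 300.0, 870.0),
    }

    def hashtag_to_earcon(hashtag):
        """Return the (f0, f1, f2) triple for each vowel in a hashtag,
        forming the note sequence of the earcon."""
        return [VOWEL_MAP[ch] for ch in hashtag.lower() if ch in VOWEL_MAP]

    def vowel_tone(f0, f1, f2, dur=0.25):
        """Additive sketch of formant enhancement: a harmonic series on f0
        whose partials are boosted near the formants f1 and f2 (Gaussian
        boosts with assumed bandwidths of 100 and 150 Hz)."""
        t = np.arange(int(SR * dur)) / SR
        tone = np.zeros_like(t)
        for k in range(1, 20):
            fk = k * f0
            gain = 1.0 / k  # natural roll-off of the harmonic series
            gain += np.exp(-((fk - f1) ** 2) / (2 * 100.0 ** 2))
            gain += np.exp(-((fk - f2) ** 2) / (2 * 150.0 ** 2))
            tone += gain * np.sin(2 * np.pi * fk * t)
        return tone / np.max(np.abs(tone))

    # Example: the vowels o, u of "#WorldCup" yield a two-note earcon.
    notes = hashtag_to_earcon("#WorldCup")
    earcon = np.concatenate([vowel_tone(*n) for n in notes])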