Ljudskrapan/The Soundscraper: Sound exploration for children with complex needs, accommodating hearing aids and cochlear implants
KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID; KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics (Sound and Music Computing). ORCID iD: 0000-0003-4259-484X
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics. ORCID iD: 0000-0002-3086-0322
2011 (English). In: Proceedings of the 8th Sound and Music Computing Conference, SMC 2011 / [ed] Zanolla, Serena; Avanzini, Federico; Canazza, Sergio; de Götzen, Amalia. Sound and Music Computing Network, 2011, pp. 70-76. Conference paper, published paper (refereed).
Abstract [en]

This paper describes a system that supports active listening for persons with hearing aids or cochlear implants, with a special focus on children with complex needs, for instance children at an early stage of cognitive development or with additional physical disabilities. The system, called Ljudskrapan (the Soundscraper in English), consists of a software part written in Pure Data and a hardware part built on an Arduino microcontroller with a combination of sensors. Throughout both the software and the hardware development, one of the most important aspects was to ensure that the system remained flexible enough to cater for the very different conditions that characterise the intended user group. The Soundscraper has been tested with 25 children with good results. An increased attention span was reported, as well as surprising and positive reactions from children whose caregivers were unsure whether they could hear at all. The sound-generating models, the sensors and the parameter mapping were simple, but provided a controllable and sufficiently complex sound environment even with limited interaction.

Place, publisher, year, edition, pages
Sound and Music Computing Network, 2011, pp. 70-76.
Keyword [en]
Audition, Hardware, Active listening, Cognitive development, Generating models, Physical disability, Software and hardwares, Software parts, Sound environment, User groups
National Category
Computer Science; Psychology; Human Computer Interaction
Identifiers
URN: urn:nbn:se:kth:diva-52202
Scopus ID: 2-s2.0-84905165976
ISBN: 9788897385035 (print)
OAI: oai:DiVA.org:kth-52202
DiVA: diva2:465500
Conference
8th Sound and Music Computing Conference, SMC 2011, Padova, Italy, 6-9 July 2011
Projects
Ljudskrapan
Note

 QC 20120111. QC 20160115

Available from: 2011-12-14. Created: 2011-12-14. Last updated: 2016-01-15. Bibliographically approved.

Open Access in DiVA

No full text

Scopus

Authority records

Hansen, Kjetil Falkenberg; Bresin, Roberto
