Active Listening and Expressive Communication for Children with Hearing Loss Using Getatable Environments for Creativity
KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID; KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics (Sound and Music Computing). ORCID iD: 0000-0003-4259-484X
Riga Stradiņš University, Latvia.
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Music Acoustics. ORCID iD: 0000-0002-3086-0322
2012 (English). In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 41, no. 4, pp. 365-375. Article in journal (Refereed). Published.
Abstract [en]

This paper describes a system that supports active listening for persons with hearing aids or cochlear implants, with a special focus on children at an early stage of cognitive development and with additional physical disabilities. The proposed system, called the Soundscraper, consists of a software part written in Pure Data and a hardware part based on an Arduino microcontroller with a combination of sensors. Throughout both the software and hardware development, it was important to ensure that the system remained flexible enough to cater for the very different conditions that characterize the intended user group. The Soundscraper has been tested with 25 children, with good results. An increased attention span was reported, as well as positively surprising reactions from children whose caregivers were unsure whether they could hear at all. The sound synthesis methods, the gesture sensors and the employed parameter mapping were all simple, but they provided a controllable and sufficiently complex sound environment even with limited interaction. A possible future outcome of the application is the adoption of long-term analysis of sound preferences as opposed to traditional audiological investigations.
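The architecture sketched in the abstract (gesture sensors on an Arduino feeding a Pure Data synthesis patch) can be made concrete with a minimal example. The following Arduino C++ sketch is not the authors' Soundscraper implementation; the pin assignments, sensor choices, update rate, and message format are all assumptions made for illustration. It streams two analog sensor readings over serial as whitespace-separated ASCII lines, a form a Pure Data patch could parse (for example via the [comport] external).

```cpp
// Illustrative sketch only, NOT the Soundscraper code: read two analog
// gesture sensors and stream their raw values to a Pure Data patch.

const int SENSOR_A_PIN = A0;                // hypothetical flex/bend sensor
const int SENSOR_B_PIN = A1;                // hypothetical distance sensor
const unsigned long SEND_INTERVAL_MS = 20;  // about 50 updates per second

unsigned long lastSend = 0;

void setup() {
  Serial.begin(9600);  // baud rate must match the receiving Pd patch
}

void loop() {
  unsigned long now = millis();
  if (now - lastSend >= SEND_INTERVAL_MS) {
    lastSend = now;
    int a = analogRead(SENSOR_A_PIN);  // raw ADC value, 0..1023
    int b = analogRead(SENSOR_B_PIN);
    // One line per reading pair; the Pd patch can scale each value
    // and map it to a synthesis parameter.
    Serial.print(a);
    Serial.print(' ');
    Serial.println(b);
  }
}
```

On the Pure Data side, each incoming value would typically be normalized (for instance divided by 1023) and routed to a synthesis parameter; such a simple linear mapping is consistent with the paper's observation that simple mappings can still yield a controllable, sufficiently complex sound environment.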

Place, publisher, year, edition, pages
Stockholm: Taylor & Francis, 2012. Vol. 41, no. 4, pp. 365-375.
Keyword [en]
Cochlear Implant Users, Additional Disabilities, Music Perception, Recognition, Language, Future, Speech, Sound, Needs, Pitch
National Category
Computer Science; Media and Communication Technology; Human Computer Interaction
Identifiers
URN: urn:nbn:se:kth:diva-109389
DOI: 10.1080/09298215.2012.739626
ISI: 000312443400007
Scopus ID: 2-s2.0-84871128750
OAI: oai:DiVA.org:kth-109389
DiVA: diva2:581720
Note

QC 20150623. QC 20160116

Available from: 2013-01-02. Created: 2013-01-02. Last updated: 2017-12-06. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus

Authority records

Hansen, Kjetil Falkenberg; Bresin, Roberto
