1 - 9 of 9
  • 1. Arlinger, Stig
    et al.
    Nordqvist, Peter
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Öberg, Marie
    International Outcome Inventory for Hearing Aids: Data From a Large Swedish Quality Register Database (2017). In: American Journal of Audiology, ISSN 1059-0889, E-ISSN 1558-9137, Vol. 26, no. 3, pp. 443-450. Article in journal (Refereed)
    Abstract [en]

    Purpose: The purpose of this study was to analyze a database of completed International Outcome Inventory for Hearing Aids (IOI-HA) questionnaires obtained from over 100,000 clients fitted with new hearing aids in Sweden during the period of 2012-2016. Mean IOI-HA total scores were correlated with degree of hearing loss, unilateral versus bilateral fitting, first-time versus return clients, gender, and variation among dispensing clinics. The correlations with expectations, service quality, and technical functioning of the hearing aids were also analyzed. Method: Questionnaires containing the 7 IOI-HA items as well as questions concerning some additional issues were mailed to clients 3-6 months after fitting of new hearing aids. The questionnaires were returned to and analyzed by an independent research institute. Results: More than 100 dispensing clinics nationwide take part in this project. A response rate of 52.6% resulted in 106,631 data sets after excluding incomplete questionnaires. Forty-six percent of the responders were women, and 54% were men. The largest difference in mean score (0.66) was found for the IOI-HA item "use" between return clients and first-time users. Women reported significantly higher (better) scores for the item "impact on others" compared with men. The bilaterally fitted subgroup reported significantly higher scores for all 7 items compared with the unilaterally fitted subgroup. Experienced users produced higher scores on benefit and satisfaction items, whereas first-time users gave higher scores for residual problems. No correlation was found between mean IOI-HA total score and average hearing threshold level (pure-tone average [PTA]). Mean IOI-HA total scores were found to correlate significantly with perceived service quality of the dispensing center and with the technical functionality of the hearing aids. Conclusions: When comparing mean IOI-HA total scores from different studies or between groups, differences with regard to hearing aid experience, gender, and unilateral versus bilateral fitting have to be considered. No correlation was found between mean IOI-HA total score and degree of hearing loss in terms of PTA. Thus, PTA is not a reliable predictor of benefit and satisfaction of hearing aid provision as represented by the IOI-HA items. Identification of a specific lower fence in PTA for hearing aid candidacy is therefore to be avoided. Large differences were found in mean IOI-HA total scores related to different dispensing centers.
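
The subgroup comparisons and the PTA correlation described in this abstract amount to straightforward grouped statistics over the register data. The sketch below only illustrates that kind of analysis; the column names (ioi_total, pta, experience, fitting, gender, clinic) are hypothetical placeholders, not the register's actual field names.

```python
# Illustrative analysis of IOI-HA register data (hypothetical column names).
import pandas as pd

def summarize(df: pd.DataFrame) -> None:
    """Assumed columns: ioi_total, pta, experience, fitting, gender, clinic."""
    # Mean IOI-HA total score per subgroup (first-time vs. return, uni- vs. bilateral).
    print(df.groupby("experience")["ioi_total"].mean())
    print(df.groupby("fitting")["ioi_total"].mean())
    # Correlation between total score and pure-tone average (PTA).
    print("r(ioi_total, pta) =", df["ioi_total"].corr(df["pta"]))
    # Variation among dispensing clinics.
    print(df.groupby("clinic")["ioi_total"].mean().describe())
```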

  • 2.
    Beskow, Jonas
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication.
    Engwall, Olov
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication.
    Granström, Björn
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication.
    Nordqvist, Peter
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication.
    Wik, Preben
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication.
    Visualization of speech and audio for hearing-impaired persons (2008). In: Technology and Disability, ISSN 1055-4181, Vol. 20, no. 2, pp. 97-107. Article in journal (Refereed)
    Abstract [en]

    Speech and sounds are important sources of information in our everyday lives for communication with our environment, be it interacting with fellow humans or directing our attention to technical devices with sound signals. For hearing impaired persons this acoustic information must be supplemented or even replaced by cues using other senses. We believe that the most natural modality to use is the visual, since speech is fundamentally audiovisual and these two modalities are complementary. We are hence exploring how different visualization methods for speech and audio signals may support hearing impaired persons. The goal in this line of research is to allow the growing number of hearing impaired persons, children as well as the middle-aged and elderly, equal participation in communication. A number of visualization techniques are proposed and exemplified with applications for hearing impaired persons.

  • 3.
    Beskow, Jonas
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication.
    Granström, Björn
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication.
    Nordqvist, Peter
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT.
    Al Moubayed, Samer
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication.
    Salvi, Giampiero
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication.
    Herzke, Tobias
    Schulz, Arne
    Hearing at Home: Communication support in home environments for hearing impaired persons (2008). In: INTERSPEECH 2008: 9th Annual Conference of the International Speech Communication Association 2008, Baixas: ISCA-INST SPEECH COMMUNICATION ASSOC, 2008, pp. 2203-2206. Conference paper (Refereed)
    Abstract [en]

    The Hearing at Home (HaH) project focuses on the needs of hearing-impaired people in home environments. The project is researching and developing an innovative media-center solution for hearing support, with several integrated features that support perception of speech and audio, such as individual loudness amplification, noise reduction, audio classification and event detection, and the possibility to display an animated talking head providing real-time speechreading support. In this paper we provide a brief project overview and then describe some recent results related to the audio classifier and the talking head. As the talking head expects clean speech input, an audio classifier has been developed for the task of classifying audio signals as clean speech, speech in noise or other. The mean accuracy of the classifier was 82%. The talking head (based on technology from the SynFace project) has been adapted for German, and a small speech-in-noise intelligibility experiment was conducted where sentence recognition rates increased from 3% to 17% when the talking head was present.
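
The abstract does not spell out how the audio classifier works, so the sketch below only illustrates the general idea of scoring a signal against per-class statistical models (here Gaussian mixtures over two simple frame features); it is an assumption for illustration, not the Hearing at Home implementation.

```python
# Minimal 3-way audio classifier sketch (clean speech / speech in noise / other).
# Illustrative only; not the Hearing at Home classifier.
import numpy as np
from scipy.signal import stft
from sklearn.mixture import GaussianMixture

def frame_features(x, fs, nperseg=512):
    f, _, Z = stft(x, fs=fs, nperseg=nperseg)
    P = np.abs(Z) ** 2 + 1e-12
    log_energy = np.log(P.sum(axis=0))                         # per-frame log energy
    centroid = (f[:, None] * P).sum(axis=0) / P.sum(axis=0)    # spectral centroid
    return np.column_stack([log_energy, centroid])

def train(examples):
    """examples: {label: list of (signal, fs)} with labels such as 'clean', 'noisy', 'other'."""
    return {label: GaussianMixture(n_components=4).fit(
                np.vstack([frame_features(x, fs) for x, fs in clips]))
            for label, clips in examples.items()}

def classify(models, x, fs):
    feats = frame_features(x, fs)
    # Pick the class whose model gives the highest average frame log-likelihood.
    return max(models, key=lambda label: models[label].score(feats))
```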

  • 4.
    Nordqvist, Peter
    KTH, Former Departments, Signals, Sensors and Systems.
    Sound Classification in Hearing Instruments (2004). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    A variety of algorithms intended for the new generation of hearing aids is presented in this thesis. The main contribution of this work is the hidden Markov model (HMM) approach to classifying listening environments. This method is efficient and robust and well suited for hearing aid applications. This thesis shows that several advanced classification methods can be implemented in digital hearing aids with reasonable requirements on memory and calculation resources.
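
At the core of such a classifier, each listening environment is represented by a pre-trained HMM, and the environment whose model assigns the highest likelihood to the observed feature sequence is selected. The sketch below is a generic, textbook forward algorithm for discrete-observation HMMs, shown only to make the idea concrete; it is not the thesis implementation.

```python
# Generic scaled forward algorithm for a discrete-observation HMM (textbook version).
import numpy as np

def log_likelihood(obs, pi, A, B):
    """obs: sequence of symbol indices; pi: initial state probabilities (N,);
    A: state transition matrix (N, N); B: emission probabilities (N, M)."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()              # scaling avoids numerical underflow
        log_p += np.log(s)
        alpha /= s
    return log_p

# Classification: the pre-trained environment model with the highest likelihood wins, e.g.
# best_env = max(models, key=lambda env: log_likelihood(obs, *models[env]))
```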

    A method for analyzing complex hearing aid algorithms is presented. Data from each hearing aid and listening environment is displayed in three different forms: (1) Effective temporal characteristics (Gain-Time), (2) Effective compression characteristics (Input-Output), and (3) Effective frequency response (Insertion Gain). The method works as intended. Changes in the behavior of a hearing aid can be seen under realistic listening conditions. It is possible that the proposed method of analyzing hearing instruments generates too much information for the user.
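
The three views can be derived directly from simultaneously recorded input and output signals: frame-by-frame level differences give the Gain-Time view, paired input and output levels give the effective compression curve, and long-term spectra give the effective frequency response. The sketch below shows one plausible way to compute them; the frame length and analysis settings are assumptions, not the thesis's exact parameters.

```python
# Sketch: effective gain analysis from simultaneously recorded input and output.
# Frame length and spectral settings are assumed values.
import numpy as np
from scipy.signal import welch

def frame_levels_db(x, fs, frame_ms=10):
    n = int(fs * frame_ms / 1000)
    frames = x[: (len(x) // n) * n].reshape(-1, n)
    return 20 * np.log10(np.sqrt((frames ** 2).mean(axis=1)) + 1e-12)

def gain_time(inp, out, fs):
    """(1) Effective temporal characteristics: gain in dB per frame."""
    return frame_levels_db(out, fs) - frame_levels_db(inp, fs)

def input_output(inp, out, fs):
    """(2) Effective compression characteristics: paired input/output levels per frame."""
    return frame_levels_db(inp, fs), frame_levels_db(out, fs)

def effective_frequency_response(inp, out, fs):
    """(3) Effective frequency response: long-term output spectrum minus input spectrum."""
    f, Pi = welch(inp, fs=fs, nperseg=1024)
    _, Po = welch(out, fs=fs, nperseg=1024)
    return f, 10 * np.log10((Po + 1e-12) / (Pi + 1e-12))
```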

    An automatic gain control (AGC) hearing aid algorithm adapting to two sound sources in the listening environment is presented. The main idea of this algorithm is to: (1) adapt slowly (in approximately 10 seconds) to varying listening environments, e.g. when the user leaves a disciplined conference for a multi-babble coffee-break; (2) switch rapidly (in about 100 ms) between different dominant sound sources within one listening situation, such as the change from the user's own voice to a distant speaker's voice in a quiet conference room; (3) instantly reduce gain for strong transient sounds and then quickly return to the previous gain setting; and (4) not change the gain in silent pauses but instead keep the gain setting of the previous sound source. An acoustic evaluation shows that the algorithm works as intended.

    A system for listening environment classification in hearing aids is also presented. The task is to automatically classify three different listening environments: 'speech in quiet', 'speech in traffic', and 'speech in babble'. The study shows that the three listening environments can be robustly classified at a variety of signal-to-noise ratios with only a small set of pre-trained source HMMs. The measured classification hit rate was 96.7-99.5% when the classifier was tested with sounds representing one of the three environment categories included in the classifier. False alarm rates were 0.2-1.7% in these tests. The study also shows that the system can be implemented with the available resources in today's digital hearing aids. Another implementation of the classifier shows that it is possible to automatically detect when the person wearing the hearing aid uses the telephone. It is demonstrated that future hearing aids may be able to distinguish between the sound of a face-to-face conversation and a telephone conversation, both in noisy and quiet surroundings. However, this classification algorithm alone may not be fast enough to prevent initial feedback problems when the user places the telephone handset at the ear.

    A method using the classifier result for estimating signal and noise spectra for different listening environments is presented. This evaluation shows that it is possible to robustly estimate signal and noise spectra given that the classifier has good performance.
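
One simple way to turn classifier output into spectrum estimates is to update a signal estimate in frames the classifier judges speech-dominated and a noise estimate otherwise. The recursive estimator below is only an illustration of that idea, with an assumed smoothing constant; it is not the estimator evaluated in the thesis.

```python
# Classifier-gated recursive spectrum estimation (illustrative; assumed smoothing constant).
import numpy as np

class SpectrumEstimator:
    def __init__(self, nbins, alpha=0.98):
        self.alpha = alpha                 # smoothing constant (assumption)
        self.signal = np.zeros(nbins)
        self.noise = np.zeros(nbins)

    def update(self, frame_power, speech_dominant: bool):
        """frame_power: power spectrum of the current frame;
        speech_dominant: decision derived from the environment classifier."""
        a = self.alpha
        if speech_dominant:
            self.signal = a * self.signal + (1 - a) * frame_power
        else:
            self.noise = a * self.noise + (1 - a) * frame_power
        return self.signal, self.noise
```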

    An implementation and an evaluation of a single keyword recognizer for a hearing instrument are presented. The best parameter setting gives a false alarm rate of 7e-5 [1/s], i.e. one false alarm for every four hours of continuous speech from the user, a 100% hit rate in a quiet indoor environment, a 71% hit rate in an outdoors/traffic environment, and a 50% hit rate in a babble noise environment. The memory resource needed for the implemented system is estimated at 1820 16-bit words. Optimization of the algorithm together with improved technology will inevitably make it possible to implement the system in a digital hearing aid within the next couple of years. A solution to extend the number of keywords and integrate the system with a sound environment classifier is also outlined.

  • 5.
    Nordqvist, Peter
    KTH, Former Departments, Signals, Sensors and Systems.
    The behaviour of non-linear (WDRC) hearing instruments under realistic simulated listening conditions (2000). Report (Other academic)
    Abstract [en]

    This work attempts to illustrate some important practical consequences of the characteristics of nonlinear wide dynamic range compression (WDRC) hearing instruments in common conversational listening situations. The corresponding input and output signals are recorded simultaneously, using test signals consisting of conversation between a hearing aid wearer and a non-hearing-aid wearer in two different listening situations: quiet and outdoors in fluctuating traffic noise. The effective insertion gain frequency response is displayed for each of the two voice sources in each of the simulated listening situations. The effective compression is also illustrated, showing the gain adaptation between two alternating voice sources and the slow adaptation to changing overall acoustic conditions. These nonlinear effects are exemplified using four commercially available hearing instruments. Three of the hearing aids are digital and one is analogue.

  • 6.
    Nordqvist, Peter
    et al.
    KTH, Former Departments, Speech, Music and Hearing.
    Leijon, Arne
    KTH, Former Departments, Speech, Music and Hearing.
    An efficient robust sound classification algorithm for hearing aids (2004). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 115, no. 6, pp. 3033-3041. Article in journal (Refereed)
    Abstract [en]

    An efficient robust sound classification algorithm based on hidden Markov models is presented. The system would enable a hearing aid to automatically change its behavior for differing listening environments according to the user's preferences. This work attempts to distinguish between three listening environment categories: speech in traffic noise, speech in babble, and clean speech, regardless of the signal-to-noise ratio. The classifier uses only the modulation characteristics of the signal. The classifier ignores the absolute sound pressure level and the absolute spectrum shape, resulting in an algorithm that is robust against irrelevant acoustic variations. The measured classification hit rate was 96.7%-99.5% when the classifier was tested with sounds representing one of the three environment categories included in the classifier. False-alarm rates were 0.2%-1.7% in these tests. The algorithm is robust and efficient and consumes few instructions and little memory. It is fully possible to implement the classifier in a DSP-based hearing instrument.
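
The key property described here is invariance to absolute level and spectrum shape: only the modulation characteristics of the signal are used. One way to obtain such features is to normalize the band envelope by its mean before computing a low-frequency modulation spectrum, as in the sketch below; the band limits and analysis settings are assumptions, not the paper's exact feature set.

```python
# Level-invariant envelope-modulation features (illustrative; assumed band and settings).
import numpy as np
from scipy.signal import butter, sosfilt, hilbert, welch

def modulation_features(x, fs, band=(500.0, 4000.0)):
    # Band-pass the signal and extract its envelope.
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    env = np.abs(hilbert(sosfilt(sos, x)))
    # Normalizing by the mean envelope removes the absolute sound pressure level.
    env /= env.mean() + 1e-12
    # Low-frequency modulation spectrum of the normalized envelope (roughly 0-32 Hz).
    f, P = welch(env - env.mean(), fs=fs, nperseg=int(fs))
    keep = f <= 32.0
    return f[keep], P[keep]
```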

  • 7.
    Nordqvist, Peter
    et al.
    KTH, Former Departments, Signals, Sensors and Systems.
    Leijon, Arne
    KTH, Former Departments, Signals, Sensors and Systems.
    Automatic classification of the telephone listening environment in a hearing aid (2002). In: Trita-TMH / Royal Institute of Technology, Speech, Music and Hearing, ISSN 1104-5787, Vol. 43, no. 1, pp. 45-49. Article in journal (Refereed)
    Abstract [en]

    An algorithm is developed for automatic classification of the telephone-listening environment in a hearing instrument. The system would enable the hearing aid to automatically change its behavior when it is used for a telephone conversation (e.g., decrease the amplification in the hearing aid, or adapt the feedback suppression algorithm for reflections from the telephone handset). Two listening environments are included in the classifier. The first is a telephone conversation in quiet or in traffic noise and the second is a face-to-face conversation in quiet or in traffic. Each listening environment is modeled with two or three discrete Hidden Markov Models. The probabilities for the different listening environments are calculated with the forward algorithm for each frame of the input sound, and are compared with each other in order to detect the telephone-listening environment. The results indicate that the classifier can distinguish between the two listening environments used in the test material: telephone conversation and face-to-face conversation.
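
The per-frame forward probabilities of the two model sets can be combined into a single smoothed decision variable, for example a forgetting-factor sum of log-likelihood differences with a hysteresis margin to avoid rapid toggling. The detector below is a sketch of that decision logic only, with assumed constants; it is not the published classifier.

```python
# Smoothed log-likelihood comparison for telephone detection (assumed constants).
class TelephoneDetector:
    def __init__(self, forget=0.95, margin=2.0):
        self.forget = forget      # per-frame forgetting factor (assumption)
        self.margin = margin      # hysteresis margin in log-likelihood units (assumption)
        self.score = 0.0          # running (telephone - face-to-face) evidence
        self.on_phone = False

    def update(self, ll_telephone: float, ll_face_to_face: float) -> bool:
        """Feed per-frame log-likelihoods from the two HMM sets (forward algorithm)."""
        self.score = self.forget * self.score + (ll_telephone - ll_face_to_face)
        if not self.on_phone and self.score > self.margin:
            self.on_phone = True      # e.g. lower gain, adapt feedback suppression
        elif self.on_phone and self.score < -self.margin:
            self.on_phone = False
        return self.on_phone
```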

  • 8.
    Nordqvist, Peter
    et al.
    KTH, Former Departments, Speech, Music and Hearing.
    Leijon, Arne
    KTH, Former Departments, Speech, Music and Hearing.
    Hearing-aid automatic gain control adapting to two sound sources in the environment, using three time constants (2004). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 116, no. 5, pp. 3152-3155. Article in journal (Refereed)
    Abstract [en]

    A hearing aid AGC algorithm is presented that uses a richer representation of the sound environment than previous algorithms. The proposed algorithm is designed to (1) adapt slowly (in approximately 10 s) between different listening environments, e.g., when the user leaves a single-talker lecture for a multi-babble coffee-break; (2) switch rapidly (about 100 ms) between different dominant sound sources within one listening situation, such as the change from the user's own voice to a distant speaker's voice in a quiet conference room; (3) instantly reduce gain for strong transient sounds and then quickly return to the previous gain setting; and (4) not change the gain in silent pauses but instead keep the gain setting of the previous sound source. An acoustic evaluation showed that the algorithm worked as intended. The algorithm was evaluated together with a reference algorithm in a pilot field test. When evaluated by nine users in a set of speech recognition tests, the algorithm showed similar results to the reference algorithm.
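
The behaviour listed above can be approximated with level trackers on two time scales plus an instantaneous limiter and a gain hold in pauses. The sketch below is a crude illustration with assumed constants; it does not reproduce the published algorithm's rapid switching between two tracked sound sources.

```python
# Crude multi-time-constant AGC sketch (assumed constants; not the published algorithm).
import numpy as np

def agc(x, fs, target_db=-20.0, fast_ms=100.0, slow_s=10.0, silence_db=-55.0, limit_db=-3.0):
    x = np.asarray(x, dtype=float)
    a_fast = np.exp(-1.0 / (fs * fast_ms / 1000.0))
    a_slow = np.exp(-1.0 / (fs * slow_s))
    fast = slow = 1e-6
    gain = 1.0
    y = np.empty_like(x)
    for n, s in enumerate(x):
        fast = a_fast * fast + (1 - a_fast) * s * s        # fast level tracker (~100 ms)
        if 10 * np.log10(fast + 1e-12) > silence_db:       # adapt only when a source is active
            slow = a_slow * slow + (1 - a_slow) * fast     # slow environment tracker (~10 s)
            gain = 10 ** ((target_db - 10 * np.log10(slow + 1e-12)) / 20.0)
        # In silent pauses the previous gain is kept (no update above).
        out = gain * s
        if 20 * np.log10(abs(out) + 1e-12) > limit_db:     # instant limiting of strong transients
            out = np.sign(out) * 10 ** (limit_db / 20.0)
        y[n] = out
    return y
```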

  • 9.
    Nordqvist, Peter
    et al.
    KTH, Former Departments, Signals, Sensors and Systems.
    Leijon, Arne
    KTH, Former Departments, Signals, Sensors and Systems.
    Speech Recognition in Hearing Aids (2004). In: EURASIP Journal on Wireless Communications and Networking, ISSN 1687-1472, E-ISSN 1687-1499. Article in journal (Other academic)