Segment boundaries in low latency phonetic recognition
Salvi, Giampiero (KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH). ORCID iD: 0000-0002-3323-5311
2005 (English). In: Nonlinear Analyses and Algorithms for Speech Processing / [ed] Faundez Zanuy M; Janer L; Esposito A; Satue Villar A; Roure J; Espinosa Duro V, 2005, Vol. 3817, 267-276 p. Conference paper, Published paper (Refereed)
Abstract [en]

The segment boundaries produced by the Synface low latency phoneme recogniser are analysed. The precision in placing the boundaries is an important factor in the Synface system, as the aim is to drive the lip movements of a synthetic face for lip-reading support. The recogniser is based on a hybrid of recurrent neural networks and hidden Markov models. In this paper we analyse how the look-ahead length in the Viterbi-like decoder affects the precision of boundary placement. The properties of the entropy of the posterior probabilities estimated by the neural network are also investigated in relation to the distance of the frame from a phonetic transition.
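As a rough illustration of the entropy measure mentioned above, the sketch below computes the per-frame entropy of phoneme posteriors; the array shapes and the function name are assumptions for illustration, not taken from the paper.

    import numpy as np

    def frame_entropy(posteriors, eps=1e-12):
        """Entropy (in bits) of each frame's posterior distribution.

        posteriors: array of shape (n_frames, n_phonemes), rows summing to 1.
        Entropy tends to be low inside stable segments and to rise near
        phonetic transitions, which is the behaviour analysed in the paper.
        """
        p = np.clip(posteriors, eps, 1.0)
        return -np.sum(p * np.log2(p), axis=1)

    # Toy usage: three frames over four phoneme classes
    post = np.array([[0.97, 0.01, 0.01, 0.01],   # confident frame
                     [0.40, 0.35, 0.15, 0.10],   # less certain, near a transition
                     [0.25, 0.25, 0.25, 0.25]])  # maximally uncertain
    print(frame_entropy(post))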

Place, publisher, year, edition, pages
2005. Vol. 3817, 267-276 p.
Series
Lecture Notes in Artificial Intelligence, ISSN 0302-9743 ; 3817
National Category
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-6152
ISI: 000235839300023
Scopus ID: 2-s2.0-33745452923
ISBN: 3-540-31257-9 (print)
OAI: oai:DiVA.org:kth-6152
DiVA: diva2:10781
Conference
International Conference on Non-Linear Speech Processing Barcelona, SPAIN, APR 19-22, 2005
Note

QC 20100630

Available from: 2006-09-21. Created: 2006-09-21. Last updated: 2015-08-03. Bibliographically approved.
In thesis
1. Mining Speech Sounds: Machine Learning Methods for Automatic Speech Recognition and Analysis
2006 (English). Doctoral thesis, comprehensive summary (Other scientific)
Abstract [en]

This thesis collects studies on machine learning methods applied to speech technology and speech research problems. The six research papers included in this thesis are organised in three main areas.

The first group of studies was carried out within the European project Synface. The aim was to develop a low latency phonetic recogniser to drive the articulatory movements of a computer generated virtual face from the acoustic speech signal. The visual information provided by the face is used as a hearing aid for persons using the telephone.

Paper A compares two solutions to the problem of mapping acoustic to visual information that are based on regression and classification techniques. Recurrent Neural Networks are used to perform regression, while Hidden Markov Models are used for the classification task. In the second case the visual information needed to drive the synthetic face is obtained by interpolation between target values for each acoustic class. The evaluation is based on listening tests with hearing-impaired subjects, where the intelligibility of sentence material is compared in different conditions: audio alone, audio and natural face, and audio and synthetic face driven by the different methods.
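To make the classification-plus-interpolation idea in Paper A concrete, here is a minimal sketch assuming linear interpolation between per-class visual targets placed at segment midpoints; the parameter names and target values are hypothetical, not the actual Synface face parameterisation.

    import numpy as np

    # Hypothetical visual targets (e.g. jaw opening, lip rounding) per acoustic class.
    targets = {"sil": np.array([0.0, 0.0]),
               "a":   np.array([0.9, 0.2]),
               "o":   np.array([0.5, 0.8])}

    def interpolate_track(labels, midpoints, n_frames):
        """Piecewise-linear face-parameter track between per-class targets.

        labels: recognised class per segment, midpoints: frame index of each
        segment centre (increasing), n_frames: length of the output track.
        """
        values = np.stack([targets[l] for l in labels])
        frames = np.arange(n_frames)
        return np.stack([np.interp(frames, midpoints, values[:, d])
                         for d in range(values.shape[1])], axis=1)

    # Toy usage: sil -> a -> o over 30 frames
    print(interpolate_track(["sil", "a", "o"], [2.0, 12.0, 25.0], 30).shape)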

Paper B analyses the behaviour, in low latency conditions, of a phonetic recogniser based on a hybrid of Recurrent Neural Networks (RNNs) and Hidden Markov Models (HMMs). The focus is on the interaction between the time evolution model learnt by the RNNs and the one imposed by the HMMs.
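A minimal sketch of the low-latency decoding idea: a Viterbi-style pass that commits to the label of frame t - L as soon as frame t has been processed (partial traceback with look-ahead L). The transition matrix, the commitment rule, and the use of posteriors directly as emission scores are illustrative assumptions, not the exact Synface decoder.

    import numpy as np

    def low_latency_viterbi(log_post, log_trans, lookahead):
        """Viterbi-like decoding with bounded output latency (lookahead >= 1).

        log_post:  (T, N) frame log posteriors from the network.
        log_trans: (N, N) log transition probabilities between states.
        After frame t is processed, the state of frame t - lookahead is fixed
        by tracing back from the currently best hypothesis.
        """
        T, N = log_post.shape
        delta = log_post[0].copy()               # best score ending in each state
        backptr = np.zeros((T, N), dtype=int)    # best predecessor per frame/state
        out = []

        def commit(t_commit, t_now):
            s = int(np.argmax(delta))            # best state at the current frame
            for t in range(t_now, t_commit, -1): # walk back to the frame to commit
                s = backptr[t, s]
            return s

        for t in range(1, T):
            scores = delta[:, None] + log_trans             # (from_state, to_state)
            backptr[t] = np.argmax(scores, axis=0)
            delta = scores[backptr[t], np.arange(N)] + log_post[t]
            if t >= lookahead:
                out.append(commit(t - lookahead, t))
        # flush the last `lookahead` frames with a full traceback
        s = int(np.argmax(delta))
        tail = [s]
        for t in range(T - 1, T - lookahead, -1):
            s = backptr[t, s]
            tail.append(s)
        out.extend(reversed(tail))
        return out

    # Toy usage: 20 frames, 3 states, flat transitions, random posteriors
    rng = np.random.default_rng(0)
    lp = np.log(rng.dirichlet(np.ones(3), size=20))
    lt = np.log(np.full((3, 3), 1.0 / 3.0))
    print(low_latency_viterbi(lp, lt, lookahead=4))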

Paper C investigates the possibility of using the entropy of the posterior probabilities estimated by a phoneme classification neural network as a feature for phonetic boundary detection. The entropy and its time evolution are analysed with respect to the identity of the phonetic segment and the distance from a reference phonetic boundary.

In the second group of studies, the aim was to provide tools for analysing large amounts of speech data in order to study geographical variations in pronunciation (accent analysis).

Paper D and Paper E use Hidden Markov Models and Agglomerative Hierarchical Clustering to analyse a data set of about 100 million data points (5000 speakers, 270 hours of speech recordings). In Paper E, Linear Discriminant Analysis was used to determine the features that most concisely describe the groupings obtained with the clustering procedure.
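A minimal sketch of the analysis pipeline described above, on toy data: agglomerative hierarchical clustering of per-speaker feature vectors followed by Linear Discriminant Analysis on the resulting groups. SciPy and scikit-learn stand in here for whatever tooling the papers actually used, and the feature dimensionality and cluster count are illustrative assumptions.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Toy per-speaker feature vectors; in the papers these would be derived
    # from HMM-based models trained on hundreds of hours of speech.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0.0, 1.0, (50, 8)),
                   rng.normal(3.0, 1.0, (50, 8))])

    # Agglomerative hierarchical clustering (Ward linkage as an example).
    Z = linkage(X, method="ward")
    labels = fcluster(Z, t=2, criterion="maxclust")

    # LDA to find the feature directions that best separate the clusters.
    lda = LinearDiscriminantAnalysis(n_components=1).fit(X, labels)
    print("most discriminative direction:", lda.scalings_[:, 0].round(2))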

The third group comprises studies carried out within the international project MILLE (Modelling Language Learning), which aims at investigating and modelling the language acquisition process in infants.

Paper F proposes the use of an incremental form of Model Based Clustering to describe the unsupervised emergence of phonetic classes in the first stages of language acquisition. The experiments were carried out on child-directed speech expressly collected for the purposes of the project.
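As a rough sketch of what incremental (online) clustering can look like, the toy scheme below either updates the nearest existing component with a running mean or spawns a new component when a frame lies too far away. This is a generic illustration under an assumed distance threshold, not the specific Model Based Clustering algorithm used in Paper F.

    import numpy as np

    class IncrementalClusters:
        """Toy online clustering: each incoming frame either updates the
        nearest component (running mean) or creates a new component."""

        def __init__(self, threshold=1.5):
            self.threshold = threshold
            self.means, self.counts = [], []

        def update(self, x):
            x = np.asarray(x, dtype=float)
            if self.means:
                dists = [np.linalg.norm(x - m) for m in self.means]
                k = int(np.argmin(dists))
                if dists[k] < self.threshold:
                    self.counts[k] += 1
                    self.means[k] += (x - self.means[k]) / self.counts[k]
                    return k
            self.means.append(x.copy())
            self.counts.append(1)
            return len(self.means) - 1

    # Toy usage: two well-separated regions in a 2-D feature space
    rng = np.random.default_rng(2)
    frames = np.vstack([rng.normal([0.0, 0.0], 0.3, (30, 2)),
                        rng.normal([4.0, 4.0], 0.3, (30, 2))])
    model = IncrementalClusters(threshold=1.5)
    assignments = [model.update(f) for f in frames]
    print("components discovered:", len(model.means))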

Place, publisher, year, edition, pages
Stockholm: KTH, 2006. xix, 87 p.
Series
Trita-CSC-A, ISSN 1653-5723 ; 2006:12
Keyword
speech, machine learning, data mining, signal processing
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-4111 (URN)
91-7178-446-2 (ISBN)
Public defence
2006-10-06, F3, Sing Sing, Lindstedtsvägen 26, Stockholm, 13:00
Note
QC 20100630. Available from: 2006-09-21. Created: 2006-09-21. Last updated: 2010-06-30. Bibliographically approved.
