Estimation of the instantaneous pitch in speech
KTH, School of Electrical Engineering (EES), Sound and Image Processing.
2007 (English). In: IEEE Transactions on Audio, Speech and Language Processing, ISSN 1558-7916, Vol. 15, no. 3, pp. 813-822. Article in journal (Refereed). Published.
Abstract [en]

An accurate estimation of the pitch is essential for many speech processing applications, such as speech synthesis, speech coding, and speech enhancement. A widely used assumption in most common pitch estimation methods is that the pitch is constant over a segment of short duration. This assumption does not hold in reality and leads to inaccurate pitch estimates. In this paper, we present a method for continuous pitch estimation that is able to track fast changes. In the presented framework, the pitch is modeled by a B-spline expansion and optimized in a multistage procedure for increased robustness. The performance of the continuous optimization procedure is compared to state-of-the-art pitch estimation methods and is evaluated both for artificial speech-like signals with known pitch and for real speech signals. The experimental results show that our method yields more accurate pitch estimates than state-of-the-art methods.
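The B-spline expansion mentioned in the abstract can be illustrated with a minimal sketch: the continuous pitch track is written as a weighted sum of shifted B-spline basis functions, f0(t) = Σ_k c_k B(t/Δ − k). The coefficient values, knot spacing, and function names below are illustrative assumptions, not taken from the paper (which additionally optimizes the coefficients against a periodicity criterion in a multistage procedure).

```python
def cubic_bspline(x):
    """Uniform cubic B-spline basis function, supported on [-2, 2]."""
    ax = abs(x)
    if ax < 1.0:
        return (4.0 - 6.0 * ax ** 2 + 3.0 * ax ** 3) / 6.0
    if ax < 2.0:
        return (2.0 - ax) ** 3 / 6.0
    return 0.0

def pitch_track(t, coeffs, knot_spacing):
    """Instantaneous pitch f0(t) in Hz as a B-spline expansion."""
    return sum(c * cubic_bspline(t / knot_spacing - k)
               for k, c in enumerate(coeffs))

# Example: a slowly rising contour; at t = 0.05 s with 25 ms knots the
# active basis weights are (1/6, 4/6, 1/6), giving 120 Hz.
coeffs = [100.0, 110.0, 120.0, 130.0, 140.0]
f0 = pitch_track(0.05, coeffs, knot_spacing=0.025)  # 120.0 Hz
```

Because the basis functions are smooth and locally supported, the resulting pitch track is continuous and each coefficient only influences a short time span, which is what makes the representation suitable for tracking fast pitch changes.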

Place, publisher, year, edition, pages
2007. Vol. 15, no. 3, pp. 813-822.
Keyword [en]
instantaneous pitch, pitch estimation, pitch-synchronous processing, splines
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:kth:diva-9085
DOI: 10.1109/TASL.2006.885242
ISI: 000244318600007
Scopus ID: 2-s2.0-37649002185
OAI: oai:DiVA.org:kth-9085
DiVA: diva2:14645
Note
QC 20100914. Updated from Submitted to Published (20100914). Available from: 2006-02-10. Created: 2006-02-10. Last updated: 2010-09-14. Bibliographically approved.
In thesis
1. On Prosodic Modification of Speech
2006 (English). Licentiate thesis, comprehensive summary (Other scientific).
Abstract [en]

Prosodic modification has become of major theoretical and practical interest in the field of speech processing research over the last decades. Algorithms for time and pitch scaling are used both for speech modification and for speech synthesis. The thesis consists of an introduction providing an overview and discussion of existing techniques for time and pitch scaling and of three research papers in this area.

In paper A, a system for time synchronization of speech is presented. It performs an alignment of two utterances of the same sentence, where one of the utterances is modified in time scale so as to be synchronized with the other utterance. The system is based on Dynamic Time Warping (DTW) and the Waveform Similarity Overlap and Add (WSOLA) method, a technique for time scaling of speech signals. Papers B and C complement each other and present a novel speech representation system that facilitates both time and pitch scaling of speech signals. Paper B describes a method to warp a signal with time-varying pitch to a signal with constant pitch. For this, an accurate continuous pitch track is needed. The continuous pitch track is described as a B-spline expansion with coefficients that are selected to maximize a periodicity criterion. The warping to a constant pitch corresponds to the first stage of the system presented in paper C, which describes a two-stage transform that exploits long-term periodicity to obtain a sparse representation of speech. The new system facilitates a decomposition into voiced and unvoiced components.
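The DTW alignment underlying the synchronization system of paper A can be sketched as follows. This is a minimal illustration only: the feature vectors, the absolute-difference local cost, and the function name are assumptions for the example, and the WSOLA time-scaling step that the thesis pairs with DTW is omitted entirely.

```python
def dtw_cost(a, b):
    """Minimal accumulated DTW alignment cost between sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of the best alignment of a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])        # local distance
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # step both
    return D[n][m]

# Two "utterances" of the same contour at different speaking rates
# align perfectly: every repeated sample matches at zero local cost.
cost = dtw_cost([1, 2, 3], [1, 1, 2, 2, 3, 3])  # 0.0
```

Backtracking through D recovers the warping path, i.e. which frames of one utterance correspond to which frames of the other; that path is what a time-scaling method such as WSOLA would then realize on the waveform.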

Place, publisher, year, edition, pages
Stockholm: KTH, 2006. ix, 38 p.
Series
Trita-EE, ISSN 1653-5146 ; 2006:002
Identifiers
URN: urn:nbn:se:kth:diva-621
ISBN: 91-7178-267-2
Presentation
2006-02-20, seminar room S3, 3rd floor, Osquldas 10, Stockholm, 09:30
Note
QC 20101123. Available from: 2006-02-10. Created: 2006-02-10. Last updated: 2010-11-23. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus

Search in DiVA

By author/editor
Resch, Barbara; Nilsson, Mattias; Ekman, Anders; Kleijn, Bastiaan
By organisation
Sound and Image Processing
Engineering and Technology
