Effect of MPEG audio compression on HMM-based speech synthesis
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
2013 (English). In: Proceedings of the 14th Annual Conference of the International Speech Communication Association: Interspeech 2013. International Speech Communication Association (ISCA), 2013, 1062-1066 p. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, the effect of MPEG audio compression on HMM-based speech synthesis is studied. Speech signals are encoded with various compression rates and analyzed using the GlottHMM vocoder. Objective evaluation results show that the vocoder parameters start to degrade from encoding with bitrates of 32 kbit/s or less, which is also confirmed by the subjective evaluation of the vocoder analysis-synthesis quality. Experiments with HMM-based speech synthesis show that the subjective quality of a synthetic voice trained with 32 kbit/s speech is comparable to a voice trained with uncompressed speech, but lower bit rates induce clear degradation in quality.
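As a rough illustration of the kind of pipeline such an experiment requires (a minimal sketch, not the authors' actual setup), the code below round-trips a speech file through MP3 encoding at several bitrates before vocoder analysis. The file names, bitrate list, and use of ffmpeg are assumptions; the GlottHMM analysis step is external and only indicated by a placeholder comment.

```python
# Minimal sketch: MP3 round-trip at several bitrates prior to vocoder analysis.
# Assumes ffmpeg is on PATH and a mono recording "speech.wav" exists (hypothetical name);
# the GlottHMM analysis itself is not included here.
import subprocess
from pathlib import Path

BITRATES_KBPS = [16, 24, 32, 64, 128]   # hypothetical test conditions
source = Path("speech.wav")

for kbps in BITRATES_KBPS:
    mp3 = source.with_name(f"speech_{kbps}k.mp3")
    wav = source.with_name(f"speech_{kbps}k_decoded.wav")

    # Encode to MP3 at the target bitrate.
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(source), "-b:a", f"{kbps}k", str(mp3)],
        check=True,
    )
    # Decode back to PCM WAV so the vocoder sees a plain waveform.
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(mp3), str(wav)],
        check=True,
    )
    # Placeholder: run GlottHMM (or another vocoder) analysis on `wav`
    # and compare the extracted parameters against the uncompressed original.
```

Each decoded file can then be fed to the same analysis-synthesis chain as the uncompressed reference, so that any parameter degradation can be attributed to the compression condition alone.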

Place, publisher, year, edition, pages
2013. 1062-1066 p.
Series
Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, ISSN 2308-457X
Keyword [en]
GlottHMM, HMM, MP3, Speech synthesis, Audio signal processing, Motion Picture Experts Group standards, Vocoders, Analysis-synthesis, HMM-based speech synthesis, Objective evaluation, Subjective evaluations, Subjective quality, Quality control
National Category
Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:kth:diva-150864
Scopus ID: 2-s2.0-84906262154
OAI: oai:DiVA.org:kth-150864
DiVA: diva2:745851
Conference
14th Annual Conference of the International Speech Communication Association, INTERSPEECH 2013, 25 August 2013 through 29 August 2013, Lyon, France
Note

QC 20140911

Available from: 2014-09-11. Created: 2014-09-11. Last updated: 2016-12-12. Bibliographically approved
In thesis
1. Towards conversational speech synthesis: Experiments with data quality, prosody modification, and non-verbal signals
2017 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

The aim of a text-to-speech synthesis (TTS) system is to generate a human-like speech waveform from a given input text. Current TTS systems have already reached a high degree of intelligibility, and they can readily be used to read a given text aloud. For many applications, e.g. public address systems, a reading style is enough to convey the message. However, more recent applications, such as human-machine interaction and speech-to-speech translation, call for TTS systems to be increasingly human-like in their conversational style. The goal of this thesis is to address a few of the issues involved in building a conversational speech synthesis system.

First, we discuss issues involved in data collection for conversational speech synthesis. It is important that the data is of good quality and also contains conversational characteristics. In this direction we studied two methods: 1) harvesting the World Wide Web (WWW) for conversational speech corpora, and 2) imitation of natural conversations by professional actors. In the former method, we studied the effect of compression on the performance of TTS systems, since speech data available on the WWW is often in compressed form, mostly using standard compression techniques such as MPEG. Thus, in papers 1 and 2, we systematically studied the effect of MPEG compression on TTS systems. Results showed that synthesis quality is indeed affected by compression; however, the perceptual differences are significant only when the compression rate is less than 32 kbit/s. Even when natural conversational speech can be collected, it is not always suitable for training a TTS system due to problems involved in its production. Thus, in the latter method, we asked whether conversational speech can be imitated by professional actors in recording studios, and studied the speech characteristics of acted and read speech. Second, we asked whether a technique from the voice conversion field can be borrowed to convert read speech into conversational speech. In paper 3, we proposed a method to transform pitch contours using artificial neural networks; a small illustrative sketch of this kind of mapping is given after this abstract. Results indicated that neural networks are able to transform pitch values better than the traditional linear approach. Finally, we presented a study on laughter synthesis, since non-verbal sounds, particularly laughter, play a prominent role in human communication. In paper 4, we present an experimental comparison of state-of-the-art vocoders for the application of HMM-based laughter synthesis.
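The pitch-transformation idea can be illustrated with a small regression example (a sketch under stated assumptions, not the method of paper 3): given aligned pitch (F0) values from read and conversational renditions of the same material, fit both a linear mapping and a small feedforward neural network and compare their prediction errors. The synthetic contours, network size, and use of scikit-learn are all assumptions for illustration only.

```python
# Sketch: mapping read-speech F0 values to conversational-style F0 values,
# comparing a linear regression baseline with a small neural network.
# The aligned contours here are synthetic placeholders, not real data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical aligned F0 frames (Hz): read-speech input, conversational target.
f0_read = rng.uniform(80.0, 250.0, size=(2000, 1))
f0_conv = 0.8 * f0_read + 20.0 * np.sin(f0_read / 30.0) + rng.normal(0.0, 5.0, (2000, 1))

X_train, X_test = f0_read[:1500], f0_read[1500:]
y_train, y_test = f0_conv[:1500].ravel(), f0_conv[1500:].ravel()

# Linear baseline.
linear = LinearRegression().fit(X_train, y_train)

# Small feedforward network (one hidden layer) as the non-linear mapping.
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)

for name, model in [("linear", linear), ("mlp", mlp)]:
    rmse = np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2))
    print(f"{name} RMSE: {rmse:.2f} Hz")
```

Because the synthetic target has a non-linear component, the network should fit it more closely than the linear baseline, mirroring the comparison reported in the summary above; real experiments would of course use frame-aligned F0 extracted from parallel recordings rather than generated values.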

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2017. 39 p.
Series
TRITA-CSC-A, ISSN 1653-5723 ; 2017:04
Keyword
Speech synthesis, MPEG compression, Voice Conversion, Artificial Neural Networks, Laughter synthesis, HTS
National Category
Language Technology (Computational Linguistics)
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-198100 (URN)
978-91-7729-235-7 (ISBN)
Presentation
2017-01-19, Fantum, Lindstedtsvägen 24, 5tr, Stockholm, 15:00 (English)
Funder
Swedish Research Council, 2013-4935
Note

QC 20161213

Available from: 2016-12-19. Created: 2016-12-12. Last updated: 2016-12-19. Bibliographically approved

Open Access in DiVA

No full text

Scopus

Search in DiVA

By author/editor
Bollepalli, Bajibabu
By organisation
Speech, Music and Hearing, TMH
Language Technology (Computational Linguistics)
