On the Estimation of Differential Entropy from Data Located on Embedded Manifolds
KTH, School of Electrical Engineering (EES), Sound and Image Processing.
2007 (English). In: IEEE Transactions on Information Theory, ISSN 0018-9448, Vol. 53, no. 7, pp. 2330-2341. Article in journal (Refereed). Published.
Abstract [en]

Estimation of the differential entropy from observations of a random variable is of great importance for a wide range of signal processing applications such as source coding, pattern recognition, hypothesis testing, and blind source separation. In this paper, we present a method for estimation of the Shannon differential entropy that accounts for embedded manifolds. The method is based on high-rate quantization theory and forms an extension of the classical nearest-neighbor entropy estimator. The estimator is consistent in the mean-square sense, and an upper bound on its rate of convergence is given. Because of the close connection between compression and Shannon entropy, the proposed method has an advantage over methods estimating the Rényi entropy. Through experiments on uniformly distributed data on known manifolds and on real-world speech data, we show the accuracy and usefulness of our proposed method.
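
For concreteness, the classical estimator that the proposed method extends can be sketched as follows. This is only the standard ambient-space nearest-neighbor (Kozachenko-Leonenko-style) estimator, not the manifold-aware extension described in the paper; the function name nn_entropy, the use of NumPy/SciPy, and the default choice k = 1 are illustrative assumptions.

    # Minimal sketch of the classical nearest-neighbor differential-entropy
    # estimator (Kozachenko-Leonenko); NOT the manifold-aware method of the
    # paper. Names and library choices are illustrative assumptions.
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.special import digamma, gammaln


    def nn_entropy(samples: np.ndarray, k: int = 1) -> float:
        """Estimate the Shannon differential entropy (in nats) of i.i.d. samples.

        samples : (N, d) array of observations in the ambient space.
        k       : which nearest neighbor to use (k = 1 is the classical case).
        """
        n, d = samples.shape
        tree = cKDTree(samples)
        # Column 0 of the query result is the point itself, so ask for k + 1
        # neighbors and keep the distance to the k-th proper neighbor.
        dist, _ = tree.query(samples, k=k + 1)
        eps = dist[:, k]
        # Log-volume of the d-dimensional unit ball.
        log_c_d = (d / 2.0) * np.log(np.pi) - gammaln(d / 2.0 + 1.0)
        return digamma(n) - digamma(k) + log_c_d + d * np.mean(np.log(eps))


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        x = rng.standard_normal((5000, 2))            # 2-D standard Gaussian
        print(nn_entropy(x), "vs analytic", np.log(2 * np.pi * np.e))

When the observations in fact lie on a lower-dimensional manifold embedded in the d-dimensional space, this ambient-space estimate degenerates (the nearest-neighbor distances scale with the manifold dimension rather than with d), which is the situation the paper's extension is designed to handle.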

Place, publisher, year, edition, pages
2007. Vol. 53, no. 7, pp. 2330-2341.
Keyword [en]
convergence rate, manifolds, nearest-neighbor distance, Shannon differential entropy
National Category
Fluid Mechanics and Acoustics
Identifiers
URN: urn:nbn:se:kth:diva-5787
DOI: 10.1109/TIT.2007.899533
ISI: 000247606300002
Scopus ID: 2-s2.0-34447316139
OAI: oai:DiVA.org:kth-5787
DiVA: diva2:10282
Note
QC 20100914. Available from: 2006-05-23. Created: 2006-05-23. Last updated: 2010-09-14. Bibliographically approved.
In thesis
1. Entropy and Speech
2006 (English). Doctoral thesis, comprehensive summary (Other scientific).
Abstract [en]

In this thesis, we study the representation of speech signals and the estimation of information-theoretical measures from observations containing features of the speech signal. The main body of the thesis consists of four research papers.

Paper A presents a compact representation of the speech signal that facilitates perfect reconstruction. The representation consists of models, model parameters, and signal coefficients. A difference compared to existing speech representations is that we seek a compact representation by adapting the models so that the energy of the signal coefficients is maximally concentrated according to a selected energy-concentration criterion. The individual parts of the representation are closely related to speech signal properties such as spectral envelope, pitch, and voiced/unvoiced signal coefficients, which is beneficial for both speech coding and modification.

From the information-theoretical measure of entropy, performance limits in coding and classification can be derived. Papers B and C discuss the estimation of differential entropy. Paper B describes a method for estimating the differential entropies in the case when the set of vector observations (from the representation) lies on a lower-dimensional surface (manifold) in the embedding space. In contrast to the method presented in Paper B, Paper C introduces a method in which the manifold structures are destroyed by constraining the resolution of the observation space. This facilitates the estimation of bounds on classification error rates even when the manifolds are of varying dimensionality within the embedding space.
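
This record does not describe Paper C's construction in detail; purely as an illustration of the resolution-constraining idea, a quantized (resolution-limited) entropy estimate can be sketched using the standard high-rate relation h(X) ≈ H(quantized X) + d·log Δ for a uniform cell width Δ. The function name, the histogram plug-in estimate, and the choice of a uniform Δ are assumptions, not Paper C's method.

    # Illustrative only: a resolution-limited (quantized) entropy estimate,
    # not Paper C's actual method. Uses the high-rate approximation
    # h(X) ~ H(quantized X) + d * log(delta) for a uniform cell width delta.
    import numpy as np


    def quantized_entropy(samples: np.ndarray, delta: float) -> float:
        n, d = samples.shape
        # Index of the quantization cell that each sample falls into.
        cells = np.floor(samples / delta).astype(np.int64)
        _, counts = np.unique(cells, axis=0, return_counts=True)
        p = counts / n
        discrete_h = -np.sum(p * np.log(p))      # plug-in discrete entropy (nats)
        return discrete_h + d * np.log(delta)    # high-rate correction term

Fixing Δ at a finite value is what removes the sensitivity to manifold structure: the discrete entropy of the quantized observations stays finite even when the data are concentrated on surfaces of varying dimensionality.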

Finally, Paper D investigates the amount of shared information between spectral features of narrow-band (0.3-3.4 kHz) and high-band (3.4-8 kHz) speech. The results in Paper D indicate that the information shared between the high-band and the narrow-band is insufficient for high-quality wideband speech coding (0.3-8 kHz) without transmission of extra information describing the high-band.
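
As a hedged illustration of what "shared information" means here: mutual information can be written in terms of differential entropies, I(X;Y) = h(X) + h(Y) - h(X,Y), and could in principle be estimated with the nearest-neighbor sketch given above. The feature definitions and the estimator actually used in Paper D are not given in this record; the function and argument names below are placeholders.

    # Illustrative only: shared information via the identity
    # I(X;Y) = h(X) + h(Y) - h(X,Y), reusing the nn_entropy() sketch above.
    # nb_feats / hb_feats are hypothetical (N, d) arrays of narrow-band and
    # high-band spectral features; Paper D's actual features are not shown here.
    import numpy as np


    def nn_mutual_information(nb_feats: np.ndarray, hb_feats: np.ndarray,
                              k: int = 1) -> float:
        joint = np.hstack([nb_feats, hb_feats])
        return (nn_entropy(nb_feats, k) + nn_entropy(hb_feats, k)
                - nn_entropy(joint, k))

In practice, plugging in three separate entropy estimates like this can be noticeably biased, and joint nearest-neighbor mutual-information estimators (e.g. Kraskov, Stögbauer, and Grassberger) are usually preferred.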

Place, publisher, year, edition, pages
Stockholm: KTH, 2006. xii, 38 p.
Series
Trita-EE, ISSN 1653-5146 ; 2006:014
Keyword
speech representation, energy concentration, entropy estimation, manifolds
National Category
Fluid Mechanics and Acoustics
Identifiers
URN: urn:nbn:se:kth:diva-3990
ISBN: 91-628-6861-6
Public defence
2006-06-08, D3, Lindstedtsvägen 5, Stockholm, 14:00
Note
QC 20100914. Available from: 2006-05-23. Created: 2006-05-23. Last updated: 2010-09-14. Bibliographically approved.

Open Access in DiVA

No full text

Search in DiVA

By author/editor
Nilsson, Mattias; Kleijn, Bastiaan
By organisation
Sound and Image Processing
In the same journal
IEEE Transactions on Information Theory
