Conditional Noise-Contrastive Estimation of Unnormalised Models
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning, RPL; RWTH Aachen University, UMIC, Aachen, Germany; University of Edinburgh, Edinburgh, Midlothian, Scotland. ORCID iD: 0000-0002-8044-4773
University of Edinburgh, School of Informatics, Edinburgh, Midlothian, Scotland.
2018 (English) In: 35th International Conference on Machine Learning, ICML 2018 / [ed] Dy, J.; Krause, A., International Machine Learning Society (IMLS), 2018, Vol. 80, pp. 1334-1442. Conference paper, published paper (refereed)
Abstract [en]

Many parametric statistical models are not properly normalised and only specified up to an intractable partition function, which renders parameter estimation difficult. Examples of unnormalised models are Gibbs distributions, Markov random fields, and neural network models in unsupervised deep learning. In previous work, the estimation principle called noise-contrastive estimation (NCE) was introduced where unnormalised models are estimated by learning to distinguish between data and auxiliary noise. An open question is how to best choose the auxiliary noise distribution. We here propose a new method that addresses this issue. The proposed method shares with NCE the idea of formulating density estimation as a supervised learning problem but in contrast to NCE, the proposed method leverages the observed data when generating noise samples. The noise can thus be generated in a semi-automated manner. We first present the underlying theory of the new method, show that score matching emerges as a limiting case, validate the method on continuous and discrete valued synthetic data, and show that we can expect an improved performance compared to NCE when the data lie in a lower-dimensional manifold. Then we demonstrate its applicability in unsupervised deep learning by estimating a four-layer neural image model.
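To make the estimation principle above concrete, below is a minimal Python sketch of the conditional-noise idea under one simplifying assumption: the conditional noise is a symmetric Gaussian perturbation of the data (y = x + eps), in which case the log-ratio of the two conditional densities cancels, the classifier's log-odds reduce to the difference of unnormalised log-densities, and the intractable partition function drops out entirely. This is an illustrative sketch, not the paper's reference implementation; the toy model, the noise scale sigma, and names such as unnorm_logpdf are assumptions, and only one noise sample per data point is drawn.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy data: zero-mean Gaussian with true precision lam_true (std = 0.5).
lam_true = 4.0
x = rng.normal(0.0, 1.0 / np.sqrt(lam_true), size=5000)

# Unnormalised log-model: log phi(u; lam) = -0.5 * lam * u^2.
# The partition function is deliberately omitted; it cancels below.
def unnorm_logpdf(u, lam):
    return -0.5 * lam * u ** 2

# Conditional noise generated from the observed data: y = x + eps with
# symmetric Gaussian eps, so log p_c(x|y) - log p_c(y|x) = 0.
sigma = 0.5
y = x + rng.normal(0.0, sigma, size=x.shape)

def cnce_loss(log_lam):
    lam = np.exp(log_lam[0])  # positivity via reparameterisation
    # Log-odds for classifying the ordered pair (x, y) as "data first":
    # the unknown normalising constant cancels in this difference.
    g = unnorm_logpdf(x, lam) - unnorm_logpdf(y, lam)
    # Logistic loss, log(1 + exp(-g)), computed stably with logaddexp.
    return np.mean(np.logaddexp(0.0, -g))

res = minimize(cnce_loss, x0=np.array([0.0]), method="BFGS")
print("estimated precision:", np.exp(res.x[0]))  # close to 4 for large N

Because both log-density terms use the same model, the method cannot estimate the partition function itself, but the remaining parameters are recovered consistently; with large samples the printed precision should land near the true value of 4.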

Place, publisher, year, edition, pages
International Machine Learning Society (IMLS), 2018. Vol. 80, pp. 1334-1442
Series
Proceedings of Machine Learning Research, ISSN 2640-3498
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-318709
ISI: 000683379200075
Scopus ID: 2-s2.0-85057220926
OAI: oai:DiVA.org:kth-318709
DiVA, id: diva2:1698104
Conference
35th International Conference on Machine Learning (ICML), July 10-15, 2018, Stockholm, Sweden
Note

QC 20220922

Part of book: ISBN 978-1-5108-6796-3

Available from: 2022-09-22 Created: 2022-09-22 Last updated: 2023-09-22 Bibliographically approved

Open Access in DiVA

Full text is not available in DiVA

Scopus

Person

Ceylan, Ciwan
