Consistency Regularization Can Improve Robustness to Label Noise
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning, RPL.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0001-5211-6388
2021 (English). Conference paper, poster (with or without abstract) (Peer-reviewed)
Abstract [en]

Consistency regularization is a commonly used technique for semi-supervised and self-supervised learning. It is an auxiliary objective function that encourages the predictions of the network to be similar in the vicinity of the observed training samples. Hendrycks et al. (2020) have recently shown that such regularization naturally brings test-time robustness to corrupted data and helps with calibration. This paper empirically studies the relevance of consistency regularization for training-time robustness to noisy labels. First, we make two interesting and useful observations regarding the consistency of networks trained with the standard cross-entropy loss on noisy datasets: (i) networks trained on noisy data have lower consistency than those trained on clean data, and (ii) consistency degrades more severely around noisy-labelled training points than around correctly labelled ones. Then, we show that a simple loss function that encourages consistency improves the robustness of models to label noise on both synthetic (CIFAR-10, CIFAR-100) and real-world (WebVision) noise, across different noise rates and types, and achieves state-of-the-art results.
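The abstract describes combining a standard cross-entropy loss on (possibly noisy) labels with an auxiliary consistency term that penalizes disagreement between predictions on nearby inputs, e.g. an input and an augmented view of it. The paper's exact loss is not reproduced here; the following is a minimal NumPy sketch of the general idea, assuming a KL-divergence consistency term and a hypothetical weighting factor `lam` (both the function names and the choice of divergence are illustrative, not the authors' method):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean cross-entropy against integer class labels.
    p = softmax(logits)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

def kl_div(p, q):
    # Mean KL(p || q) over a batch of probability vectors.
    return (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1).mean()

def consistency_loss(logits, logits_aug, labels, lam=1.0):
    # Supervised CE on the (possibly noisy) labels, plus a consistency
    # term penalizing divergence between predictions on a sample and
    # on its augmented/perturbed view. `lam` is an assumed trade-off
    # hyperparameter, not a value from the paper.
    ce = cross_entropy(logits, labels)
    cons = kl_div(softmax(logits), softmax(logits_aug))
    return ce + lam * cons
```

When the two views produce identical logits the consistency term vanishes and the loss reduces to plain cross-entropy; the further the augmented-view predictions drift, the larger the penalty, which is the mechanism the observations (i) and (ii) above suggest helps around noisy-labelled points.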

Place, publisher, year, edition, pages
2021.
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-305932
OAI: oai:DiVA.org:kth-305932
DiVA, id: diva2:1618489
Conference
International Conference on Machine Learning (ICML) Workshops, 2021 Workshop on Uncertainty and Robustness in Deep Learning
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

QC 20211220

Available from: 2021-12-09. Created: 2021-12-09. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA

fulltext (606 kB), 1627 downloads
File information
File: FULLTEXT01.pdf
File size: 606 kB
Checksum (SHA-512): 9673186013f4e5e2e898c4de9fce38b84c1bec5386a43e9145116d5b8bd77f91624f5748e886a92c046ee775b848953a68c3ca066e9eed456c5bf3bed66ca9cc
Type: fulltext
MIME type: application/pdf

Person

Englesson, Erik; Azizpour, Hossein

Total: 1631 downloads
The number of downloads is the sum of all downloads of all full texts; it may, for example, include earlier versions that are no longer available.

Total: 328 hits