Consistency Regularization Can Improve Robustness to Label Noise
Englesson, Erik: KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
Azizpour, Hossein: KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0001-5211-6388
2021 (English). Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

Consistency regularization is a commonly used technique for semi-supervised and self-supervised learning. It is an auxiliary objective function that encourages the predictions of the network to be similar in the vicinity of the observed training samples. Hendrycks et al. (2020) have recently shown that such regularization naturally brings test-time robustness to corrupted data and helps with calibration. This paper empirically studies the relevance of consistency regularization for training-time robustness to noisy labels. First, we make two interesting and useful observations regarding the consistency of networks trained with the standard cross-entropy loss on noisy datasets: (i) networks trained on noisy data have lower consistency than those trained on clean data, and (ii) the consistency reduces more significantly around noisy-labelled training data points than around correctly-labelled ones. Then, we show that a simple loss function that encourages consistency improves the robustness of the models to label noise on both synthetic (CIFAR-10, CIFAR-100) and real-world (WebVision) noise, across different noise rates and types, and achieves state-of-the-art results.
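The kind of consistency objective described in the abstract can be sketched as follows. This is an illustrative NumPy sketch, not the exact loss from the paper: it assumes the network produces logits for a sample and for a perturbed (e.g. augmented) copy of it, and adds a KL-divergence consistency term to the standard cross-entropy loss. The function names (`kl_consistency`, `total_loss`) and the weighting parameter `lam` are hypothetical.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the integer class labels.
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def kl_consistency(logits_clean, logits_aug):
    # KL(p_clean || p_aug), averaged over the batch: penalizes the
    # network for predicting differently on a perturbed input.
    p = softmax(logits_clean)
    q = softmax(logits_aug)
    return np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1))

def total_loss(logits_clean, logits_aug, labels, lam=1.0):
    # Supervised loss plus a weighted consistency term (lam is a
    # hypothetical trade-off hyperparameter).
    return cross_entropy(logits_clean, labels) + lam * kl_consistency(logits_clean, logits_aug)
```

With `lam = 0` this reduces to plain cross-entropy training; increasing `lam` increasingly penalizes inconsistent predictions around each training point, which is the property the paper's observations (i) and (ii) measure.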

Place, publisher, year, edition, pages
2021.
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:kth:diva-305932
OAI: oai:DiVA.org:kth-305932
DiVA, id: diva2:1618489
Conference
International Conference on Machine Learning (ICML) Workshops, 2021 Workshop on Uncertainty and Robustness in Deep Learning
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

QC 20211220

Available from: 2021-12-09. Created: 2021-12-09. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA

fulltext (606 kB), 1620 downloads
File information
File name: FULLTEXT01.pdf. File size: 606 kB. Checksum (SHA-512):
9673186013f4e5e2e898c4de9fce38b84c1bec5386a43e9145116d5b8bd77f91624f5748e886a92c046ee775b848953a68c3ca066e9eed456c5bf3bed66ca9cc
Type: fulltext. Mimetype: application/pdf.

Authority records

Englesson, Erik; Azizpour, Hossein

Total: 1624 downloads
The number of downloads is the sum of all downloads of full texts. It may include, for example, previous versions that are no longer available.

Total: 326 hits