Adding Seemingly Uninformative Labels Helps in Low Data Regimes
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST). KTH, Centres, Science for Life Laboratory, SciLifeLab. AstraZeneca, Gothenburg, Sweden. ORCID iD: 0000-0003-1401-3497
KTH, School of Electrical Engineering and Computer Science (EECS).
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST). KTH, Centres, Science for Life Laboratory, SciLifeLab. ORCID iD: 0000-0003-0101-1505
Karolinska Institutet, Stockholm, Sweden; Capio Sankt Göran Hospital, Stockholm, Sweden.
2020 (English). In: Proceedings of Machine Learning Research - International Conference on Machine Learning, ICML 2020, ML Research Press, 2020, p. 6775-6784. Conference paper, Published paper (Refereed)
Abstract [en]

Evidence suggests that networks trained on large datasets generalize well not solely because of the numerous training examples, but also because of the class diversity, which encourages learning of enriched features. This raises the question of whether the same holds when data is scarce: is there an advantage to learning with additional labels in low-data regimes? In this work, we consider a task that requires difficult-to-obtain expert annotations: tumor segmentation in mammography images. We show that, in low-data settings, performance can be improved by complementing the expert annotations with seemingly uninformative labels from non-expert annotators, turning the task into a multi-class problem. We reveal that these gains increase when less expert data is available, and uncover several interesting properties through further studies. We demonstrate our findings on CSAW-S, a new dataset that we introduce here, and confirm them on two public datasets.
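The core idea of the abstract — layering non-expert labels under the expert tumor annotation so a binary segmentation task becomes multi-class — can be sketched as a simple label-merging step. This is a minimal illustrative sketch, not the paper's implementation; the class names and the priority order (expert annotation overriding non-expert ones on overlap) are assumptions for illustration only.

```python
import numpy as np

# Illustrative class indices; the actual CSAW-S label set is not given here.
BACKGROUND, TISSUE, MUSCLE, TUMOR = 0, 1, 2, 3

def build_multiclass_labels(tumor_mask, tissue_mask, muscle_mask):
    """Merge binary masks into one multi-class label map.

    Non-expert masks (tissue, muscle) fill in first; the expert tumor
    mask is written last so it wins wherever the masks overlap.
    """
    labels = np.full(tumor_mask.shape, BACKGROUND, dtype=np.int64)
    labels[tissue_mask] = TISSUE   # seemingly uninformative, non-expert
    labels[muscle_mask] = MUSCLE   # seemingly uninformative, non-expert
    labels[tumor_mask] = TUMOR     # expert annotation, highest priority
    return labels

# Toy 2x2 example: one tumor pixel, one tissue-only pixel, one muscle pixel.
tumor  = np.array([[True,  False], [False, False]])
tissue = np.array([[True,  True],  [False, False]])
muscle = np.array([[False, False], [True,  False]])
labels = build_multiclass_labels(tumor, tissue, muscle)
# labels == [[TUMOR, TISSUE], [MUSCLE, BACKGROUND]]
```

A segmentation network trained with a standard multi-class cross-entropy loss on such label maps would then receive supervision from the extra classes, which is the setting the abstract describes for low-data regimes.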

Place, publisher, year, edition, pages
ML Research Press, 2020. p. 6775-6784
Series
Proceedings of Machine Learning Research ; 119
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:kth:diva-373868
Scopus ID: 2-s2.0-105022421154
OAI: oai:DiVA.org:kth-373868
DiVA, id: diva2:2020651
Conference
37th International Conference on Machine Learning, ICML 2020, Virtual/Online, July 13-18, 2020
Note

Not duplicate with diva 1599878

QC 20251211

Available from: 2025-12-11. Created: 2025-12-11. Last updated: 2025-12-11. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Scopus fulltext

Authority records

Matsoukas, Christos; Bou Hernandez, Albert I.; Liu, Yue; Miranda, Gisele; Konuk, Emir; Fredin Haslum, Johan; Smith, Kevin

Search in DiVA

By author/editor
Matsoukas, Christos; Bou Hernandez, Albert I.; Liu, Yue; Miranda, Gisele; Konuk, Emir; Fredin Haslum, Johan; Smith, Kevin
By organisation
Computational Science and Technology (CST); Science for Life Laboratory, SciLifeLab; School of Electrical Engineering and Computer Science (EECS)
Computer graphics and computer vision

Search outside of DiVA

Google / Google Scholar
