k-NN as a Simple and Effective Estimator of Transferability
KTH, Centres, Science for Life Laboratory, SciLifeLab. KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST). ORCID iD: 0000-0001-6204-0778
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST). KTH, Centres, Science for Life Laboratory, SciLifeLab. ORCID iD: 0000-0003-1401-3497
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST). KTH, Centres, Science for Life Laboratory, SciLifeLab. ORCID iD: 0000-0003-2920-8510
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST). KTH, Centres, Science for Life Laboratory, SciLifeLab. ORCID iD: 0000-0001-9437-4553
2025 (English). In: Transactions on Machine Learning Research, E-ISSN 2835-8856, Vol. 2025-October. Article in journal (Refereed). Published.
Abstract [en]

How well can one expect transfer learning to work in a new setting where the domain is shifted, the task is different, and the architecture changes? Many transfer learning metrics have been proposed to answer this question. But how accurate are their predictions in a realistic new setting? We conducted an extensive evaluation involving over 42,000 experiments comparing 23 transferability metrics across 16 different datasets to assess their ability to predict transfer performance for image classification tasks. Our findings reveal that none of the existing metrics perform well across the board. However, we find that a simple k-nearest neighbor evaluation – as is commonly used to evaluate feature quality for self-supervision – not only surpasses existing metrics, but also offers better computational efficiency and ease of implementation.
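The abstract's proposed estimator is a standard k-NN evaluation on frozen features, as commonly used to probe self-supervised representations. A minimal sketch of that idea, with synthetic feature vectors standing in for real embeddings (function name, data, and parameters here are illustrative, not the paper's implementation):

```python
# Hedged sketch: score a frozen feature extractor by k-NN accuracy on a
# labelled target set; higher accuracy suggests better transferability.
# The "embeddings" below are synthetic stand-ins, not real model features.
import numpy as np

def knn_transferability(train_feats, train_labels, val_feats, val_labels, k=5):
    """k-NN classification accuracy of frozen features on the target task."""
    # Pairwise Euclidean distances: (n_val, n_train).
    d = np.linalg.norm(val_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    # Indices of the k nearest training samples for each validation sample.
    nn = np.argsort(d, axis=1)[:, :k]
    # Majority vote among the neighbours' labels.
    preds = np.array([np.bincount(train_labels[idx]).argmax() for idx in nn])
    return float((preds == val_labels).mean())

rng = np.random.default_rng(0)
# Two well-separated synthetic classes as stand-in feature clusters.
f0 = rng.normal(0.0, 1.0, size=(50, 8))
f1 = rng.normal(4.0, 1.0, size=(50, 8))
feats = np.vstack([f0, f1])
labels = np.array([0] * 50 + [1] * 50)

score = knn_transferability(feats, labels, feats, labels, k=5)
print(score)
```

Because the estimate needs only a single feature pass and a nearest-neighbour lookup, it avoids the per-metric fitting that many transferability scores require, which is the efficiency advantage the abstract points to.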

Place, publisher, year, edition, pages
Transactions on Machine Learning Research, 2025. Vol. 2025-October
National Category
Computer graphics and computer vision; Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-372408
Scopus ID: 2-s2.0-105018634464
OAI: oai:DiVA.org:kth-372408
DiVA, id: diva2:2012018
Note

QC 20251106

Available from: 2025-11-06. Created: 2025-11-06. Last updated: 2025-11-06. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Scopus

Authority records

Sorkhei, Moein; Matsoukas, Christos; Fredin Haslum, Johan; Konuk, Emir; Smith, Kevin

