KTH Publications (kth.se)
Publications (10 of 16)
Sorkhei, M., Matsoukas, C., Fredin Haslum, J., Konuk, E. & Smith, K. (2025). k-NN as a Simple and Effective Estimator of Transferability. Transactions on Machine Learning Research, 2025-October
2025 (English) In: Transactions on Machine Learning Research, E-ISSN 2835-8856, Vol. 2025-October. Article in journal (Refereed). Published
Abstract [en]

How well can one expect transfer learning to work in a new setting where the domain is shifted, the task is different, and the architecture changes? Many transfer learning metrics have been proposed to answer this question. But how accurate are their predictions in a realistic new setting? We conducted an extensive evaluation involving over 42,000 experiments comparing 23 transferability metrics across 16 different datasets to assess their ability to predict transfer performance for image classification tasks. Our findings reveal that none of the existing metrics perform well across the board. However, we find that a simple k-nearest neighbor evaluation – as is commonly used to evaluate feature quality for self-supervision – not only surpasses existing metrics, but also offers better computational efficiency and ease of implementation.
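The k-NN evaluation the abstract refers to can be sketched in a few lines. The following is a minimal leave-one-out illustration in the spirit of the paper, not its exact protocol; the choice of k, the Euclidean metric, and the toy data are assumptions:

```python
import numpy as np

def knn_transferability(features, labels, k=5):
    """Leave-one-out k-NN accuracy on target-dataset embeddings extracted
    with the frozen pretrained model: the higher the accuracy, the better
    the expected transfer performance."""
    n = len(features)
    # Pairwise squared Euclidean distances between all embeddings.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)  # exclude each point from its own neighbours
    correct = 0
    for i in range(n):
        nn = np.argsort(d2[i])[:k]       # k nearest neighbours of sample i
        votes = np.bincount(labels[nn])  # majority vote over their labels
        correct += int(np.argmax(votes) == labels[i])
    return correct / n

# Two well-separated clusters of "embeddings" -> near-perfect k-NN accuracy.
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(0, 0.1, (20, 8)), rng.normal(3, 0.1, (20, 8))])
labs = np.array([0] * 20 + [1] * 20)
print(knn_transferability(feats, labs, k=5))  # -> 1.0
```

No fine-tuning is needed, which is what makes the estimator cheap compared with metrics that require training a probe.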

Place, publisher, year, edition, pages
Transactions on Machine Learning Research, 2025
National Category
Computer graphics and computer vision Computer Sciences
Identifiers
urn:nbn:se:kth:diva-372408 (URN)2-s2.0-105018634464 (Scopus ID)
Note

QC 20251106

Available from: 2025-11-06 Created: 2025-11-06 Last updated: 2025-11-06Bibliographically approved
Huix, J. P., Ganeshan, A. R., Fredin Haslum, J., Söderberg, M., Matsoukas, C. & Smith, K. (2024). Are Natural Domain Foundation Models Useful for Medical Image Classification?. In: Proceedings - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024: . Paper presented at 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024, Waikoloa, United States of America, Jan 4 2024 - Jan 8 2024 (pp. 7619-7628). Institute of Electrical and Electronics Engineers (IEEE)
2024 (English) In: Proceedings - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 7619-7628. Conference paper, Published paper (Refereed)
Abstract [en]

The deep learning field is converging towards the use of general foundation models that can be easily adapted for diverse tasks. While this paradigm shift has become common practice within the field of natural language processing, progress has been slower in computer vision. In this paper we attempt to address this issue by investigating the transferability of various state-of-the-art foundation models to medical image classification tasks. Specifically, we evaluate the performance of five foundation models, namely SAM, SEEM, DINOv2, BLIP, and OpenCLIP, across four well-established medical imaging datasets. We explore different training settings to fully harness the potential of these models. Our study shows mixed results. DINOv2 consistently outperforms the standard practice of ImageNet pretraining. However, other foundation models failed to consistently beat this established baseline, indicating limitations in their transferability to medical image classification tasks.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Algorithms: Machine learning architectures, formulations, and algorithms; Applications: Biomedical / healthcare / medicine; Datasets and evaluations
National Category
Computer Sciences Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-350585 (URN)10.1109/WACV57701.2024.00746 (DOI)001222964607075 ()2-s2.0-85184972028 (Scopus ID)
Conference
2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024, Waikoloa, United States of America, Jan 4 2024 - Jan 8 2024
Note

Part of ISBN 9798350318920

QC 20240718

Available from: 2024-07-18 Created: 2024-07-18 Last updated: 2025-12-08Bibliographically approved
Matsoukas, C. (2024). Artificial Intelligence for Medical Image Analysis with Limited Data. (Doctoral dissertation). Stockholm: KTH Royal Institute of Technology
2024 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Artificial intelligence (AI) is progressively influencing business, science, and society, leading to major socioeconomic changes. However, its application in real-world problems varies significantly across different sectors. One of the primary challenges limiting the widespread adoption of AI in certain areas is data availability. Medical image analysis is one of these domains, where the process of gathering data and labels is often challenging or even infeasible due to legal and privacy concerns, or due to the specific characteristics of diseases. Logistical obstacles, expensive diagnostic methods and the necessity for invasive procedures add to the difficulty of data collection. Even when ample data exists, the substantial cost and logistical hurdles in acquiring expert annotations pose considerable challenges. Thus, there is a pressing need for the development of AI models that can operate in low-data settings.

In this thesis, we explore methods that improve the generalization and robustness of models when data availability is limited. We highlight the importance of model architecture and initialization, considering their associated assumptions and biases, to determine their effectiveness in such settings. We find that models with fewer built-in assumptions in their architecture need to be initialized with pre-trained weights, executed via transfer learning. This prompts us to explore how well transfer learning performs when models are initially trained in the natural domains, where data is abundant, before being used for medical image analysis where data is limited. We identify key factors responsible for transfer learning’s efficacy, and explore its relationship with data size, model architecture, and the distance between the target domain and the one used for pretraining. In cases where expert labels are scarce, we introduce the concept of complementary labels as the means to expand the labeling set. By providing information about other objects in the image, these labels help develop richer representations, leading to improved performance in low-data regimes. We showcase the utility of these methods by streamlining the histopathology-based assessment of chronic kidney disease in an industrial pharmaceutical setting, reducing the turnaround time of study evaluations by 97%. Our results demonstrate that AI models developed for low data regimes are capable of delivering industrial-level performance, proving their practical use in drug discovery and healthcare.

Abstract [sv]

Artificiell intelligens (AI) påverkar gradvis allt fler domäner såsom affärsvärlden, vetenskapsvärlden och samhället i stort, vilket leder till stora socioekonomiska förändringar. Dock varierar dess tillämpning i verkliga problem avsevärt mellan olika sektorer. En av de främsta utmaningarna som begränsar den breda adoptionen av AI inom vissa områden är tillgången på data. Analys av medicinska bilder är en av dessa domäner, där möjligheten att samla data och annoteringar ofta är begränsad eller till och med omöjlig på grund av juridiska och integritetsmässiga skäl, eller på grund av sjukdomars specifika karaktäristika. Logistiska hinder, dyra diagnostiska metoder och behovet av invasiva procedurer försvårar ytterligare datainsamling. Även när det finns gott om data utgör den betydande kostnaden och logistiska hinder för att skaffa expertannotationer betydande utmaningar. Således finns det ett tydligt behov för utvecklingen av AI-modeller som även kan fungera med begränsade mängder data.

I denna avhandling utforskar vi metoder som förbättrar generaliseringen och robustheten hos modeller när tillgången på data är begränsad. Vi betonar vikten av modellarkitektur och initialisering, med fokus på aspekter som inbyggda antaganden, för att avgöra deras effektivitet under sådana förhållanden. Vi finner att modeller med färre inbyggda antaganden i sin arkitektur behöver initialiseras med förtränade vikter, genomfört via överföringsinlärning. Detta leder oss till att utforska hur väl överföringsinlärning presterar när modeller initialt tränas inom de naturliga domänerna, där data är rikligt tillgänglig, innan de används för analys av medicinska bilder där data är begränsad. Vi identifierar nyckelfaktorer som påverkar överföringsinlärningens effektivitet och utforskar påverkan av datasetsstorlek, modellarkitektur och avståndet mellan måldomänen och den som används för förträning. I fall där få expertannoteringar är tillgängliga introducerar vi konceptet kompletterande annoteringar som en strategi för att utöka annoteringsmängden. Genom att tillhandahålla information om andra objekt i bilden hjälper dessa annoteringar till att utveckla rikare representationer, vilket leder till förbättrad prestanda i domäner med begränsade mängder data. Vi visar användbarheten av dessa metoder genom att effektivisera den histopatologi-baserade utvärderingen av kronisk njursjukdom i en industriell miljö, vilket reducerar tiden för studieutvärderingar med 97 %. Våra resultat demonstrerar att AI-modeller utvecklade för förhållanden med små datamängder är kapabla att leverera effektivisering i industriellt relevanta situationer, vilket visar på deras praktiska användbarhet inom läkemedelsupptäckt och hälsovård.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2024. p. xi, 109
Series
TRITA-EECS-AVL ; 2024:48
National Category
Medical Imaging Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-346236 (URN)978-91-8040-928-5 (ISBN)
Public defence
2024-05-30, Kollegiesalen, Brinellvägen 6, Stockholm, 13:00 (English)
Opponent
Supervisors
Note

QC 20240508

Available from: 2024-05-08 Created: 2024-05-07 Last updated: 2025-02-09Bibliographically approved
Fredin Haslum, J., Matsoukas, C., Leuchowius, K.-J. & Smith, K. (2024). Bridging Generalization Gaps in High Content Imaging Through Online Self-Supervised Domain Adaptation. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 2024. Paper presented at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 03-08 January 2024 (pp. 7723-7732).
2024 (English) In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 2024, p. 7723-7732. Conference paper, Published paper (Refereed)
Abstract [en]

High Content Imaging (HCI) plays a vital role in modern drug discovery and development pipelines, facilitating various stages from hit identification to candidate drug characterization. Applying machine learning models to these datasets can prove challenging as they typically consist of multiple batches, affected by experimental variation, especially if different imaging equipment has been used. Moreover, as new data arrive, it is preferable that they are analyzed in an online fashion. To overcome this, we propose CODA, an online self-supervised domain adaptation approach. CODA divides the classifier’s role into a generic feature extractor and a task-specific model. We adapt the feature extractor’s weights to the new domain using cross-batch self-supervision while keeping the task-specific model unchanged. Our results demonstrate that this strategy significantly reduces the generalization gap, achieving up to a 300% improvement when applied to data from different labs utilizing different microscopes. CODA can be applied to new, unlabeled out-of-domain data sources of different sizes, from a single plate to multiple experimental batches.
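The division of labour the abstract describes, adapting only the feature extractor while the task-specific model stays frozen, can be sketched as follows. The consistency loss and numeric-gradient update here are simplified stand-ins for CODA's actual cross-batch self-supervision objective:

```python
import numpy as np

rng = np.random.default_rng(0)

# Source-trained pipeline split in two: a linear feature extractor W that
# is adapted online, and a task-specific head that stays frozen (omitted
# here, since only W's weights ever change during adaptation).
W = rng.normal(size=(4, 8))

def ssl_loss(W, x1, x2):
    """Toy cross-view consistency loss: two views of the same sample should
    map to similar unit-norm features (normalisation avoids collapse)."""
    f1, f2 = W @ x1, W @ x2
    f1 = f1 / np.linalg.norm(f1)
    f2 = f2 / np.linalg.norm(f2)
    return float(((f1 - f2) ** 2).sum())

def adapt_step(W, x1, x2, lr=0.1, eps=1e-5):
    """One online adaptation step on the extractor, via numeric gradients
    for brevity (a real implementation would use backprop)."""
    base = ssl_loss(W, x1, x2)
    g = np.zeros_like(W)
    for i in np.ndindex(W.shape):
        Wp = W.copy()
        Wp[i] += eps
        g[i] = (ssl_loss(Wp, x1, x2) - base) / eps
    return W - lr * g

# Two noisy "views" of one new-domain sample (stand-in for augmentations).
x = rng.normal(size=8)
x1 = x + 0.3 * rng.normal(size=8)
x2 = x + 0.3 * rng.normal(size=8)

before = ssl_loss(W, x1, x2)
for _ in range(20):
    W = adapt_step(W, x1, x2)
after = ssl_loss(W, x1, x2)
assert after < before  # the extractor adapted; the frozen head is untouched
```

Keeping the head fixed is what lets the adapted features plug straight back into the existing task model without any labels from the new domain.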

National Category
Computer graphics and computer vision
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-346570 (URN)10.1109/WACV57701.2024.00756 (DOI)001222964607085 ()2-s2.0-85192009362 (Scopus ID)
Conference
the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 03-08 January 2024
Note

QC 20240522

Available from: 2024-05-17 Created: 2024-05-17 Last updated: 2025-12-08Bibliographically approved
Solomonidou, A., Malaska, M. J., Lopes, R. M., Coustenis, A., Schoenfeld, A. M., Schmitt, B., . . . Elachi, C. (2024). Detailed chemical composition analysis of the Soi crater region on Titan. Icarus, 421, Article ID 116215.
2024 (English) In: Icarus, ISSN 0019-1035, E-ISSN 1090-2643, Vol. 421, article id 116215. Article in journal (Refereed). Published
Abstract [en]

The Soi crater region (0° to 60°N, 180°W to −110°W), which includes the well-preserved Soi crater in its center, spans a region from Titan's aeolian-dominated equatorial regions to fluvially-dominated high northern latitudes. This provides a rich diversity of landscapes, one that is also representative of the diversity encountered across Titan. Schoenfeld et al. (2023) mapped this region at 1:800,000 scale and produced a geomorphological map showing that the area consists of 22 types of geomorphological units. The Visual and Infrared Mapping Spectrometer (VIMS) coverage of the region enabled the detailed analysis of spectra of 261 different locations using a radiative transfer technique and a mixing model, yielding compositional constraints on Titan's optical surface layer. Additional constraints on the composition of the near-surface substrate were obtained from microwave emissivity. We derived combinations of top-surface materials among dark materials, tholins, water ice, and methane, suggesting that dark mobile organic material at equatorial and high latitudes indicates “young” terrains and compositions, while tholin/water-ice mixtures that dominate areas around latitude 35°N correspond to older plains deposits that we interpret to be the end stage of aeolian and fluvial transport and deposition. We found no spectral evidence of CO2, HC3N, and NH3 ice. We use the stratigraphic relations between the various mapping units and the relation between the geomorphology and the composition of the surface layers to build hypotheses on the origin and evolution of the regional geology. We suggest that sedimentary deposits, likely aeolian, are dominant in the region, with fluvial activity and leaching changing the nature of the top surfaces of the midlatitude areas of the Soi crater region.

Place, publisher, year, edition, pages
Elsevier BV, 2024
Keywords
Icy satellites, Ocean worlds, Radiative transfer, Surface composition, Titan
National Category
Astronomy, Astrophysics and Cosmology Geology
Identifiers
urn:nbn:se:kth:diva-351915 (URN)10.1016/j.icarus.2024.116215 (DOI)001288253500001 ()2-s2.0-85200225065 (Scopus ID)
Note

QC 20240827

Available from: 2024-08-19 Created: 2024-08-19 Last updated: 2024-09-05Bibliographically approved
Konuk, E., Matsoukas, C., Sorkhei, M., Lertsiravarameth, P. & Smith, K. (2024). Learning from Offline Foundation Features with Tensor Augmentations. In: A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak and C. Zhang (Ed.), Advances in Neural Information Processing Systems 37 (NeurIPS 2024): . Paper presented at NeurIPS 2024, the Thirty-Eighth Annual Conference on Neural Information Processing Systems, Vancouver, December 10-15, 2024. Curran Associates
2024 (English) In: Advances in Neural Information Processing Systems 37 (NeurIPS 2024) / [ed] A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak and C. Zhang, Curran Associates, 2024. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce Learning from Offline Foundation Features with Tensor Augmentations (LOFF-TA), an efficient training scheme designed to harness the capabilities of foundation models in limited resource settings where their direct development is not feasible. LOFF-TA involves training a compact classifier on cached feature embeddings from a frozen foundation model, resulting in up to 37× faster training and up to 26× reduced GPU memory usage. Because the embeddings of augmented images would be too numerous to store, yet the augmentation process is essential for training, we propose to apply tensor augmentations to the cached embeddings of the original non-augmented images. LOFF-TA makes it possible to leverage the power of foundation models, regardless of their size, in settings with limited computational capacity. Moreover, LOFF-TA can be used to apply foundation models to high-resolution images without increasing compute. In certain scenarios, we find that training with LOFF-TA yields better results than directly fine-tuning the foundation model.
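The caching-plus-tensor-augmentation idea can be sketched like this; the backbone, the augmentation parameters, and the mixing form are illustrative stand-ins rather than the paper's exact choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen foundation model: a fixed random projection.
# Embeddings are computed once and cached, so the expensive backbone is
# never run again during classifier training.
P = rng.normal(size=(16, 8))

def frozen_backbone(x):
    return np.tanh(x @ P)

images = rng.normal(size=(32, 16))
cache = frozen_backbone(images)  # cached once, reused every epoch

def tensor_augment(z, rng, noise=0.1, p_mix=0.5):
    """Augment cached embeddings directly, since storing embeddings of all
    augmented images would be infeasible: additive noise plus occasional
    mixing between cached embeddings (illustrative choices)."""
    z = z + noise * rng.normal(size=z.shape)
    if rng.random() < p_mix:
        lam = rng.uniform(0.7, 1.0)
        z = lam * z + (1 - lam) * z[rng.permutation(len(z))]
    return z

# Each "epoch" the compact classifier would see a fresh augmented view of
# the cache instead of re-encoding augmented images through the backbone.
batch = tensor_augment(cache, rng)
assert batch.shape == cache.shape
```

The speed and memory savings quoted in the abstract come precisely from never running the backbone after the initial caching pass.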

Place, publisher, year, edition, pages
Curran Associates, 2024
Keywords
Adaptation, Transfer learning, Foundation models, Augmentation
National Category
Computer graphics and computer vision
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-354832 (URN)2-s2.0-105000782383 (Scopus ID)
Conference
NeurIPS 2024, the Thirty-Eighth Annual Conference on Neural Information Processing Systems, Vancouver, December 10-15, 2024
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

QC 20250408

Available from: 2024-10-14 Created: 2024-10-14 Last updated: 2025-04-08Bibliographically approved
Fredin Haslum, J., Matsoukas, C., Leuchowius, K.-J., Müllers, E. & Smith, K. (2023). Metadata-guided Consistency Learning for High Content Images. In: Proceedings of Machine Learning Research, Volume 227: Medical Imaging with Deep Learning: . Paper presented at 6th International Conference on Medical Imaging with Deep Learning, MIDL 2023, Nashville, United States of America, Jul 10 2023 - Jul 12 2023. ML Research Press
2023 (English) In: Proceedings of Machine Learning Research, Volume 227: Medical Imaging with Deep Learning, ML Research Press, 2023. Conference paper, Published paper (Refereed)
Abstract [en]

High content imaging assays can capture rich phenotypic response data for large sets of compound treatments, aiding in the characterization and discovery of novel drugs. However, extracting representative features from high content images that can capture subtle nuances in phenotypes remains challenging. The lack of high-quality labels makes it difficult to achieve satisfactory results with supervised deep learning. Self-supervised learning methods have shown great success on natural images, and offer an attractive alternative for microscopy images as well. However, we find that self-supervised learning techniques underperform on high content imaging assays. One challenge is the undesirable domain shifts present in the data, known as batch effects, which are caused by biological noise or uncontrolled experimental conditions. To this end, we introduce Cross-Domain Consistency Learning (CDCL), a self-supervised approach that is able to learn in the presence of batch effects. CDCL enforces the learning of biological similarities while disregarding undesirable batch-specific signals, leading to more useful and versatile representations. These features are organised according to their morphological changes and are more useful for downstream tasks – such as distinguishing treatments and mechanisms of action.

Place, publisher, year, edition, pages
ML Research Press, 2023
National Category
Computer and Information Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-346566 (URN)001221108600055 ()2-s2.0-85189329755 (Scopus ID)
Conference
6th International Conference on Medical Imaging with Deep Learning, MIDL 2023, Nashville, United States of America, Jul 10 2023 - Jul 12 2023
Note

QC 20240521

Available from: 2024-05-17 Created: 2024-05-17 Last updated: 2025-02-27Bibliographically approved
Liu, Y., Matsoukas, C., Strand, F., Azizpour, H. & Smith, K. (2023). PatchDropout: Economizing Vision Transformers Using Patch Dropout. In: 2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV): . Paper presented at 23rd IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), JAN 03-07, 2023, Waikoloa, HI (pp. 3942-3951). Institute of Electrical and Electronics Engineers (IEEE)
2023 (English) In: 2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 3942-3951. Conference paper, Published paper (Refereed)
Abstract [en]

Vision transformers have demonstrated the potential to outperform CNNs in a variety of vision tasks. But the computational and memory requirements of these models prohibit their use in many applications, especially those that depend on high-resolution images, such as medical image classification. Efforts to train ViTs more efficiently are overly complicated, necessitating architectural changes or intricate training schemes. In this work, we show that standard ViT models can be efficiently trained at high resolution by randomly dropping input image patches. This simple approach, PatchDropout, reduces FLOPs and memory by at least 50% in standard natural image datasets such as IMAGENET, and those savings only increase with image size. On CSAW, a high-resolution medical dataset, we observe a 5× savings in computation and memory using PatchDropout, along with a boost in performance. For practitioners with a fixed computational or memory budget, PatchDropout makes it possible to choose image resolution, hyperparameters, or model size to get the most performance out of their model.
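The core operation, randomly keeping a subset of the input patch tokens, can be sketched as follows (a simplified view: in the actual method the class token is retained and a fresh subset is sampled at each training step):

```python
import numpy as np

def patch_dropout(tokens, keep_ratio=0.5, rng=None):
    """Randomly keep a subset of a ViT's input patch tokens.
    tokens: array of shape (num_patches, dim).  Compute and memory in the
    subsequent transformer layers scale with the number of kept tokens."""
    rng = rng or np.random.default_rng()
    n = tokens.shape[0]
    n_keep = max(1, int(n * keep_ratio))
    idx = np.sort(rng.choice(n, size=n_keep, replace=False))
    return tokens[idx]

tokens = np.random.default_rng(0).normal(size=(196, 64))  # 14x14 patch grid
kept = patch_dropout(tokens, keep_ratio=0.5, rng=np.random.default_rng(1))
print(kept.shape)  # -> (98, 64)
```

Because self-attention cost grows quadratically with sequence length, halving the kept tokens yields more than a 50% reduction in attention FLOPs.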

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE Winter Conference on Applications of Computer Vision, ISSN 2472-6737
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-333235 (URN)10.1109/WACV56688.2023.00394 (DOI)000971500204006 ()2-s2.0-85149011721 (Scopus ID)
Conference
23rd IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), JAN 03-07, 2023, Waikoloa, HI
Note

QC 20230731

Available from: 2023-07-31 Created: 2023-07-31 Last updated: 2025-02-07Bibliographically approved
Matsoukas, C., Fredin Haslum, J., Sorkhei, M., Söderberg, M. & Smith, K. (2022). What Makes Transfer Learning Work for Medical Images: Feature Reuse & Other Factors. In: 2022 IEEE/CVF conference on computer vision and pattern recognition (CVPR): . Paper presented at IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), JUN 18-24, 2022, New Orleans, LA (pp. 9215-9224). Institute of Electrical and Electronics Engineers (IEEE)
2022 (English) In: 2022 IEEE/CVF conference on computer vision and pattern recognition (CVPR), Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 9215-9224. Conference paper, Published paper (Refereed)
Abstract [en]

Transfer learning is a standard technique to transfer knowledge from one domain to another. For applications in medical imaging, transfer from ImageNet has become the de facto approach, despite differences in the tasks and image characteristics between the domains. However, it is unclear what factors determine whether, and to what extent, transfer learning to the medical domain is useful. The longstanding assumption that features from the source domain get reused has recently been called into question. Through a series of experiments on several medical image benchmark datasets, we explore the relationship between transfer learning, data size, the capacity and inductive bias of the model, as well as the distance between the source and target domain. Our findings suggest that transfer learning is beneficial in most cases, and we characterize the important role feature reuse plays in its success.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Series
IEEE Conference on Computer Vision and Pattern Recognition, ISSN 1063-6919
National Category
Computational Mathematics
Identifiers
urn:nbn:se:kth:diva-322794 (URN)10.1109/CVPR52688.2022.00901 (DOI)000870759102028 ()2-s2.0-85137378486 (Scopus ID)
Conference
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), JUN 18-24, 2022, New Orleans, LA
Note

Part of proceedings ISBN 978-1-6654-6946-3

QC 20230131

Available from: 2023-01-31 Created: 2023-01-31 Last updated: 2024-05-20Bibliographically approved
Matsoukas, C., Hernandez, A. B., Liu, Y., Dembrower, K., Miranda, G., Konuk, E., . . . Smith, K. (2020). Adding seemingly uninformative labels helps in low data regimes. In: 37th International Conference on Machine Learning, ICML 2020: . Paper presented at 37th International Conference on Machine Learning, ICML 2020, 13 July 2020 through 18 July 2020 (pp. 6731-6740). International Machine Learning Society (IMLS)
2020 (English) In: 37th International Conference on Machine Learning, ICML 2020, International Machine Learning Society (IMLS), 2020, p. 6731-6740. Conference paper, Published paper (Refereed)
Abstract [en]

Evidence suggests that networks trained on large datasets generalize well not solely because of the numerous training examples, but also the class diversity, which encourages learning of enriched features. This raises the question of whether this remains true when data is scarce - is there an advantage to learning with additional labels in low-data regimes? In this work, we consider a task that requires difficult-to-obtain expert annotations: tumor segmentation in mammography images. We show that, in low-data settings, performance can be improved by complementing the expert annotations with seemingly uninformative labels from non-expert annotators, turning the task into a multi-class problem. We reveal that these gains increase when less expert data is available, and uncover several interesting properties through further studies. We demonstrate our findings on CSAW-S, a new dataset that we introduce here, and confirm them on two public datasets.
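The complementary-labels idea, expanding a binary expert mask into a multi-class target with non-expert annotations, can be sketched as follows (the class layout and the priority rule are illustrative assumptions, not the paper's exact construction):

```python
import numpy as np

# Expert label: a tumor mask only (class 1).  A non-expert "complementary"
# label marks some other object in the image (class 2, hypothetical),
# turning binary segmentation into a multi-class problem.
expert_tumor = np.zeros((6, 6), dtype=int)
expert_tumor[1:3, 1:3] = 1

nonexpert_other = np.zeros((6, 6), dtype=int)
nonexpert_other[4:6, 4:6] = 1

target = np.zeros((6, 6), dtype=int)                 # 0 = background
target[expert_tumor == 1] = 1                        # expert class wins ties
target[(nonexpert_other == 1) & (target == 0)] = 2   # complementary class

assert np.array_equal(np.unique(target), [0, 1, 2])
```

Training a segmentation network against the multi-class `target` instead of the binary mask is what forces it to learn features for the surrounding objects too.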

Place, publisher, year, edition, pages
International Machine Learning Society (IMLS), 2020
Keywords
Image segmentation, Machine learning, Data settings, Expert annotations, Large datasets, Mammography images, Multi-class problems, Training examples, Tumor segmentation
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-302861 (URN)2-s2.0-85105183400 (Scopus ID)
Conference
37th International Conference on Machine Learning, ICML 2020, 13 July 2020 through 18 July 2020
Note

QC 20211002

Available from: 2021-10-02 Created: 2021-10-02 Last updated: 2025-02-07Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0003-1401-3497
