Publications (4 of 4)
Christiansen, F., Konuk, E., Ganeshan, A. R., Welch, R., Palés Huix, J., Czekierdowski, A., . . . Epstein, E. (2025). International multicenter validation of AI-driven ultrasound detection of ovarian cancer. Nature Medicine, 31(1), 189-196.
2025 (English). In: Nature Medicine, ISSN 1078-8956, E-ISSN 1546-170X, Vol. 31, no. 1, p. 189-196. Article in journal (Refereed). Published.
Abstract [en]

Ovarian lesions are common and often incidentally detected. A critical shortage of expert ultrasound examiners has raised concerns about unnecessary interventions and delayed cancer diagnoses. Deep learning has shown promising results in the detection of ovarian cancer in ultrasound images; however, external validation is lacking. In this international multicenter retrospective study, we developed and validated transformer-based neural network models using a comprehensive dataset of 17,119 ultrasound images from 3,652 patients across 20 centers in eight countries. Using a leave-one-center-out cross-validation scheme, for each center in turn, we trained a model using data from the remaining centers. The models demonstrated robust performance across centers, ultrasound systems, histological diagnoses and patient age groups, significantly outperforming both expert and non-expert examiners on all evaluated metrics, namely F1 score, sensitivity, specificity, accuracy, Cohen's kappa, Matthews correlation coefficient, diagnostic odds ratio and Youden's J statistic. Furthermore, in a retrospective triage simulation, artificial intelligence (AI)-driven diagnostic support reduced referrals to experts by 63% while significantly surpassing the diagnostic performance of current practice. These results show that transformer-based models exhibit strong generalization and diagnostic accuracy above human expert level, with the potential to alleviate the shortage of expert ultrasound examiners and improve patient outcomes.
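The leave-one-center-out scheme described in the abstract can be sketched with scikit-learn's `LeaveOneGroupOut` splitter. Everything below (feature dimensions, the synthetic data, the logistic-regression stand-in for the transformer models) is illustrative, not the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))          # stand-in for per-image features
y = rng.integers(0, 2, size=200)        # benign (0) vs. malignant (1)
centers = rng.integers(0, 5, size=200)  # which center each case came from

# For each center in turn, train on the remaining centers and
# evaluate on the held-out one.
scores = {}
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=centers):
    held_out = int(centers[test_idx][0])
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores[held_out] = model.score(X[test_idx], y[test_idx])
```

Each entry in `scores` is one held-out center's accuracy, so per-center generalization can be inspected directly.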

Place, publisher, year, edition, pages
Springer Nature, 2025
National Category
Cancer and Oncology; Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-371960 (URN)
10.1038/s41591-024-03329-4 (DOI)
001388159800001 ()
39747679 (PubMedID)
2-s2.0-85214010322 (Scopus ID)
Note

Not duplicate with diva 1905526

QC 20251022

Available from: 2025-10-22. Created: 2025-10-22. Last updated: 2025-10-22. Bibliographically approved.
Denholm, J., Hamidinekoo, A., Burlutskiy, N., Setyo, L. C., Zhang, I., Yousefi, F., . . . Qaiser, T. (2025). Virtual Histological Staining as a Tool for Extending Renal Segmentation Across Stains. Modern Pathology, 38(12), Article ID 100842.
2025 (English). In: Modern Pathology, ISSN 0893-3952, E-ISSN 1530-0285, Vol. 38, no. 12, article id 100842. Article in journal (Refereed). Published.
Abstract [en]

In renal histopathology, the routine clinical use of several histological stains presents challenges for the direct application of stain-specific deep learning-based analysis tools to whole-slide images. We present an approach to the in silico histological staining of kidney tissue in which samples stained with hematoxylin and eosin (H&E) are virtually restained with periodic acid-Schiff (PAS). Our approach is underpinned by cycle-consistent generative adversarial neural networks trained on the National Unified Renal Translational Research Enterprise dataset (the first UK-wide biobank for chronic kidney disease), which features diverse data from 16 nephrology centers. Our work comprises four main components: (1) we developed a virtual staining model that infers PAS staining from H&E; (2) two board-certified pathologists assessed the virtual staining by attempting to distinguish it from real examples; (3) we trained a glomerular segmentation model using three independent renal segmentation datasets (Kidney Precision Medicine Project, Human BioMolecular Atlas Program [Kidney], and data by Jayapandian et al.); and (4) we demonstrated the utility of virtual staining by inferring PAS staining from previously unseen H&E test images and applying our PAS-specific glomerular segmentation model. The two pathologists correctly identified 52.5% and 75.8% of the virtually stained images, respectively, showing overlap between the variability of authentic and synthetic staining. We discuss the utility of virtual staining in digital pathology, the need for pathology-specific testing with respect to chronic damage and minimal changes, and steps for incorporating more stains. Furthermore, alongside this article, we include complete glomerular annotations for 20 Kidney Precision Medicine Project H&E-stained slides.
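The cycle-consistency constraint behind such restaining models (from the CycleGAN formulation) can be illustrated numerically: one generator maps H&E toward PAS, a second maps back, and translating a sample to the other stain and back should reproduce it. The invertible linear "generators" and data below are toy stand-ins, not the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy invertible linear maps standing in for the two CycleGAN generators.
A = rng.normal(size=(8, 8))
A_inv = np.linalg.inv(A)

def G(x):  # "H&E -> PAS"
    return x @ A

def F(y):  # "PAS -> H&E"
    return y @ A_inv

x = rng.normal(size=(4, 8))  # batch of "H&E" samples
y = rng.normal(size=(4, 8))  # batch of "PAS" samples

# L1 cycle-consistency loss: F(G(x)) should recover x, and G(F(y))
# should recover y.
cycle_loss = np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()
```

Because these toy generators are exact inverses, `cycle_loss` is numerically zero here; in real training this term is minimized jointly with the adversarial losses rather than satisfied by construction.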

Place, publisher, year, edition, pages
Elsevier BV, 2025
Keywords
chronic kidney disease, computational pathology, generative artificial intelligence, National Unified Renal Translational Research Enterprise, renal pathology, virtual staining
National Category
Cancer and Oncology
Identifiers
urn:nbn:se:kth:diva-375007 (URN)
10.1016/j.modpat.2025.100842 (DOI)
001595214700001 ()
40712735 (PubMedID)
2-s2.0-105013777530 (Scopus ID)
Note

QC 20260108

Available from: 2026-01-08. Created: 2026-01-08. Last updated: 2026-01-08. Bibliographically approved.
Huix, J. P., Ganeshan, A. R., Fredin Haslum, J., Söderberg, M., Matsoukas, C. & Smith, K. (2024). Are Natural Domain Foundation Models Useful for Medical Image Classification? In: Proceedings - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024. Paper presented at the 2024 IEEE Winter Conference on Applications of Computer Vision (WACV 2024), Waikoloa, United States of America, January 4-8, 2024 (pp. 7619-7628). Institute of Electrical and Electronics Engineers (IEEE).
2024 (English). In: Proceedings - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 7619-7628. Conference paper, Published paper (Refereed).
Abstract [en]

The deep learning field is converging towards the use of general foundation models that can be easily adapted for diverse tasks. While this paradigm shift has become common practice within the field of natural language processing, progress has been slower in computer vision. In this paper, we attempt to address this issue by investigating the transferability of various state-of-the-art foundation models to medical image classification tasks. Specifically, we evaluate the performance of five foundation models, namely SAM, SEEM, DINOv2, BLIP, and OpenCLIP, across four well-established medical imaging datasets. We explore different training settings to fully harness the potential of these models. Our study shows mixed results: DINOv2 consistently outperforms the standard practice of ImageNet pretraining, but the other foundation models failed to consistently beat this established baseline, indicating limitations in their transferability to medical image classification tasks.
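A common way to run such transfer evaluations is linear probing: freeze the foundation model and fit only a linear classifier on its embeddings. The sketch below uses synthetic vectors in place of real backbone features (e.g. DINOv2 embeddings); all names, dimensions, and data are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-ins for embeddings a frozen backbone would emit
# for a two-class medical imaging dataset.
emb = rng.normal(size=(300, 64))
labels = (emb[:, 0] + 0.5 * rng.normal(size=300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    emb, labels, test_size=0.25, random_state=0, stratify=labels
)

# Linear probe: only this classifier is trained; the backbone stays frozen.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = probe.score(X_te, y_te)
```

Comparing `acc` across backbones, and against an ImageNet-pretrained baseline, yields the kind of head-to-head numbers such studies report; full fine-tuning would instead update the backbone weights as well.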

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Algorithms: Machine learning architectures, formulations, and algorithms; Applications: Biomedical / healthcare / medicine; Datasets and evaluations
National Category
Computer Sciences; Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-350585 (URN)
10.1109/WACV57701.2024.00746 (DOI)
001222964607075 ()
2-s2.0-85184972028 (Scopus ID)
Conference
2024 IEEE Winter Conference on Applications of Computer Vision (WACV 2024), Waikoloa, United States of America, January 4-8, 2024
Note

Part of ISBN 9798350318920

QC 20240718

Available from: 2024-07-18. Created: 2024-07-18. Last updated: 2025-12-08. Bibliographically approved.
Christiansen, F., Konuk, E., Raju, A., Welch, R., Huix, J. P., Czekierdowski, A., . . . Epstein, E. International multicenter validation of AI-driven ultrasound detection of ovarian cancer.
(English). Manuscript (preprint) (Other academic).
Abstract [en]

(Abstract identical to the published Nature Medicine version listed above.)

Keywords
Deep learning, Generalization, External validity, Ultrasound, Ovarian cancer
National Category
Computer graphics and computer vision
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-354833 (URN)
Note

QC 20241015

Accepted for publication

Available from: 2024-10-14. Created: 2024-10-14. Last updated: 2025-02-07. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0009-0008-4117-1638