Advanced Machine Learning Methods for Oncological Image Analysis
KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Medical Imaging. Karolinska Institutet (Division of Biomedical Imaging). ORCID iD: 0000-0001-5125-4682
2022 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in image acquisition and hardware development over the past three decades have led to modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Medical imaging has therefore become increasingly crucial in clinical oncology routines, supporting screening, diagnosis, treatment monitoring, and non- or minimally invasive evaluation of disease prognosis. This essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on the one hand, and the challenges of manually examining such an abundance of data on the other, the development of computerized tools to automatically or semi-automatically examine image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow.

This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis.

The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and to integrate this pipeline into segmentation networks to improve segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework over the state of the art. Study II addresses a crucial challenge faced by supervised segmentation models: their dependence on large-scale labeled datasets. In this study, an unsupervised segmentation approach based on the concept of image inpainting is proposed to segment lung and head-neck tumors in images from single and multiple modalities. The proposed autoinpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy.
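
To illustrate the reconstruction-error idea behind the unsupervised approach of Study II, the following Python sketch thresholds the residual between an image and a tumor-free reconstruction produced by an inpainting model. The function name, the smoothing step, and the threshold value are illustrative assumptions, not the implementation used in the thesis.

```python
# Minimal sketch of reconstruction-error based tumor segmentation.
# `tumor_free` is assumed to come from an inpainting model (not shown here).
import numpy as np
from scipy.ndimage import gaussian_filter

def residual_segmentation(image: np.ndarray, tumor_free: np.ndarray,
                          sigma: float = 2.0, threshold: float = 0.2) -> np.ndarray:
    """Return a boolean tumor mask by thresholding the smoothed absolute
    difference between the input image and its tumor-free reconstruction."""
    residual = np.abs(image - tumor_free)        # large where pathology was removed
    residual = gaussian_filter(residual, sigma)  # suppress pixel-level noise
    return residual > threshold

# Usage (assuming `ct` is a normalized array and `inpaint_tumor_free` is a
# hypothetical model returning a pathology-free reconstruction of `ct`):
# mask = residual_segmentation(ct, inpaint_tumor_free(ct))
```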

Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account local intra-nodule heterogeneities and global contextual information. Study IV compares the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end Deep Learning (DL) models, and deep feature-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing the learned deep features with radiomic features to boost classification performance.
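
The feature-fusion idea of Study IV can be sketched, under the assumption that radiomic and deep feature matrices have already been extracted per nodule, as a simple concatenation followed by a conventional classifier evaluated with cross-validated AUROC. This is a hedged illustration, not the published pipeline.

```python
# Illustrative sketch of fusing radiomic and deep features for nodule
# malignancy classification; the feature matrices and labels are assumed inputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def fused_malignancy_auc(radiomics: np.ndarray, deep: np.ndarray,
                         labels: np.ndarray, folds: int = 5) -> float:
    """Mean cross-validated AUROC of a random forest trained on the
    concatenation of radiomic and deep feature vectors."""
    fused = np.concatenate([radiomics, deep], axis=1)  # (n_nodules, n_features)
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    scores = cross_val_score(clf, fused, labels, cv=folds, scoring="roc_auc")
    return float(scores.mean())
```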

Study V focuses on early assessment of lung tumor response to treatment by proposing a novel, physiologically interpretable feature set. This feature set was employed to quantify changes in tumor characteristics from longitudinal PET-CT scans in order to predict the overall survival status of patients two years after the final treatment session. The discriminative power of the introduced imaging biomarkers was compared against conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head-neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical features showed greater predictive power in inter-dataset analyses.
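
Survival models such as those compared in Studies V and VI are commonly ranked by the concordance index (C-index). The sketch below is a plain reference implementation of Harrell's C-index for illustration only; it is not the evaluation code used in the studies.

```python
# Harrell's concordance index: fraction of comparable patient pairs in which
# the higher predicted risk corresponds to the shorter observed survival time.
import numpy as np

def concordance_index(time: np.ndarray, event: np.ndarray, risk: np.ndarray) -> float:
    """time: follow-up times; event: 1 if death observed, 0 if censored;
    risk: predicted risk scores (higher means worse prognosis)."""
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        if not event[i]:              # censored patients cannot anchor a pair
            continue
        for j in range(n):
            if time[i] < time[j]:     # pair is comparable: i failed before j
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else float("nan")
```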

In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine learning techniques into methods for the quantitative assessment of tumor characteristics, contributing to the essential procedures of cancer diagnosis and prognosis.

Abstract [sv]

Cancer is a global health challenge, estimated to account for roughly 10 million deaths worldwide in 2020 alone. Advances in medical image acquisition and hardware development over the past three decades have paved the way for modern medical imaging systems whose resolution makes it possible to capture information about the anatomy, physiology, function, and metabolism of tumors. Medical image analysis has therefore taken on an increasingly important role in daily clinical routines in oncology, including screening, diagnosis, treatment follow-up, and non-invasive evaluation of disease prognosis. Healthcare's need for medical images has led to an enormous volume of medical images at every modern hospital. Given the important role that medical image data plays in today's healthcare, and the amount of manual work required to analyze the data generated every day, there has long been great interest in developing digital tools that automatically or semi-automatically analyze the image data. A range of machine learning tools has therefore been developed for the analysis of oncological data, with the aim of taking on clinicians' repetitive everyday tasks.

This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, the thesis is based on six papers: the first two focus on presenting new methods for tumor segmentation, while the remaining four aim to develop quantitative biomarkers for cancer diagnosis and prognosis.

The main objective of Study I was to develop a deep learning pipeline that captures the appearance of lung pathologies (including lung tumors) and to integrate it with deep segmentation networks, using the output of the first network to improve segmentation quality. The proposed pipeline was tested on several datasets, and the numerical analyses show superior results for the proposed prior-aware deep learning method. Study II addresses an important problem faced by supervised segmentation methods: their dependence on very large annotated datasets. In this study, an unsupervised segmentation method based on the concept of inpainting is proposed to segment lung and head and neck tumors in images from different modalities. The proposed method outperforms a family of well-established unsupervised segmentation models.

Studies III and IV attempt to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose CT (LDCT) images. In Study III, a deep classification network is proposed whose architecture allows both local analysis of intra-nodule heterogeneities and a global view from contextual information. Study IV evaluates carefully selected methods for extracting image features: conventional radiomics methods are compared with features from neural networks, and with a combination of both, on the same dataset. The results show that combining deep features with radiomics can improve classification performance.

Study V focuses on early assessment of lung tumor response to treatment by developing a set of new, physiologically interpretable features. The presented method was used to quantify changes in tumor characteristics in PET-CT examinations in order to predict patient outcome two years after the last treatment. The method was compared against conventional radiomics, and the evaluation shows that the proposed method yields improved results. In contrast to Study V, which addresses a binary classification problem, Study VI attempts to predict the survival rate of patients with lung and head and neck cancer by investigating neural networks with spherical convolution operations. The method is compared against, among others, radiomics, and shows comparable results in intra-dataset analyses but better results in inter-dataset analyses.

In summary, the six studies make use of different medical imaging systems and a range of image processing and machine learning techniques to develop tools that quantify tumor characteristics and can support the establishment of diagnosis and prognosis.

Place, publisher, year, edition, pages
Stockholm: Universitetsservice US-AB, 2022. p. 147
Series
TRITA-CBH-FOU ; 2022:38
Keywords [en]
Medical Image Analysis, Machine Learning, Deep Learning, Survival Analysis, Early Response Assessment, Tumor Classification, Tumor Segmentation
National Category
Medical Imaging
Research subject
Medical Technology
Identifiers
URN: urn:nbn:se:kth:diva-316665
ISBN: 978-91-8040-313-9 (electronic)
OAI: oai:DiVA.org:kth-316665
DiVA, id: diva2:1690639
Public defence
2022-09-30, https://kth-se.zoom.us/j/64637374028, T2, Hälsovägen 11C, Huddinge, 13:00 (English)
Note

QC 2022-08-29

Available from: 2022-08-29. Created: 2022-08-26. Last updated: 2025-02-09. Bibliographically approved.
List of papers
1. Prior-aware autoencoders for lung pathology segmentation
2022 (English). In: Medical Image Analysis, ISSN 1361-8415, E-ISSN 1361-8423, Vol. 80, p. 102491, article id 102491. Article in journal (Refereed), Published.
Abstract [en]

Segmentation of lung pathology in Computed Tomography (CT) images is of great importance for lung disease screening. However, the presence of different types of lung pathologies with a wide range of heterogeneities in size, shape, location, and texture, together with their visual similarity to surrounding tissues, makes reliable automatic lesion segmentation challenging. To improve segmentation performance, we propose a deep learning framework comprising a Normal Appearance Autoencoder (NAA) model that learns the distribution of healthy lung regions and reconstructs pathology-free images from the corresponding pathological inputs by replacing the pathological regions with the characteristics of healthy tissues. Detected regions that represent prior information regarding the shape and location of pathologies are then integrated into a segmentation network to guide the attention of the model towards more meaningful delineations. The proposed pipeline was tested on three types of lung pathologies, including pulmonary nodules, Non-Small Cell Lung Cancer (NSCLC), and Covid-19 lesions, on five comprehensive datasets. The results show the superiority of the proposed prior model, which outperformed the baseline segmentation models in all cases by significant margins. On average, adding the prior model improved the Dice coefficient for the segmentation of lung nodules by 0.038, NSCLCs by 0.101, and Covid-19 lesions by 0.041. We conclude that the proposed NAA model produces reliable prior knowledge regarding lung pathologies, and that integrating such knowledge into a segmentation network leads to more accurate delineations.
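
The prior-integration step described above can be pictured with a minimal PyTorch-style sketch in which the NAA reconstruction is turned into a residual prior and concatenated to the input of the segmentation network. Here `naa` and `seg_net` are hypothetical placeholders for trained models; the actual architecture and training procedure are described in the paper, not here.

```python
# Conceptual sketch: use the difference between the input and its
# "normal appearance" reconstruction as an extra input channel.
import torch

def prior_aware_forward(ct: torch.Tensor, naa: torch.nn.Module,
                        seg_net: torch.nn.Module) -> torch.Tensor:
    """ct: (B, 1, H, W) CT batch. Returns pathology segmentation logits."""
    with torch.no_grad():
        healthy = naa(ct)                  # pathology-free reconstruction
    prior = (ct - healthy).abs()           # large where pathology was replaced
    return seg_net(torch.cat([ct, prior], dim=1))  # seg_net expects 2 channels
```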

Place, publisher, year, edition, pages
Elsevier BV, 2022
Keywords
Lung pathology segmentation, Healthy image generation, Prior-aware deep learning
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:kth:diva-314851 (URN)
10.1016/j.media.2022.102491 (DOI)
000807749000003 (Web of Science)
35653902 (PubMedID)
2-s2.0-85131059087 (Scopus ID)
Note

QC 20220627

Available from: 2022-06-27. Created: 2022-06-27. Last updated: 2023-03-07. Bibliographically approved.
2. Unsupervised Tumor Segmentation
(English). Manuscript (preprint) (Other academic)
Keywords
Unsupervised tumor segmentation, anomaly detection
National Category
Medical Imaging
Research subject
Applied Medical Technology
Identifiers
urn:nbn:se:kth:diva-316664 (URN)
Note

QC 20220829

Available from: 2022-08-26. Created: 2022-08-26. Last updated: 2025-02-09. Bibliographically approved.
3. Benign-malignant pulmonary nodule classification in low-dose CT with convolutional features
2021 (English). In: Physica Medica (Testo stampato), ISSN 1120-1797, E-ISSN 1724-191X, Vol. 83, p. 146-153. Article in journal (Refereed), Published.
Abstract [en]

Purpose: Low-Dose Computed Tomography (LDCT) is the most common imaging modality for lung cancer diagnosis. The presence of nodules in the scans does not necessarily portend lung cancer, as there is an intricate relationship between nodule characteristics and lung cancer. Therefore, benign-malignant pulmonary nodule classification at early detection is a crucial step to improve diagnosis and prolong patient survival. The aim of this study is to propose a method for predicting nodule malignancy based on deep abstract features.

Methods: To efficiently capture both intra-nodule heterogeneities and contextual information of the pulmonary nodules, a dual-pathway model was developed to integrate the intra-nodule characteristics with contextual attributes. The proposed approach was implemented with both supervised and unsupervised learning schemes. A random forest model was added as a second component on top of the networks to generate the classification results. The discrimination power of the model was evaluated by calculating the Area Under the Receiver Operating Characteristic Curve (AUROC) metric.

Results: Experiments on 1297 manually segmented nodules show that the integration of context and target supervised deep features has great potential for accurate prediction, resulting in a discrimination power of 0.936 in terms of AUROC, which outperformed the classification performance of the Kaggle 2017 challenge winner.

Conclusion: Empirical results demonstrate that integrating nodule target and context images into a unified network improves the discrimination power, outperforming the conventional single pathway convolutional neural networks.
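
A dual-pathway design of the kind described above can be sketched as two small convolutional branches, one for a tight nodule crop and one for a larger context crop, whose features are concatenated before classification. The layer sizes below are illustrative assumptions, not those of the published model; the concatenated features could equally be passed to a random forest, as in the paper.

```python
# Illustrative dual-pathway classifier: target (nodule) and context branches.
import torch
import torch.nn as nn

class DualPathwayNet(nn.Module):
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        def branch() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, feat_dim), nn.ReLU(),
            )
        self.target_branch = branch()    # intra-nodule heterogeneity
        self.context_branch = branch()   # surrounding anatomy
        self.head = nn.Linear(2 * feat_dim, 1)

    def forward(self, target: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.target_branch(target),
                           self.context_branch(context)], dim=1)
        return self.head(feats)          # malignancy logit
```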

Place, publisher, year, edition, pages
Elsevier BV, 2021
Keywords
Pulmonary nodule, Benign-malignant classification, Deep features
National Category
Medical Imaging
Research subject
Medical Technology
Identifiers
urn:nbn:se:kth:diva-296814 (URN)
10.1016/j.ejmp.2021.03.013 (DOI)
000657712600001 (Web of Science)
33774339 (PubMedID)
2-s2.0-85103089344 (Scopus ID)
Note

QC 20210720

Available from: 2021-06-10. Created: 2021-06-10. Last updated: 2025-02-09. Bibliographically approved.
4. A Comparative Study of Radiomics and Deep-Learning Based Methods for Pulmonary Nodule Malignancy Prediction in Low Dose CT Images
2021 (English). In: Frontiers in Oncology, E-ISSN 2234-943X, Vol. 11, article id 737368. Article in journal (Refereed), Published.
Abstract [en]

Objectives: Both radiomics and deep learning methods have shown great promise in predicting lesion malignancy in various image-based oncology studies. However, it is still unclear which method to choose for a specific clinical problem given access to the same amount of training data. In this study, we compare the performance of a series of carefully selected conventional radiomics methods, end-to-end deep learning models, and deep-feature-based radiomics pipelines for pulmonary nodule malignancy prediction on an open database that consists of 1297 manually delineated lung nodules.

Methods: Conventional radiomics analysis was conducted by extracting standard handcrafted features from target nodule images. Several end-to-end deep classifier networks, including VGG, ResNet, DenseNet, and EfficientNet, were employed to identify lung nodule malignancy as well. In addition to the baseline implementations, we also investigated the importance of feature selection and class balancing, as well as separating the features learned in the nodule target region and the background/context region. By pooling the radiomics and deep features together in a hybrid feature set, we investigated the compatibility of these two sets with respect to malignancy prediction.

Results: The best baseline conventional radiomics model, deep learning model, and deep-feature-based radiomics model achieved AUROC values (mean +/- standard deviation) of 0.792 +/- 0.025, 0.801 +/- 0.018, and 0.817 +/- 0.032, respectively, through 5-fold cross-validation analyses. However, after applying several optimization techniques, such as feature selection and data balancing, as well as adding context features, the corresponding best radiomics, end-to-end deep learning, and deep-feature-based models achieved AUROC values of 0.921 +/- 0.010, 0.824 +/- 0.021, and 0.936 +/- 0.011, respectively. The best prediction accuracy was achieved with the hybrid feature set (AUROC: 0.938 +/- 0.010).

Conclusion: The end-to-end deep learning model outperforms conventional radiomics out of the box without much fine-tuning. On the other hand, fine-tuning the models leads to significant improvements in prediction performance, where the conventional and deep-feature-based radiomics models achieve comparable results. The hybrid radiomics method appears to be the most promising model for lung nodule malignancy prediction in this comparative study.
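
The optimization steps mentioned in the Results (feature selection and class balancing) can be illustrated with a small scikit-learn sketch applied to a generic radiomics feature matrix X and binary malignancy labels y. The choice of selector, classifier, and hyperparameters here is an assumption for illustration, not the pipeline used in the paper.

```python
# Sketch: feature selection + class balancing, scored with 5-fold AUROC.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def tuned_radiomics_auc(X: np.ndarray, y: np.ndarray, k_features: int = 50):
    model = Pipeline([
        ("select", SelectKBest(f_classif, k=min(k_features, X.shape[1]))),
        ("clf", RandomForestClassifier(n_estimators=500,
                                       class_weight="balanced",  # mitigate imbalance
                                       random_state=0)),
    ])
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    return scores.mean(), scores.std()   # report AUROC as mean +/- std
```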

Place, publisher, year, edition, pages
Frontiers Media SA, 2021
Keywords
lung nodule, benign-malignant classification, lung cancer prediction, radiomics, deep classifier
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-307346 (URN)
10.3389/fonc.2021.737368 (DOI)
000738811400001 (Web of Science)
34976794 (PubMedID)
2-s2.0-85122069727 (Scopus ID)
Note

QC 20220124

Available from: 2022-01-24. Created: 2022-01-24. Last updated: 2025-02-07. Bibliographically approved.
5. Early survival prediction in non-small cell lung cancer from PET/CT images using an intra-tumor partitioning method
2019 (English). In: Physica Medica (Testo stampato), ISSN 1120-1797, E-ISSN 1724-191X, Vol. 60, p. 58-65. Article in journal (Refereed), Published.
Place, publisher, year, edition, pages
Elsevier BV, 2019
National Category
Medical Imaging
Research subject
Medical Technology
Identifiers
urn:nbn:se:kth:diva-296808 (URN)
10.1016/j.ejmp.2019.03.024 (DOI)
000464560200009 (Web of Science)
31000087 (PubMedID)
2-s2.0-85063364742 (Scopus ID)
Note

QC 20220405

Available from: 2021-06-10. Created: 2021-06-10. Last updated: 2025-02-09. Bibliographically approved.
6. Spherical Convolutional Neural Networks for Survival Rate Prediction in Cancer Patients
2022 (English). In: Frontiers in Oncology, E-ISSN 2234-943X, Vol. 12, article id 870457. Article in journal (Refereed), Published.
Abstract [en]

Objective: Survival Rate Prediction (SRP) is a valuable tool to assist in the clinical diagnosis and treatment planning of lung cancer patients. In recent years, deep learning (DL) based methods have shown great potential in medical image processing in general and SRP in particular. This study proposes a fully automated method for SRP from computed tomography (CT) images, which combines an automatic segmentation of the tumor and a DL-based method for extracting rotation-invariant features.

Methods: In the first stage, the tumor is segmented from the CT image of the lungs. Here, we use a deep-learning-based method that entails a variational autoencoder to provide more information to a U-Net segmentation model. Next, the 3D volumetric image of the tumor is projected onto 2D spherical maps. These spherical maps serve as inputs to a spherical convolutional neural network that approximates the log risk for a generalized Cox proportional hazards model.

Results: The proposed method is compared with 17 baseline methods that combine different feature sets and prediction models, using three publicly available datasets: Lung1 (n=422), Lung3 (n=89), and H&N1 (n=136). We observed C-index scores comparable to the best-performing baseline methods in a 5-fold cross-validation on Lung1 (0.59 +/- 0.03 vs. 0.62 +/- 0.04), while the method slightly outperforms all baselines in inter-dataset evaluation (0.64 vs. 0.63). The best-performing method from the first experiment dropped to 0.61 and 0.62 on Lung3 and H&N1, respectively.

Discussion: The experiments suggest that the performance of the spherical features is comparable with previous approaches, but they generalize better when applied to unseen datasets. This may imply that orientation-independent shape features are relevant for SRP. The performance of the proposed method was very similar when using manual and automatic segmentation, which makes the proposed model useful in cases where expert annotations are not available or are difficult to obtain.
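
Since the network described above outputs an approximation of the log risk of a Cox proportional hazards model, a natural training objective is the negative log partial likelihood. The sketch below is a generic PyTorch formulation of that loss (ties in survival time are handled naively); it is an assumption-based illustration, not the exact implementation from the paper.

```python
# Generic Cox negative log partial likelihood for a batch of patients.
import torch

def cox_partial_likelihood_loss(log_risk: torch.Tensor,
                                time: torch.Tensor,
                                event: torch.Tensor) -> torch.Tensor:
    """log_risk, time, event: 1-D tensors over a batch; event is 1 for an
    observed death, 0 for censoring."""
    order = torch.argsort(time, descending=True)        # risk set = patients still at risk
    log_risk = log_risk[order]
    event = event[order].float()
    log_cum_risk = torch.logcumsumexp(log_risk, dim=0)  # log-sum-exp over the risk set
    partial_ll = (log_risk - log_cum_risk) * event      # only observed events contribute
    return -partial_ll.sum() / event.sum().clamp(min=1.0)
```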

Place, publisher, year, edition, pages
Frontiers Media SA, 2022
Keywords
lung cancer, tumor segmentation, spherical convolutional neural network, survival rate prediction, deep learning, Cox Proportional Hazards, DeepSurv
National Category
Ophthalmology; Computer Sciences; Gynaecology, Obstetrics and Reproductive Medicine
Identifiers
urn:nbn:se:kth:diva-313029 (URN)
10.3389/fonc.2022.870457 (DOI)
000795556500001 (Web of Science)
35574400 (PubMedID)
2-s2.0-85130209481 (Scopus ID)
Note

QC 20220601

Available from: 2022-06-01. Created: 2022-06-01. Last updated: 2025-02-11. Bibliographically approved.

Open Access in DiVA

Astaraki_Kappa (9989 kB)
File name: FULLTEXT01.pdf. File size: 9989 kB. Type: fulltext. Mimetype: application/pdf.
Checksum (SHA-512): 82170a612a02ef91edd7add267e5521580627f0754484feb5f365d54fa3dd3e5b06f0605924e50f34e85466f3588360409798071a82f4f6cd4939a97e39d3492

Authority records

Astaraki, Mehdi
