Comparison of a deep learning risk score and standard mammographic density score for breast cancer risk prediction
KTH, Centres, Science for Life Laboratory, SciLifeLab.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0001-5211-6388
2020 (English). In: Radiology, ISSN 0033-8419, E-ISSN 1527-1315, Vol. 294, no 2, p. 265-272. Article in journal (Refereed). Published.
Abstract [en]

Background: Most risk prediction models for breast cancer are based on questionnaires and mammographic density assessments. By training a deep neural network, further information in the mammographic images can be considered.

Purpose: To develop a risk score that is associated with future breast cancer and compare it with density-based models.

Materials and Methods: In this retrospective study, all women aged 40-74 years within the Karolinska University Hospital uptake area in whom breast cancer was diagnosed in 2013-2014 were included along with healthy control subjects. Network development was based on cases diagnosed from 2008 to 2012. The deep learning (DL) risk score, dense area, and percentage density were calculated for the earliest available digital mammographic examination for each woman. Logistic regression models were fitted to determine the association with subsequent breast cancer. False-negative rates were obtained for the DL risk score, age-adjusted dense area, and age-adjusted percentage density.

Results: A total of 2283 women, 278 of whom were later diagnosed with breast cancer, were evaluated. The age at mammography (mean, 55.7 years vs 54.6 years; P < .001), the dense area (mean, 38.2 cm2 vs 34.2 cm2; P < .001), and the percentage density (mean, 25.6% vs 24.0%; P < .001) were higher among women diagnosed with breast cancer than in those without a breast cancer diagnosis. The odds ratios and areas under the receiver operating characteristic curve (AUCs) were higher for the age-adjusted DL risk score than for dense area and percentage density: 1.56 (95% confidence interval [CI]: 1.48, 1.64; AUC, 0.65), 1.31 (95% CI: 1.24, 1.38; AUC, 0.60), and 1.18 (95% CI: 1.11, 1.25; AUC, 0.57), respectively (P < .001 for AUC). The false-negative rate was lower: 31% (95% CI: 29%, 34%), 36% (95% CI: 33%, 39%; P = .006), and 39% (95% CI: 37%, 42%; P < .001); this difference was most pronounced for more aggressive cancers.

Conclusion: Compared with density-based models, a deep neural network can more accurately predict which women are at risk for future breast cancer, with a lower false-negative rate for more aggressive cancers.
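The Materials and Methods describe logistic regression models fitted to age-adjusted image scores, with odds ratios per standard deviation and AUCs used to compare the scores. The following Python sketch is only an illustration of that style of analysis, using synthetic data and assumed variable names; it is not the authors' code or data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: one image-derived score per woman, age at mammography,
# and a binary outcome (1 = later diagnosed with breast cancer).
n = 2283
age = rng.uniform(40, 74, n)
score = rng.normal(0, 1, n) + 0.02 * (age - 55)  # score weakly correlated with age
outcome = rng.binomial(1, 1 / (1 + np.exp(-(0.4 * score - 2.2))))

# Age-adjust the score by regressing out age and standardising the residual,
# so the odds ratio is expressed per one standard deviation of the adjusted score.
slope, intercept = np.polyfit(age, score, 1)
adjusted = score - (slope * age + intercept)
adjusted = (adjusted - adjusted.mean()) / adjusted.std()

# Logistic regression of subsequent breast cancer on the adjusted score.
model = LogisticRegression().fit(adjusted.reshape(-1, 1), outcome)
odds_ratio_per_sd = np.exp(model.coef_[0, 0])
auc = roc_auc_score(outcome, model.predict_proba(adjusted.reshape(-1, 1))[:, 1])
print(f"OR per SD: {odds_ratio_per_sd:.2f}, AUC: {auc:.2f}")

In the study itself the score comes from the deep network or from the density measurements, and the same kind of fit is repeated for each score so the resulting odds ratios, AUCs, and false-negative rates can be compared.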

Place, publisher, year, edition, pages
Radiological Society of North America Inc., 2020. Vol. 294, no 2, p. 265-272.
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
URN: urn:nbn:se:kth:diva-267834
DOI: 10.1148/radiol.2019190872
ISI: 000508455500006
PubMedID: 31845842
Scopus ID: 2-s2.0-85078538925
OAI: oai:DiVA.org:kth-267834
DiVA, id: diva2:1397057
Note

QC 20200227

Available from: 2020-02-27. Created: 2020-02-27. Last updated: 2024-03-15. Bibliographically approved.
In thesis
1. Breast cancer risk assessment and detection in mammograms with artificial intelligence
2024 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Breast cancer, the most common type of cancer among women worldwide, necessitates reliable early detection methods. Although mammography serves as a cost-effective screening technique, its limitations in sensitivity emphasize the need for more advanced detection approaches. Previous studies have relied on breast density, extracted directly from the mammograms, as a primary metric for cancer risk assessment, given its correlation with increased cancer risk and the masking potential of cancer. However, such a singular metric overlooks image details and spatial relationships critical for cancer diagnosis. To address these limitations, this thesis integrates artificial intelligence (AI) models into mammography, with the goal of enhancing both cancer detection and risk estimation. 

In this thesis, we aim to establish a new benchmark for breast cancer prediction using neural networks. Utilizing the Cohort of Screen-Aged Women (CSAW) dataset, which includes mammography images from 2008 to 2015 in Stockholm, Sweden, we develop three AI models to predict inherent risk, cancer signs, and masking potential of cancer. Combined, these models can effectively identify women in need of supplemental screening, even after a clean exam, paving the way for better early detection of cancer. Individually, important progress has been made on each of these component tasks as well. The risk prediction model, developed and tested on a large population-based cohort, establishes a new state of the art in identifying women at elevated risk of developing breast cancer, outperforming traditional density measures. The risk model is carefully designed to avoid conflating image patterns related to early cancer signs with those related to long-term risk. We also propose a method that allows vision transformers to be efficiently trained on, and make use of, high-resolution images, an essential property for models analyzing mammograms. We also develop an approach to predict the masking potential in a mammogram – the likelihood that a cancer may be obscured by neighboring tissue and consequently misdiagnosed. High masking potential can complicate early detection and delay timely interventions. Along with the model, we curate and release a new public dataset which can help speed up progress on this important task.
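As a purely hypothetical illustration of how three such per-woman scores might be combined to flag women for supplemental screening after a negative exam (the score names, thresholds, and combination rule below are assumptions for the sketch, not the decision rule developed in the thesis):

from dataclasses import dataclass

@dataclass
class ScreeningScores:
    risk: float     # long-term inherent risk score in [0, 1]
    signs: float    # suspicion of current cancer signs in [0, 1]
    masking: float  # likelihood that tissue could mask a cancer in [0, 1]

def flag_for_supplemental_screening(s: ScreeningScores,
                                    risk_cut: float = 0.8,
                                    signs_cut: float = 0.5,
                                    masking_cut: float = 0.7) -> bool:
    """Flag a woman if any single score is high, or if elevated risk coincides
    with high masking potential (a cancer could be present yet hidden)."""
    if s.signs >= signs_cut or s.risk >= risk_cut:
        return True
    return s.risk >= 0.5 and s.masking >= masking_cut

# Example: moderately elevated risk in a high-masking breast gets flagged.
print(flag_for_supplemental_screening(ScreeningScores(risk=0.6, signs=0.1, masking=0.85)))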

Through our research, we demonstrate the transformative potential of AI in mammographic analysis. By capturing subtle image cues, AI models consistently exceed the traditional baselines. These advancements not only highlight both the individual and combined advantages of the models, but also signal a transition to an era of AI-enhanced personalized healthcare, promising more efficient resource allocation and better patient outcomes.

Abstract [sv]

Breast cancer, the most common form of cancer among women globally, requires reliable methods for early detection. Although mammography serves as a cost-effective screening technique, its limitations in sensitivity underscore the need for more advanced detection methods. Previous studies have relied on breast density, extracted directly from mammograms, as a primary indicator for risk assessment, given its association with increased cancer risk and the masking potential of cancer. However, such a single indicator overlooks image information and spatial relationships that are critical for cancer diagnosis. To address these limitations, this thesis integrates artificial intelligence (AI) models into mammography, with the goal of improving both cancer detection and risk assessment.

In this thesis, we aim to establish a new standard for breast cancer prediction using neural networks. By utilizing the Cohort of Screen-Aged Women (CSAW) dataset, which includes mammograms from 2008 to 2015 in Stockholm, Sweden, we develop three AI models to predict inherent risk, signs of cancer, and the masking potential of cancer. Taken together, these models can effectively identify women in need of supplemental screening, even after an examination in which the patient was classified as healthy, paving the way for earlier detection of cancer. Individually, important progress has also been made on each model. The risk prediction model, developed and tested on a large population-based cohort, establishes a new state of the art in identifying women at increased risk of developing breast cancer, and performs better than traditional density models. The risk model is carefully designed to avoid conflating image patterns related to early signs of cancer with those related to long-term risk. We also propose a method that allows vision transformers to be efficiently trained on and make use of high-resolution images, an essential property for models that work with mammograms. We also develop a method to predict the masking potential in mammograms, that is, the likelihood that a cancer may be hidden by neighboring tissue and consequently misinterpreted. High masking potential can complicate early detection and delay interventions. Along with the model, we compile and release a new public dataset that can help accelerate progress in this important area.

Through our research, we demonstrate the transformative potential of AI in mammographic analysis. By capturing subtle image cues, AI models consistently outperform traditional baselines. These advances not only highlight the individual and combined advantages of the models, but also signal a paradigm shift toward an era of AI-enhanced personalized healthcare, with the promise of more efficient resource allocation and improved patient outcomes.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2024. p. xi, 61
Series
TRITA-EECS-AVL ; 2024:2
Keywords
Mammography, AI, Breast cancer risk, Breast cancer detection, Mammografi, AI, Bröstcancerrisk, Upptäckt av bröstcancer
National Category
Engineering and Technology; Radiology, Nuclear Medicine and Medical Imaging
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-340723
ISBN: 978-91-8040-783-0
Public defence
2024-01-18, Air & Fire, Science for Life Laboratory, Tomtebodavägen 23A, Solna, 14:00 (English)
Note

QC 20231212

Available from: 2023-12-12. Created: 2023-12-11. Last updated: 2024-01-19. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text, PubMed, Scopus

Authority records

Liu, Yue; Azizpour, Hossein; Smith, Kevin
