MAMOC: MRI Motion Correction via Masked Autoencoding
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST). KTH, Centres, Science for Life Laboratory, SciLifeLab.
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST). KTH, Centres, Science for Life Laboratory, SciLifeLab. ORCID iD: 0000-0002-6002-0973
KTH, Centres, Science for Life Laboratory, SciLifeLab. KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST). ORCID iD: 0000-0002-6163-191X
(English) Manuscript (preprint) (Other academic)
Abstract [en]

The presence of motion artifacts in magnetic resonance imaging (MRI) scans poses a significant challenge, where even minor patient movements can lead to artifacts that may compromise the scan's utility. This paper introduces MAsked MOtion Correction (MAMOC), a novel method designed to address the issue of Retrospective Artifact Correction (RAC) in motion-affected MRI brain scans. MAMOC uses masked autoencoding self-supervision, transfer learning, and test-time prediction to efficiently remove motion artifacts, producing high-fidelity, native-resolution scans. Until recently, realistic, openly available paired artifact presentations for training and evaluating retrospective motion correction methods did not exist, making it necessary to simulate motion artifacts. Leveraging the MR-ART dataset and larger unlabeled datasets (ADNI, OASIS-3, IXI), this work is the first to evaluate motion correction in MRI scans using real motion data on a public dataset, showing that MAMOC achieves improved performance over existing motion correction methods.
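The self-supervision signal behind masked autoencoding is simple to illustrate: hide a large fraction of non-overlapping 3D patches of a volume and train a network to reconstruct the hidden content. The sketch below shows only the patch-masking step in NumPy; the function name, 8-voxel patch size, and 75% mask ratio are illustrative assumptions, not details of the paper's actual implementation.

```python
import numpy as np

def mask_patches(volume, patch=8, mask_ratio=0.75, rng=None):
    """Zero out a random subset of non-overlapping 3D patches.

    Returns the masked volume and a boolean grid marking which
    patches were hidden (the reconstruction targets).
    """
    rng = rng or np.random.default_rng(0)
    # Number of patches along each axis (assumes divisible shapes).
    d, h, w = (s // patch for s in volume.shape)
    n = d * h * w
    hidden = np.zeros(n, dtype=bool)
    hidden[rng.choice(n, size=int(mask_ratio * n), replace=False)] = True
    grid = hidden.reshape(d, h, w)

    masked = volume.copy()
    for i, j, k in np.argwhere(grid):
        masked[i*patch:(i+1)*patch,
               j*patch:(j+1)*patch,
               k*patch:(k+1)*patch] = 0.0
    return masked, grid

# Toy 32^3 "scan": 4x4x4 = 64 patches, 48 of which get hidden.
vol = np.random.default_rng(1).normal(size=(32, 32, 32)).astype(np.float32)
masked, grid = mask_patches(vol, patch=8, mask_ratio=0.75)
```

During pretraining, the loss would then be computed only on the voxels inside the hidden patches, forcing the encoder to infer anatomy from sparse visible context.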

National Category
Computer graphics and computer vision
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-356592
DOI: 10.48550/arXiv.2405.14590
OAI: oai:DiVA.org:kth-356592
DiVA, id: diva2:1914472
Note

QC 20241120

Available from: 2024-11-19 Created: 2024-11-19 Last updated: 2025-02-07 Bibliographically approved
In thesis
1. Generative AI for Artifact Correction and Privacy-Secure Medical Imaging
2024 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Magnetic Resonance Imaging (MRI) is a widely used non-invasive technology that provides detailed visualizations of internal body structures, particularly soft tissues such as the brain, muscles, and internal organs. However, despite its crucial role in modern healthcare, MRI faces significant obstacles that limit its effectiveness. One major challenge is that MRI scans can unintentionally reveal identifiable facial features, creating potential privacy risks if this information is misused for re-identification. This raises serious concerns about data security in an increasingly digital healthcare landscape. Additionally, MRI scans require patients to remain still during imaging, as even slight movements can degrade image quality or, in severe cases, render scans unusable, leading to costly re-scans and patient discomfort.

To address these challenges, this thesis leverages generative modeling techniques using artificial neural networks. For the first challenge, we introduce a novel data-driven remodeling-based approach to visually de-identify MRI scans while preserving medically relevant information, such as the brain. Conventional methods that remove sensitive regions (e.g. the face or ears) often disrupt downstream analysis by introducing a domain shift—a significant alteration in data distribution that hampers diagnostic accuracy. Our approach generates a realistic remodeling of these sensitive areas, maintaining privacy while preserving diagnostic utility and downstream task performance.

For the second challenge, we develop techniques to remove artifacts from MRI scans, allowing the recovery of scans that would otherwise be unusable. By integrating 3D vision transformers with self-supervised and transfer learning, our methods enhance image quality while minimizing computational cost. This reduces the need for re-scanning, improves diagnostic accuracy, and enhances patient comfort by streamlining the MRI process.

Our findings highlight the transformative potential of generative modeling in medical imaging. By addressing both privacy risks and artifact removal, this research establishes new standards for secure, efficient, and precise diagnostics. With the growing integration of AI in healthcare, these innovations lay the groundwork for scalable, privacy-conscious, and accessible diagnostic practices across various imaging modalities.

Abstract [sv]

Magnetic resonance imaging (MRI) is a widely used, non-invasive technique that provides detailed visualizations of the body's internal structures, particularly soft tissues such as the brain, muscles, and organs. Despite its advantages, MRI entails two critical challenges. First, the reconstruction of 3D images from individual slices can expose identifiable facial features, posing a privacy risk if these images are misused for re-identification through facial recognition or public databases. This raises serious concerns about data security in an increasingly digital healthcare landscape. Second, MRI scans require patients to remain still during imaging, as even small movements can degrade image quality or, in severe cases, render scans unusable, leading to costly re-scans and patient discomfort.

To meet these challenges, this thesis leverages generative modeling and artificial neural networks. For the first challenge, we present a new method for visually de-identifying MRI scans through remodeling, while preserving medically relevant information such as the brain. Conventional methods that remove sensitive regions (e.g. the face or ears) often disrupt downstream analyses by introducing a domain shift, a significant change in the data distribution. Our method remodels privacy-sensitive regions, maintaining both privacy and diagnostic utility as well as downstream task performance.

For the second challenge, we develop advanced techniques for removing motion artifacts from MRI scans, enabling the recovery of scans that would otherwise be unusable. By integrating 3D vision transformers with self-supervised and transfer learning techniques, our methods improve image quality while minimizing the computational burden. This reduces the need for re-scans, improves diagnostic accuracy, and increases patient comfort by streamlining the MRI process.

Our results highlight the transformative potential of generative modeling in medical imaging. By addressing both privacy risks and artifact removal, this research establishes new standards for secure, efficient, and precise diagnostics. With the growing integration of AI in healthcare, these innovations lay the groundwork for scalable, privacy-preserving, and accessible diagnostic practices across various imaging modalities.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2024. p. vii, 114
Series
TRITA-EECS-AVL ; 2024:89
Keywords
Biomedical Imaging, Generative Modeling, Magnetic Resonance Imaging, De-identification, Privacy, Vision Transformers, Biomedicinsk avbildning, Generativ modellering, Magnetisk resonanstomografi, Avidentifiering, Vision Transformers
National Category
Medical Imaging
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-356604
ISBN: 978-91-8106-118-5
Public defence
2024-12-13, https://kth-se.zoom.us/j/69355780837, F2, Lindstedtsvägen 16, Stockholm, 13:00 (English)
Note

QC 20241120

Available from: 2024-11-20 Created: 2024-11-20 Last updated: 2025-02-09 Bibliographically approved

Open Access in DiVA

fulltext (4773 kB), 53 downloads
File information
File name: FULLTEXT01.pdf
File size: 4773 kB
Checksum (SHA-512): a06921301a15d2ff5df2dd106937941d5c2f0383603c1306d84171cf5007a3aabedfd9af76b0822b36685750f4ea4b905c4c43a25f24c54033273234b9320ed7
Type: fulltext
Mimetype: application/pdf


Authority records

Van der Goten, Lennart Alexander; Guo, Jingyu; Smith, Kevin

Total: 53 downloads

Total: 351 hits