A GAN Guided NCCT to CECT Synthesis With an Advanced CNN-Transformer Aggregated Generator
Zhejiang Univ, Natl Engn Res Ctr Opt Instruments, Hangzhou 310058, Peoples R China; Zhejiang Univ, Taizhou Hosp, Linhai 317000, Peoples R China.
Zhejiang Univ, Natl Engn Res Ctr Opt Instruments, Hangzhou 310058, Peoples R China.
Zhejiang Univ, Taizhou Hosp, Linhai 317000, Peoples R China; Key Lab Evidence Based Radiol Taizhou, Linhai 317000, Zhejiang, Peoples R China.
Zhejiang Univ, Taizhou Hosp, Linhai 317000, Peoples R China.
2025 (English) In: IEEE Access, E-ISSN 2169-3536, Vol. 13, pp. 72202-72220. Article in journal (Refereed) Published
Abstract [en]

Computed tomography (CT) is essential for diagnosing and managing various diseases, with contrast-enhanced CT (CECT) offering higher-contrast images after contrast agent injection. However, contrast agents can cause side effects, so obtaining high-contrast CT images without contrast agent injection is highly desirable. The main contributions of this paper are as follows: 1) We design a GAN-guided CNN-Transformer aggregation network, GCTANet, for the CECT image synthesis task, and propose a CNN-Transformer Selective Fusion Module (CTSFM) to fully exploit the interaction between local and global information. 2) We propose a two-stage training strategy: we first train a non-contrast CT (NCCT) image synthesis model to handle the misalignment between NCCT and CECT images, and then train GCTANet to predict real CECT images from the synthetic NCCT images. 3) We propose a multi-scale patch hybrid attention block (MSPHAB), consisting of spatial self-attention and channel self-attention in parallel, to obtain enhanced feature representations, together with a spatial-channel information interaction module (SCIM) that fully fuses the two kinds of self-attention for strong representation ability. We evaluated GCTANet on two private datasets and one public dataset. The PSNR and SSIM were 35.46 ± 2.783 dB and 0.970 ± 0.020 on the neck dataset, 25.75 ± 5.153 dB and 0.827 ± 0.073 on the abdominal dataset, and 29.61 ± 1.789 dB and 0.917 ± 0.032 on the MRI-CT dataset. In particular, around the heart, where motion from the heartbeat and breathing is unavoidable, GCTANet still synthesized high-contrast coronary arteries, demonstrating its potential for assisting coronary artery disease diagnosis. The results show that GCTANet outperforms existing methods.
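The abstract's description of MSPHAB (spatial and channel self-attention running in parallel) and SCIM (fusing the two attention outputs) can be illustrated with a minimal sketch. The PyTorch code below is only an interpretation of that description, not the authors' implementation: the class name ParallelAttentionFusion, the gated fusion rule, and all layer sizes are assumptions.

import torch
import torch.nn as nn


class ParallelAttentionFusion(nn.Module):
    """Illustrative block: spatial and channel self-attention in parallel,
    followed by a gated fusion of the two branches (assumed design)."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Spatial branch: tokens are the H*W positions, embeddings are the channels.
        self.spatial_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # Assumed fusion rule: a per-channel gate computed from both branches.
        self.gate = nn.Sequential(nn.Linear(2 * channels, channels), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        flat = x.flatten(2)                                    # (B, C, H*W)

        # Spatial self-attention over pixel positions.
        tokens = flat.transpose(1, 2)                          # (B, H*W, C)
        spatial, _ = self.spatial_attn(tokens, tokens, tokens)
        spatial = spatial.transpose(1, 2).reshape(b, c, h, w)

        # Channel self-attention via a C x C affinity matrix.
        affinity = torch.softmax(flat @ flat.transpose(1, 2) / (h * w) ** 0.5, dim=-1)
        channel = (affinity @ flat).reshape(b, c, h, w)

        # Gated fusion of the two branches plus a residual connection.
        pooled = torch.cat([spatial.mean(dim=(2, 3)), channel.mean(dim=(2, 3))], dim=1)
        g = self.gate(pooled).view(b, c, 1, 1)
        return g * spatial + (1 - g) * channel + x


# Quick shape check on a random feature map.
block = ParallelAttentionFusion(channels=64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])

In the paper such a block would sit inside the GAN's generator; this sketch covers only the parallel-attention-and-fusion idea.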

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025. Vol. 13, pp. 72202-72220
Keywords [en]
Image synthesis, Computed tomography, Contrast agents, Medical diagnostic imaging, Transformers, Feature extraction, Image segmentation, Generators, Generative adversarial networks, Training, Medical image synthesis, transformer, CNN, generative adversarial network
National subject category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:kth:diva-363550
DOI: 10.1109/ACCESS.2025.3563375
ISI: 001479442900021
Scopus ID: 2-s2.0-105003643963
OAI: oai:DiVA.org:kth-363550
DiVA, id: diva2:1959039
Note

QC 20250519

Available from: 2025-05-19 Created: 2025-05-19 Last updated: 2025-07-07 Bibliographically approved

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text
Scopus

Person

He, Sailing
