A GAN Guided NCCT to CECT Synthesis With an Advanced CNN-Transformer Aggregated Generator
Affiliations
Zhejiang Univ, Natl Engn Res Ctr Opt Instruments, Hangzhou 310058, Peoples R China; Zhejiang Univ, Taizhou Hosp, Linhai 317000, Peoples R China.
Zhejiang Univ, Natl Engn Res Ctr Opt Instruments, Hangzhou 310058, Peoples R China.
Zhejiang Univ, Taizhou Hosp, Linhai 317000, Peoples R China; Key Lab Evidence Based Radiol Taizhou, Linhai 317000, Zhejiang, Peoples R China.
Zhejiang Univ, Taizhou Hosp, Linhai 317000, Peoples R China.
2025 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 13, p. 72202-72220. Article in journal (Refereed). Published.
Abstract [en]

Computed tomography (CT) is essential for diagnosing and managing various diseases, with contrast-enhanced CT (CECT) offering higher-contrast images following contrast agent injection. However, contrast agents may cause side effects, so obtaining high-contrast CT images without contrast agent injection is highly desirable. The main contributions of this paper are as follows: 1) We designed a GAN-guided CNN-Transformer aggregation network, GCTANet, for the CECT image synthesis task, and propose a CNN-Transformer Selective Fusion Module (CTSFM) to fully exploit the interaction between local and global information for CECT image synthesis. 2) We propose a two-stage training strategy: we first train a non-contrast CT (NCCT) image synthesis model to deal with the misalignment between NCCT and CECT images, and then train GCTANet to predict real CECT images from synthetic NCCT images. 3) We propose a multi-scale patch hybrid attention block (MSPHAB) to obtain enhanced feature representations. MSPHAB consists of spatial self-attention and channel self-attention in parallel, together with a spatial-channel information interaction module (SCIM) that fully fuses the two kinds of self-attention information for a strong representation ability. We evaluated GCTANet on two private datasets and one public dataset. On the neck dataset, the PSNR and SSIM achieved were 35.46 ± 2.783 dB and 0.970 ± 0.020, respectively; on the abdominal dataset, 25.75 ± 5.153 dB and 0.827 ± 0.073; and on the MRI-CT dataset, 29.61 ± 1.789 dB and 0.917 ± 0.032. In particular, in the area around the heart, where motion and disturbances from the heartbeat and breathing are unavoidable, GCTANet still successfully synthesized high-contrast coronary arteries, demonstrating its potential for assisting coronary artery disease diagnosis. The results demonstrate that GCTANet outperforms existing methods.
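The parallel spatial and channel self-attention described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the single-head dot-product attention, the flattened feature shape, and the sigmoid-gated fusion standing in for the paper's SCIM are all assumptions made for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_self_attention(x):
    # x: (N, C), N = H*W flattened spatial positions.
    # Attention map over spatial positions: shape (N, N).
    attn = softmax(x @ x.T / np.sqrt(x.shape[1]), axis=-1)
    return attn @ x

def channel_self_attention(x):
    # Attention map over channels: shape (C, C).
    attn = softmax(x.T @ x / np.sqrt(x.shape[0]), axis=-1)
    return x @ attn

def fuse(x):
    # Run both branches in parallel, then combine them with a
    # simple sigmoid gate (an assumption standing in for SCIM).
    s = spatial_self_attention(x)
    c = channel_self_attention(x)
    gate = 1.0 / (1.0 + np.exp(-(s + c)))
    return gate * s + (1.0 - gate) * c

x = np.random.default_rng(0).standard_normal((16, 8))  # 16 positions, 8 channels
y = fuse(x)
print(y.shape)  # → (16, 8)
```

The output keeps the input shape, so such a block can be dropped into a generator between convolutional stages; the actual MSPHAB additionally operates at multiple patch scales.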

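The PSNR figures reported above follow the standard definition, 10·log10(MAX²/MSE). A minimal sketch, in which the 12-bit CT-like dynamic range and the synthetic test images are assumptions for illustration:

```python
import numpy as np

def psnr(ref, test, max_val):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

rng = np.random.default_rng(1)
ref = rng.integers(0, 4096, size=(64, 64))            # 12-bit CT-like image (assumption)
noisy = ref + rng.normal(0.0, 8.0, size=ref.shape)    # mildly perturbed copy
print(psnr(ref, noisy, max_val=4095.0))
```

Higher values mean the synthesized image is closer to the reference; the paper's per-dataset means (e.g. 35.46 dB on the neck dataset) are averages of this quantity over test images.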
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025. Vol. 13, p. 72202-72220
Keywords [en]
Image synthesis, Computed tomography, Contrast agents, Medical diagnostic imaging, Transformers, Feature extraction, Image segmentation, Generators, Generative adversarial networks, Training, Medical image synthesis, transformer, CNN, generative adversarial network
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:kth:diva-363550
DOI: 10.1109/ACCESS.2025.3563375
ISI: 001479442900021
Scopus ID: 2-s2.0-105003643963
OAI: oai:DiVA.org:kth-363550
DiVA, id: diva2:1959039
Note

QC 20250519

Available from: 2025-05-19. Created: 2025-05-19. Last updated: 2025-05-19. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

He, Sailing

Organisation
Electromagnetic Engineering and Fusion Science
