KTH Publications (kth.se)
Publications (10 of 978)
Yang, H. & He, S. (2025). 340 mW nanosecond compact 1.7 μm passively Q-switched laser based on a fiber-type saturable absorber with mismatch of mode-field area. Optics and Laser Technology, 184, Article ID 112511.
2025 (English). In: Optics and Laser Technology, ISSN 0030-3992, E-ISSN 1879-2545, Vol. 184, article id 112511. Article in journal (Refereed). Published.
Abstract [en]

Here, we report a passively Q-switched Tm laser at 1720 nm. The Q-switching behavior originates from a piece of Tm-doped fiber, whose broad absorption spectrum covers the 1.7 μm waveband and allows it to serve as a fiber-type saturable absorber for 1.7 μm pulsed lasers. Unlike typical Q-switching systems, in which the gain fiber and the fiber-type saturable absorber contain different rare-earth dopants, identical rare-earth doping in the gain fiber and the saturable absorber cannot by itself support effective Q-switching. To initiate pulsing in this Tm-Tm laser system, we introduce a mismatch of mode-field area between the gain fiber and the fiber-type saturable absorber. Using a ∼50 cm Tm-doped fiber as the saturable absorber, a passively Q-switched 1720 nm laser is realized, producing 340 mW of output power with nanosecond pulse widths (400 ns to 422 ns). Investigation of the Q-switched pulses reveals that their evolution with pump power is not consistent with that of typical passively Q-switched lasers. To understand this unusual behavior, we establish a rate-equation model coupled with the mode-field-area mismatch. Guided by the numerical simulation, a new Q-switched laser cavity supporting dual laser resonances is proposed as a route to passively Q-switched 1.7 μm lasers with higher output power.
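The role of the mode-area mismatch can be illustrated with a dimensionless rate-equation toy model of passive Q-switching. The equations, parameter values, and the mode-area ratio `m` below are illustrative assumptions for the sketch, not the authors' published model.

```python
import numpy as np

def q_switch_pulse(m, dt=1e-3, steps=60_000):
    """Euler-integrate a dimensionless passively Q-switched laser model.

    phi : intracavity photon density
    n   : gain-medium inversion (round-trip gain)
    q   : saturable-absorber loss, assumed to bleach m times faster when
          the mode-field area in the absorber is m times smaller.
    """
    phi, n, q = 1e-6, 2.0, 1.0   # seed photons, initial gain, initial loss
    loss = 0.5                   # fixed (output + parasitic) cavity loss
    hist = np.empty(steps)
    for i in range(steps):
        dphi = phi * (n - q - loss)   # net round-trip gain drives photons
        dn = -phi * n                 # gain saturation by the pulse
        dq = -5.0 * m * phi * q       # absorber bleaching, scaled by m
        phi += dt * dphi
        n += dt * dn
        q += dt * dq
        hist[i] = phi
    return hist

pulse = q_switch_pulse(m=4.0)
print(f"peak photon density {pulse.max():.3f} at step {pulse.argmax()}")
```

In this toy model the absorber saturates before the gain does whenever `m > 1`, so a single giant pulse forms, and a larger mismatch shortens the pulse build-up time.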

Place, publisher, year, edition, pages
Elsevier BV, 2025
National Category
Physical Sciences
Identifiers
urn:nbn:se:kth:diva-359670 (URN); 10.1016/j.optlastec.2025.112511 (DOI); 001413241400001 (); 2-s2.0-85216125088 (Scopus ID)
Note

QC 20250226

Available from: 2025-02-06. Created: 2025-02-06. Last updated: 2025-02-26. Bibliographically approved.
Wang, H., Gong, D., Zhou, R., Liang, J., Zhang, R., Ji, W. & He, S. (2025). A GAN Guided NCCT to CECT Synthesis With an Advanced CNN-Transformer Aggregated Generator. IEEE Access, 13, 72202-72220.
2025 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 13, p. 72202-72220. Article in journal (Refereed). Published.
Abstract [en]

Computed tomography (CT) is essential for diagnosing and managing various diseases, with contrast-enhanced CT (CECT) offering higher-contrast images following contrast agent injection. Nevertheless, contrast agents may cause side effects, so achieving high-contrast CT images without contrast agent injection is highly desirable. The main contributions of this paper are as follows: 1) We design a GAN-guided CNN-Transformer aggregation network called GCTANet for the CECT image synthesis task, and propose a CNN-Transformer Selective Fusion Module (CTSFM) to fully exploit the interaction between local and global information. 2) We propose a two-stage training strategy: we first train a non-contrast CT (NCCT) image synthesis model to deal with the misalignment between NCCT and CECT images, and then train GCTANet to predict real CECT images from synthetic NCCT images. 3) We propose a multi-scale patch hybrid attention block (MSPHAB), consisting of spatial self-attention and channel self-attention in parallel, to obtain enhanced feature representations, together with a spatial-channel information interaction module (SCIM) that fully fuses the two kinds of self-attention information for strong representation ability. We evaluated GCTANet on two private datasets and one public dataset. On the neck dataset, PSNR and SSIM reached 35.46 ± 2.783 dB and 0.970 ± 0.020, respectively; on the abdominal dataset, 25.75 ± 5.153 dB and 0.827 ± 0.073; and on the MRI-CT dataset, 29.61 ± 1.789 dB and 0.917 ± 0.032. In particular, in the area around the heart, where obvious movement and disturbance are unavoidable due to heartbeat and breathing, GCTANet still successfully synthesized high-contrast coronary arteries, demonstrating its potential for assisting coronary artery disease diagnosis. The results demonstrate that GCTANet outperforms existing methods.
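The abstract does not specify the internals of the CTSFM, but the general idea of gating between a local (CNN-style) and a global (Transformer-style) feature branch can be sketched as an elementwise learned mixture. Everything below, including the names `gated_fusion`, `Wl`, and `Wg` and the scalar gate weights, is a hypothetical illustration, not the published module.

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_fusion(local_feat, global_feat, Wl, Wg):
    """Elementwise gated fusion of two feature maps.

    A sigmoid gate decides, per element, how much of each branch to keep,
    so the output is a convex combination of the two inputs.
    """
    gate = 1.0 / (1.0 + np.exp(-(local_feat * Wl + global_feat * Wg)))
    return gate * local_feat + (1.0 - gate) * global_feat

# toy feature maps: (channels, height, width)
local_feat = rng.standard_normal((8, 16, 16))
global_feat = rng.standard_normal((8, 16, 16))
Wl, Wg = rng.standard_normal(2) * 0.1   # scalar stand-ins for learned weights
fused = gated_fusion(local_feat, global_feat, Wl, Wg)
print(fused.shape)
```

Because the gate lies in (0, 1), every fused value stays between the corresponding local and global feature values, which is the defining property of this kind of selective mixing.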

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Image synthesis, Computed tomography, Contrast agents, Medical diagnostic imaging, Transformers, Feature extraction, Image segmentation, Generators, Generative adversarial networks, Training, Medical image synthesis, transformer, CNN, generative adversarial network
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-363550 (URN); 10.1109/ACCESS.2025.3563375 (DOI); 001479442900021 (); 2-s2.0-105003643963 (Scopus ID)
Note

QC 20250519

Available from: 2025-05-19. Created: 2025-05-19. Last updated: 2025-07-07. Bibliographically approved.
Wu, X., Lin, Z., Tang, S., Chen, X., Guo, T. & He, S. (2025). A Multi-Resonant Tunable Fabry-Pérot Cavity for High Throughput Spectral Imaging. Advanced Optical Materials, 13(8), Article ID 2402784.
2025 (English). In: Advanced Optical Materials, ISSN 2162-7568, E-ISSN 2195-1071, Vol. 13, no 8, article id 2402784. Article in journal (Refereed). Published.
Abstract [en]

Spectral imaging technology has gained widespread application across diverse fields due to its ability to capture spatial and spectral information simultaneously. However, conventional spectral scanning methods using single-peak tunable filters face the challenge of low optical throughput. Inspired by Fellgett's advantage in Fourier-transform infrared spectroscopy, this paper proposes a tunable filter with multiple resonances to improve optical throughput. It consists of a simple Fabry-Pérot cavity filled with liquid crystal. An artificial neural network is paired with the filter for spectrum reconstruction. Experimental results show a spectral resolution of 10 nm and a switching time of ≈23 ms between adjacent states. As a demonstration, biological specimens are spectrally imaged under different light conditions with good fidelity. The results suggest that the filter possesses over six times higher optical throughput than a commercial liquid crystal tunable filter (LCTF), leading to better spectrum accuracy for spectral imaging under low-light conditions. The compact and cost-effective design of this tunable filter enables seamless integration into imaging systems, presenting promising prospects for practical applications such as portable health management and food inspection in low-light conditions.
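The multi-resonant behavior of a liquid-crystal-filled Fabry-Pérot cavity follows directly from the standard Airy transmission function: a thick cavity places many near-unity transmission peaks inside one band, which is the source of the throughput advantage. The refractive index, cavity thickness, and mirror reflectance below are illustrative values, not those of the reported device.

```python
import numpy as np

# Airy transmission of an ideal lossless Fabry-Perot cavity
n_lc = 1.6          # effective liquid-crystal refractive index (assumed)
L = 10e-6           # cavity thickness in meters (assumed)
R = 0.8             # mirror reflectance (assumed)
F = 4 * R / (1 - R) ** 2                    # coefficient of finesse

lam = np.linspace(500e-9, 700e-9, 4001)     # wavelength grid, 0.05 nm step
delta = 4 * np.pi * n_lc * L / lam          # round-trip phase
T = 1.0 / (1.0 + F * np.sin(delta / 2) ** 2)

# count resonances: local maxima with near-unity transmission
is_peak = (T[1:-1] > T[:-2]) & (T[1:-1] > T[2:]) & (T[1:-1] > 0.9)
print(f"{is_peak.sum()} transmission peaks between 500 and 700 nm")
```

Every peak passes light simultaneously during a measurement, unlike a single-peak filter, so the integrated throughput grows with the number of resonances while the neural network untangles the multiplexed spectrum afterwards.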

Place, publisher, year, edition, pages
Wiley, 2025
Keywords
artificial neural network, liquid crystal, multispectral imaging, tunable filter
National Category
Atom and Molecular Physics and Optics; Signal Processing
Identifiers
urn:nbn:se:kth:diva-361785 (URN); 10.1002/adom.202402784 (DOI); 001393208400001 (); 2-s2.0-86000715204 (Scopus ID)
Note

QC 20250401

Available from: 2025-03-27. Created: 2025-03-27. Last updated: 2025-04-01. Bibliographically approved.
Si, Y., Lin, Z., Wang, X. & He, S. (2025). A New Hyperspectral Reconstruction Method with Conditional Diffusion Model for Snapshot Spectral Compressive Imaging. IEEE Transactions on Instrumentation and Measurement, 74, Article ID 4506214.
2025 (English). In: IEEE Transactions on Instrumentation and Measurement, ISSN 0018-9456, E-ISSN 1557-9662, Vol. 74, article id 4506214. Article in journal (Refereed). Published.
Abstract [en]

In the coded aperture snapshot spectral imaging (CASSI) system, coded and compressed single-channel measurements must be reconstructed into hyperspectral cubes. Existing discriminative models reconstruct the spectral cube by optimizing the mean squared error (MSE) between the ground truth and the predicted image, employing peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) as metrics to gauge reconstruction quality. However, these indicators often have significant limitations in mimicking human visual perception and in discerning the impact of image distortions on perceived visual quality. In this article, a new model named CASSIDiff is proposed to reconstruct CASSI measurements, achieving advanced results on perceptual evaluation metrics such as learned perceptual image patch similarity (LPIPS) and Fréchet inception distance (FID). The diffusion model, which enjoys high accuracy and reliability in generative tasks, is used for the first time for the hyperspectral reconstruction task. A feature fusion mechanism based on the discrete wavelet transform (DWT) is used to weaken noise interference in the conditional diffusion model. Considering the inter-spectral similarity and long-range dependencies of hyperspectral data, a spatial-spectral attention mechanism is also introduced. Experiments show that CASSIDiff not only outperforms most existing algorithms on simulation datasets but also shows robustness on published real data and on data collected with our home-built CASSI system. The code and models are publicly available at: https://github.com/YifanSi/CASSIDiff.
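The conditional diffusion model itself is beyond the scope of an abstract, but the closed-form forward noising step that any DDPM-style model learns to invert can be sketched. The linear variance schedule and all parameter values below are generic textbook choices, not those of CASSIDiff.

```python
import numpy as np

rng = np.random.default_rng(42)

# linear variance schedule, as in generic DDPM formulations
T_steps = 1000
betas = np.linspace(1e-4, 2e-2, T_steps)
alpha_bar = np.cumprod(1.0 - betas)          # cumulative signal retention

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0): the closed-form forward diffusion step."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = rng.standard_normal(200_000)            # stand-in for a normalized image
x_mid = q_sample(x0, t=500)
print(f"variance at t=500: {x_mid.var():.3f}")
```

For unit-variance data the total variance stays at 1 for every `t`, since the signal term shrinks exactly as the noise term grows; the reverse (reconstruction) network is trained to undo this corruption conditioned on the CASSI measurement.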

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Coded aperture snapshot spectral imaging (CASSI), conditional diffusion model, Fréchet inception distance (FID), hyperspectral reconstruction, learned perceptual image patch similarity (LPIPS)
National Category
Computer graphics and computer vision; Signal Processing; Computer Sciences
Identifiers
urn:nbn:se:kth:diva-362703 (URN); 10.1109/TIM.2025.3551465 (DOI); 001457758700032 (); 2-s2.0-105002391455 (Scopus ID)
Note

QC 20250520

Available from: 2025-04-23. Created: 2025-04-23. Last updated: 2025-05-20. Bibliographically approved.
Si, Y., Li, S. & He, S. (2025). A novel deep learning algorithm for Phaeocystis counting and density estimation based on feature reconstruction and multispectral generator. Neurocomputing, 611, Article ID 128674.
2025 (English). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 611, article id 128674. Article in journal (Refereed). Published.
Abstract [en]

Phaeocystis proliferation is a primary instigator of algal blooms, commonly known as red tides, posing a significant threat to marine life and severely disrupting marine ecosystems. Currently, no effective method exists for estimating Phaeocystis density, underscoring an urgent need for preventative measures against Phaeocystis blooms. Given the challenges posed by the varying sizes and frequent overlapping of Phaeocystis colonies, we propose an innovative counting algorithm that leverages feature-reconstruction and multispectral-generator modules. Using deep learning, our method achieves accurate real-time density estimation and prediction of Phaeocystis colonies. The algorithm operates in two stages: first, a multispectral reconstruction block is trained to function as a multispectral generator; second, spectral and spatial features are integrated to predict density and perform counting. Our approach surpasses existing algorithms in Phaeocystis counting accuracy and demonstrates the utility of multispectral data in enhancing the neural network's ability to discern targets from their background.
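Density-map counting, the general family this predictor belongs to, rests on a simple identity: a map built by stamping one normalized Gaussian kernel per annotated object integrates to the object count. A minimal sketch, with an assumed kernel size and toy colony annotations:

```python
import numpy as np

def gaussian_kernel(size=15, sigma=3.0):
    """Discrete 2-D Gaussian normalized to sum exactly to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def density_map(points, shape=(64, 64)):
    """Stamp one normalized kernel per annotated colony center."""
    dmap = np.zeros(shape)
    k = gaussian_kernel()
    r = k.shape[0] // 2
    for y, x in points:                      # centers kept away from borders
        dmap[y - r:y + r + 1, x - r:x + r + 1] += k
    return dmap

colonies = [(20, 20), (32, 40), (50, 25)]    # toy annotations
dmap = density_map(colonies)
print(f"estimated count: {dmap.sum():.6f}")
```

A network trained to regress such maps can handle overlapping colonies gracefully, because overlapping kernels simply add while the total integral still equals the count.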

Place, publisher, year, edition, pages
Elsevier BV, 2025
Keywords
Deep learning, Density map, Multispectral reconstruction, Phaeocystis counting
National Category
Neurosciences; Computer Sciences
Identifiers
urn:nbn:se:kth:diva-354916 (URN); 10.1016/j.neucom.2024.128674 (DOI); 001333952800001 (); 2-s2.0-85205565142 (Scopus ID)
Note

QC 20241029

Available from: 2024-10-16. Created: 2024-10-16. Last updated: 2024-10-29. Bibliographically approved.
Fang, Y. t., Bu, F. & He, S. (2025). Abnormal Unidirectional Lasing from the Combined Effect of non-Hermitian Modulated Bound States in the Continuum and Fabry–Pérot Resonance. Laser & Photonics reviews, 19(7), Article ID 2400964.
2025 (English). In: Laser & Photonics reviews, ISSN 1863-8880, E-ISSN 1863-8899, Vol. 19, no 7, article id 2400964. Article in journal (Refereed). Published.
Abstract [en]

To transform bound-state-in-the-continuum (BIC)-related unidirectional radiation into BIC-related unidirectional lasing, a 1-D grating with a parity-time (PT)-symmetric configuration is proposed. Through non-Hermitian modulation, the BIC undergoes an asymmetric split in the −k and +k spaces. Abnormal phenomena are also observed: the asymmetric BIC makes the grating either a gain cavity or a loss cavity depending on the incident direction of the plane wave. Under plane-wave incidence, the grating exhibits Fano resonance with energy conservation. However, for diverging light from a line source or a Gaussian source, the coupling of the gain cavity and the loss cavity produces a new phenomenon: unidirectional, single-mode lasing with an interesting wavefront transformation from a diverging wave to a unidirectional plane wave. The physical mechanism is explained by the joint effect of the asymmetric PT-BICs and cavity resonance.

Place, publisher, year, edition, pages
Wiley, 2025
Keywords
bound states in the continuum, cavity, PT-symmetry, unidirectional lasing
National Category
Other Physics Topics; Atom and Molecular Physics and Optics
Identifiers
urn:nbn:se:kth:diva-362524 (URN); 10.1002/lpor.202400964 (DOI); 001389871300001 (); 2-s2.0-105001871959 (Scopus ID)
Note

QC 20250422

Available from: 2025-04-16. Created: 2025-04-16. Last updated: 2025-04-22. Bibliographically approved.
Farooq, S., He, H., Guo, D., Feng, Y., Hang, J., Kong, D. & He, S. (2025). Advanced autism detection and visualization through XGBoost algorithm for fNIRS hemo-dynamic signals. Expert systems with applications, 275, Article ID 127013.
2025 (English). In: Expert systems with applications, ISSN 0957-4174, E-ISSN 1873-6793, Vol. 275, article id 127013. Article in journal (Refereed). Published.
Abstract [en]

Early detection of Autism Spectrum Disorder (ASD), a neurodevelopmental condition characterized by impairments in social communication, is critical for prompt intervention and care. Recent advancements in digital medicine have significantly enhanced the precision and efficiency of ASD diagnosis, management, and care coordination. These technologies offer considerable potential for optimizing treatment pathways and improving patient outcomes. However, challenges remain in sustaining long-term user engagement and effectively integrating these innovations into routine clinical practice for ASD management. In this study, we report the findings of a clinic-based, prospective investigation evaluating the diagnostic validity of ASD detection using the XGBoost algorithm. Of the 51 cases assessed, 24 were diagnosed with ASD, while 27 were identified as having developmental delay without autism. Functional near-infrared spectroscopy (fNIRS) was employed to measure resting-state hemodynamic fluctuations in the bilateral temporal lobes, revealing connectivity differences through computer vision and machine learning analysis. Agglomerative hierarchical clustering (AgHC) was used to evaluate the active coupling between fluctuations of deoxygenated hemoglobin (HbR) and oxygenated hemoglobin (HbO) across specific channels, elucidating inter-channel relationships in neurovascular coupling. Using a range of significant features, the algorithm achieved robust diagnostic performance, with an area under the receiver operating characteristic curve of 0.99 ± 1, a sensitivity of 98%, a specificity of 95%, a negative predictive value (NPV) of 98%, and a positive predictive value (PPV) of 95%, indicating high accuracy in distinguishing relevant cases. These findings underscore the potential of digital medicine to offer an objective and scalable framework for the diagnosis of autism in real-world clinical environments.
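The reported sensitivity, specificity, NPV, and PPV all derive from a 2x2 confusion matrix. A quick sketch of those standard definitions follows; the counts fed in are made-up illustrations consistent only with the study's cohort sizes (24 ASD, 27 non-ASD), not its actual results.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # recall on true-positive cases
        "specificity": tn / (tn + fp),   # recall on true-negative cases
        "ppv": tp / (tp + fp),           # precision of a positive call
        "npv": tn / (tn + fn),           # precision of a negative call
    }

# illustrative counts only, chosen to match the 24 / 27 cohort split
m = diagnostic_metrics(tp=23, fp=1, tn=26, fn=1)
print({k: round(v, 3) for k, v in m.items()})
```

Reporting all four together matters in small clinical cohorts: PPV and NPV depend on the class balance, so they complement sensitivity and specificity rather than restating them.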

Place, publisher, year, edition, pages
Elsevier BV, 2025
Keywords
Autism, fNIRS, Machine learning, UMAP, XGBoost algorithm
National Category
Psychiatry
Identifiers
urn:nbn:se:kth:diva-361156 (URN); 10.1016/j.eswa.2025.127013 (DOI); 001439979800001 (); 2-s2.0-85219378347 (Scopus ID)
Note

QC 20250313

Available from: 2025-03-12. Created: 2025-03-12. Last updated: 2025-12-08. Bibliographically approved.
Yu, Y., Yang, A., Liao, J., Evans, J., Zhang, R., Shuang, E. & He, S. (2025). Advances in Optical Techniques for Tumor Detection. Advanced Optical Materials, 13(32).
2025 (English). In: Advanced Optical Materials, ISSN 2162-7568, E-ISSN 2195-1071, Vol. 13, no 32. Article, review/survey (Refereed). Published.
Abstract [en]

The critical importance of early cancer detection in improving patient survival rates has driven substantial innovation in diagnostic methodologies. Optical detection technologies, renowned for their superior sensitivity, molecular specificity, and non-invasive or minimally invasive operation, have emerged as critical tools for tumor detection. This review systematically examines the use of optical techniques in tumor recognition and diagnosis, with particular emphasis on five technologies: Raman spectroscopy, fluorescence sensors, fiber-optic biosensors, photoacoustic imaging systems, and colorimetric detection platforms. Methods for the direct detection of tumor cells and tumor tissue are reviewed first, followed by indirect detection methods based on, e.g., cell-derived components, the peripheral microenvironment, and molecular biomarkers such as proteins or nucleic acids. The review evaluates recent technological progress, identifies key challenges and clinical barriers, and aims to guide future research toward next-generation optical platforms for accurate and early cancer diagnosis.

Place, publisher, year, edition, pages
Wiley, 2025
Keywords
colorimetric sensors, fluorescence sensors, optical technologies, Raman spectroscopy, tumor detection
National Category
Cancer and Oncology
Identifiers
urn:nbn:se:kth:diva-373132 (URN); 10.1002/adom.202501343 (DOI); 001603568000001 (); 2-s2.0-105020599579 (Scopus ID)
Note

QC 20251120

Available from: 2025-11-20. Created: 2025-11-20. Last updated: 2025-11-20. Bibliographically approved.
Si, Y., Li, S., Wang, X. & He, S. (2025). ASP-Model: An Advanced Deep Learning Framework to Reconstruct Hyperspectral Cubes for Computed Tomography Imaging System. IEEE Transactions on Instrumentation and Measurement, 74, Article ID 5008710.
2025 (English). In: IEEE Transactions on Instrumentation and Measurement, ISSN 0018-9456, E-ISSN 1557-9662, Vol. 74, article id 5008710. Article in journal (Refereed). Published.
Abstract [en]

Computed tomography imaging spectrometry (CTIS) is a snapshot hyperspectral imaging (HSI) technique capable of capturing projections of the target scene at multiple wavelengths in a single exposure. The CTIS inversion problem is very challenging, and solving it from a single snapshot measurement often requires time-consuming iterative algorithms. Moreover, most deep learning-based algorithms in computational imaging require many samples as priors, which imposes a heavy data-collection burden. In this article, to reconstruct hyperspectral cubes from CTIS measurements efficiently, we introduce a new CTIS framework named ASP-Model, based on angular spectrum propagation theory, to model the forward CTIS process and efficiently reconstruct hyperspectral cubes. Specifically, our method acquires simulation data using angular spectrum propagation for training and reconstructs real data captured by our custom-built CTIS system during inference. This framework eliminates the need to acquire extensive real data for network training. Moreover, the proposed network can reconstruct 26 spectral channels from a single measurement and demonstrates state-of-the-art results over existing reconstruction algorithms in both simulation and experiment. We also release a new dataset containing simulated and real CTIS data for public comparison. The code and dataset are publicly available at https://github.com/YifanSi/ASP_Model.
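Angular spectrum propagation, which underpins the forward model here, has a compact FFT implementation: decompose the field into plane waves, advance each by its own phase, and transform back. The grid size, wavelength, propagation distance, and Gaussian input below are arbitrary assumptions for the sketch, not the paper's simulation settings.

```python
import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    """Propagate a scalar field u0 a distance z via the angular spectrum method."""
    n = u0.shape[0]
    k0 = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
    kz_sq = k0**2 - kx**2 - ky**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    h = np.exp(1j * z * kz) * (kz_sq > 0)    # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(u0) * h)

# Gaussian beam on a grid coarse enough that every component propagates
n, dx, lam = 128, 1e-6, 633e-9
x = (np.arange(n) - n // 2) * dx
xx, yy = np.meshgrid(x, x)
u0 = np.exp(-(xx**2 + yy**2) / (2 * (8 * dx) ** 2))
u1 = angular_spectrum(u0, lam, dx, z=50e-6)
print(f"energy ratio after propagation: "
      f"{np.sum(np.abs(u1)**2) / np.sum(np.abs(u0)**2):.6f}")
```

Because the transfer function has unit modulus for propagating components, energy is conserved exactly (by Parseval's theorem) when no evanescent content is present, which makes this a cheap and physically consistent way to generate synthetic training data.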

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Angular spectrum propagation, computed tomography imaging spectrometry (CTIS), deep learning, hyperspectral reconstruction, point spread function (PSF)
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-360889 (URN); 10.1109/TIM.2025.3540121 (DOI); 001506282700023 (); 2-s2.0-85218482161 (Scopus ID)
Note

QC 20250306

Available from: 2025-03-05. Created: 2025-03-05. Last updated: 2025-09-08. Bibliographically approved.
Si, Y. & He, S. (2025). CTISNeRF: Efficient Four-Dimensional Hyperspectral Scene Rendering and Generation with Computed Tomography Imaging Spectrometer. IEEE Sensors Journal, 25(13), 24535-24547.
2025 (English). In: IEEE Sensors Journal, ISSN 1530-437X, E-ISSN 1558-1748, Vol. 25, no 13, p. 24535-24547. Article in journal (Refereed). Published.
Abstract [en]

Hyperspectral data, renowned for its capacity to provide comprehensive spectral detail, is widely applied in a range of low-level and high-level tasks in remote sensing and computer vision. In this paper, we introduce an algorithm, named CTISNeRF, that for the first time leverages snapshot spectral imaging technology to generate four-dimensional hyperspectral-spatial data. This advancement is made possible by the Computed Tomography Imaging Spectrometer (CTIS), a sensor capable of capturing high-resolution spectral and spatial information in a single snapshot. In addition, a 360-degree panoramic hyperspectral dataset has been created and made publicly available. Our approach utilizes data from the CTIS sensor and a zeroth-order feature-sharing mechanism to adeptly learn spectral and spatial characteristics from diverse scenes. This enables the rendering of high-fidelity spectral cubes for novel views, significantly enhancing the quality and detail of hyperspectral imaging. Extensive experiments demonstrate that CTISNeRF not only markedly reduces the expense of data collection but also achieves superior image quality, reaching state-of-the-art performance on metrics such as PSNR, SSIM, and LPIPS. Furthermore, CTISNeRF maintains stable generation capability even when the number of training samples is reduced, showcasing its robustness and efficiency. The associated dataset and our algorithm will be publicly accessible at the following repository: https://github.com/YifanSi/CTISNeRF.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Computed Tomography Imaging Spectrometer, Hyperspectral Imaging, Implicit Neural Representation
National Category
Computer graphics and computer vision; Computer Sciences; Atom and Molecular Physics and Optics
Identifiers
urn:nbn:se:kth:diva-366004 (URN); 10.1109/JSEN.2025.3574423 (DOI); 001523483100005 (); 2-s2.0-105007434429 (Scopus ID)
Note

QC 20250704

Available from: 2025-07-04. Created: 2025-07-04. Last updated: 2025-10-06. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-3401-1125
