Smedby, Örjan, Professor. ORCID iD: orcid.org/0000-0002-7750-1917
Publications (10 of 110)
Yang, Z., Fan, T., Smedby, Ö. & Moreno, R. (2024). 3D Breast Ultrasound Image Classification Using 2.5D Deep learning. In: 17th International Workshop on Breast Imaging, IWBI 2024: . Paper presented at 17th International Workshop on Breast Imaging, IWBI 2024, Chicago, United States of America, Jun 9 2024 - Jun 12 2024. SPIE, 13174, Article ID 131741R.
3D Breast Ultrasound Image Classification Using 2.5D Deep learning
2024 (English). In: 17th International Workshop on Breast Imaging, IWBI 2024, SPIE, 2024, Vol. 13174, article id 131741R. Conference paper, Published paper (Refereed)
Abstract [en]

3D breast ultrasound is a radiation-free and effective imaging technology for breast tumor diagnosis. However, reading a 3D breast ultrasound examination is time-consuming compared to mammography. To reduce the workload of radiologists, we propose a 2.5D deep learning-based breast ultrasound tumor classification system. First, we fine-tuned the pre-trained STU-Net to segment the tumor in 3D. Then, we fine-tuned DenseNet-121 for classification using the 10 slices with the largest tumor area and their adjacent slices. The Tumor Detection, Segmentation, and Classification on Automated 3D Breast Ultrasound (TDSC-ABUS) MICCAI Challenge 2023 dataset was used to train and validate the proposed method, which achieved better performance than both a 3D convolutional neural network model and a radiomics approach.
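The 2.5D slice-selection step can be sketched as follows; the function name, the toy area values and the one-neighbor context window are illustrative assumptions, since the paper does not publish code:

```python
def select_25d_slices(slice_areas, top_k=10, context=1):
    """Pick the top_k slices with the largest segmented tumor area,
    plus their adjacent slices, mimicking a 2.5D sampling strategy."""
    # Rank slice indices by tumor area, largest first.
    ranked = sorted(range(len(slice_areas)),
                    key=lambda i: slice_areas[i], reverse=True)
    selected = set()
    for i in ranked[:top_k]:
        # Include each selected slice and its neighbors within bounds.
        for j in range(i - context, i + context + 1):
            if 0 <= j < len(slice_areas):
                selected.add(j)
    return sorted(selected)

# Toy per-slice tumor areas (mm^2) from a hypothetical 3D segmentation.
areas = [0, 5, 40, 80, 120, 90, 30, 0]
print(select_25d_slices(areas, top_k=3))  # → [2, 3, 4, 5, 6]
```

The 2D slices selected this way can then be fed to an ordinary 2D classifier such as DenseNet-121.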

Place, publisher, year, edition, pages
SPIE, 2024
Series
Proceedings of SPIE - The International Society for Optical Engineering, ISSN 0277-786X ; 13174
Keywords
2.5D, 3D Breast Ultrasound, Deep learning, Tumor Classification
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:kth:diva-348289 (URN); 10.1117/12.3025534 (DOI); 001239315300062 (); 2-s2.0-85195360791 (Scopus ID)
Conference
17th International Workshop on Breast Imaging, IWBI 2024, Chicago, United States of America, Jun 9 2024 - Jun 12 2024
Note

Part of ISBN 978-151068020-3
Available from: 2024-06-20 Created: 2024-06-20 Last updated: 2024-07-05. Bibliographically approved
Nilsson, T., Rasinski, P., Smedby, Ö., Af Burén, S., Sparrelid, E., Löhr, J. M., . . . Holstensson, M. (2024). Acquisition Duration Optimization Using Visual Grading Regression in [68Ga]FAPI-46 PET Imaging of Oncologic Patients. Journal of Nuclear Medicine Technology, 52(3), 221-228
Acquisition Duration Optimization Using Visual Grading Regression in [68Ga]FAPI-46 PET Imaging of Oncologic Patients
2024 (English). In: Journal of Nuclear Medicine Technology, ISSN 0091-4916, E-ISSN 1535-5675, Vol. 52, no 3, p. 221-228. Article in journal (Refereed), Published
Abstract [en]

Fibroblast activation protein is a promising target for oncologic molecular imaging with radiolabeled fibroblast activation protein inhibitors (FAPI) in a large variety of cancers. However, there are as yet no published recommendations on how to set up an optimal imaging protocol for FAPI PET/CT. It is important to optimize the acquisition duration, striving toward an acquisition that is sufficiently short while still providing sufficient image quality for a reliable diagnosis. The aim of this study was to evaluate the feasibility of reducing the acquisition duration of [68Ga]FAPI-46 imaging while maintaining satisfactory image quality, with certainty that the radiologist's ability to make a clinical diagnosis would not be affected. Methods: [68Ga]FAPI-46 PET/CT imaging was performed on 10 patients scheduled for surgical resection of suspected pancreatic cancer, 60 min after administration of 3.6 ± 0.2 MBq/kg. The acquisition time was 4 min/bed position, and the raw PET data were statistically truncated and reconstructed to represent images with acquisition durations of 1, 2, and 3 min/bed position, in addition to the reference images of 4 min/bed position. Four image quality criteria focusing on the ability to distinguish specific anatomic details, as well as perceived image noise and overall image quality, were scored on a 4-point Likert scale and analyzed with mixed-effects ordinal logistic regression. Results: A trend toward increasing image quality scores with increasing acquisition duration was observed for all criteria. For overall image quality, there was no significant difference between 3 and 4 min/bed position, whereas 1 and 2 min/bed position were rated significantly (P < 0.05) lower than 4 min/bed position. For the other criteria, all images with a reduced acquisition duration were rated significantly inferior to images obtained at 4 min/bed position.
Conclusion: The acquisition duration can be reduced from 4 to 3 min/bed position while maintaining satisfactory image quality. Reducing the acquisition duration to 2 min/bed position or lower is not recommended, since it results in images so noisy that clinical interpretation is significantly disrupted.
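The core of visual grading regression is a cumulative-logit (proportional-odds) model for the Likert scores. Below is a minimal fixed-effects sketch with purely hypothetical thresholds and a hypothetical duration coefficient; the actual analysis also includes random effects for patients and observers, which this sketch omits:

```python
import math

def cumulative_probs(thresholds, beta, duration):
    """Proportional-odds (cumulative logit) model:
    P(score <= k) = sigmoid(theta_k - beta * duration)."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    return [sigmoid(t - beta * duration) for t in thresholds]

# Hypothetical parameters: three thresholds separate the 4 Likert levels;
# beta > 0 means longer acquisitions shift scores upward.
thresholds = [-2.0, 0.0, 2.0]
beta = 0.8
p_short = cumulative_probs(thresholds, beta, duration=1)
p_long = cumulative_probs(thresholds, beta, duration=4)
# Longer duration -> lower P(score <= k) -> higher expected quality score.
assert all(a > b for a, b in zip(p_short, p_long))
```

Fitting such a model to the observers' scores yields the significance tests between acquisition durations reported above.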

Place, publisher, year, edition, pages
Society of Nuclear Medicine, 2024
Keywords
acquisition duration, fibroblast activation protein, pancreas, PET, visual grading, [68Ga]FAPI-46
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:kth:diva-353424 (URN); 10.2967/jnmt.123.267156 (DOI); 38627014 (PubMedID); 2-s2.0-85203474557 (Scopus ID)
Available from: 2024-09-19 Created: 2024-09-19 Last updated: 2024-09-24. Bibliographically approved
Klintström, E., Klintström, B., Smedby, Ö. & Moreno, R. (2024). Automated region growing-based segmentation for trabecular bone structure in fresh-frozen human wrist specimens. BMC Medical Imaging, 24(1), Article ID 101.
Automated region growing-based segmentation for trabecular bone structure in fresh-frozen human wrist specimens
2024 (English). In: BMC Medical Imaging, E-ISSN 1471-2342, Vol. 24, no 1, article id 101. Article in journal (Refereed), Published
Abstract [en]

Bone strength depends on both mineral content and bone structure. Measurements of bone microstructure on specimens can be performed by micro-CT. In vivo, such measurements are reliably performed by high-resolution peripheral quantitative computed tomography (HR-pQCT) using dedicated software. In previous studies from our research group, trabecular bone properties in CT data of defatted specimens from many different CT devices were analyzed using an Automated Region Growing (ARG) algorithm-based code, showing strong correlations to micro-CT. The aim of this study was to validate the possibility of segmenting and measuring trabecular bone structure from clinical CT data of fresh-frozen human wrist specimens, with micro-CT data as reference. The hypothesis was that the in-house ARG-based software could be used for such measurements. HR-pQCT image data at two resolutions (61 and 82 µm isotropic voxels) from 23 fresh-frozen human forearms were analyzed. Correlations to micro-CT were strong, varying from 0.72 to 0.99 for all parameters except trabecular termini and nodes. The bone volume fraction had correlations varying from 0.95 to 0.98 but was overestimated compared to micro-CT, especially at the lower resolution. Trabecular separation and spacing were the most stable parameters, with correlations of 0.80-0.97 and mean values in the same range as micro-CT. The results of this in vitro study show that ARG-based software can be used for segmenting and measuring 3D trabecular bone structure from clinical CT data of fresh-frozen human wrist specimens, using micro-CT data as reference. Over- and underestimation of several of the bone structure parameters must, however, be taken into account.
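The ARG algorithm itself is more elaborate (automated seeding, adaptive thresholds, 3D connectivity); a minimal 2D region-growing sketch with a fixed intensity threshold conveys the basic idea:

```python
from collections import deque

def region_grow(volume, seed, threshold):
    """Minimal region growing on a 2D slice: starting from a seed voxel,
    add 4-connected neighbors whose intensity is at least the threshold."""
    rows, cols = len(volume), len(volume[0])
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and volume[nr][nc] >= threshold):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# Toy intensity slice: bright "trabecular" structure on a dark background.
slice_ = [
    [10, 10, 200, 10],
    [10, 210, 220, 10],
    [10, 10, 230, 10],
]
print(len(region_grow(slice_, seed=(1, 2), threshold=100)))  # → 4
```

In the 3D case the same flood-fill logic runs over 6- or 26-connected voxel neighborhoods instead of the 4-connected pixels shown here.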

Place, publisher, year, edition, pages
Springer Nature, 2024
Keywords
Bone Structure Analysis, Micro-CT, Segmentation, Trabecular Bone
National Category
Medical Image Processing
Identifiers
urn:nbn:se:kth:diva-346373 (URN); 10.1186/s12880-024-01281-w (DOI); 001220792200001 (); 38693510 (PubMedID); 2-s2.0-85191707549 (Scopus ID)
Available from: 2024-05-14 Created: 2024-05-14 Last updated: 2024-08-08. Bibliographically approved
Kataria, B., Woisetschläger, M., Althén, J. N., Sandborg, M. & Smedby, Ö. (2024). Image quality in CT thorax: effect of altering reconstruction algorithm and tube load. Radiation Protection Dosimetry, 200(5), 504-514
Image quality in CT thorax: effect of altering reconstruction algorithm and tube load
2024 (English). In: Radiation Protection Dosimetry, ISSN 0144-8420, E-ISSN 1742-3406, Vol. 200, no 5, p. 504-514. Article in journal (Refereed), Published
Abstract [en]

Non-linear properties of iterative reconstruction (IR) algorithms can alter image texture. We evaluated the effect of a model-based IR algorithm (advanced modelled iterative reconstruction; ADMIRE) and of dose on image quality in computed tomography of the thorax. Dual-source scanner data were acquired at 20, 45 and 65 reference mAs in 20 patients. Images reconstructed with filtered back projection (FBP) and ADMIRE strengths 3–5 were assessed independently by six radiologists and analysed using an ordinal logistic regression model. For all image criteria studied, the effects of a tube load of 20 mAs and of all ADMIRE strengths were significant (p < 0.001) compared with the reference categories 65 mAs and FBP. Increasing the tube load from 45 to 65 mAs improved image quality in three of six criteria. Replacing FBP with ADMIRE significantly improves perceived image quality for all criteria studied, potentially permitting a dose reduction of almost 70% without loss of image quality.
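The "almost 70%" figure follows directly from the tube loads studied, since dose scales linearly with mAs:

```python
# Dose is proportional to tube load (mAs), so moving from the 65 mAs
# reference category down to the 20 mAs setting corresponds to:
reference_mAs = 65
reduced_mAs = 20
reduction = (reference_mAs - reduced_mAs) / reference_mAs
print(f"{reduction:.1%}")  # → 69.2%
```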

Place, publisher, year, edition, pages
Oxford University Press (OUP), 2024
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:kth:diva-345764 (URN); 10.1093/rpd/ncae005 (DOI); 001163879200001 (); 38369635 (PubMedID); 2-s2.0-85189673039 (Scopus ID)
Available from: 2024-04-18 Created: 2024-04-18 Last updated: 2024-04-19. Bibliographically approved
Yang, Z., Fan, T., Smedby, Ö. & Moreno, R. (2024). Lesion Localization in Digital Breast Tomosynthesis with Deformable Transformers by Using 2.5D Information. In: Medical Imaging 2024: Computer-Aided Diagnosis: . Paper presented at Medical Imaging 2024: Computer-Aided Diagnosis, San Diego, United States of America, Feb 19 2024 - Feb 22 2024. SPIE-Intl Soc Optical Eng, Article ID 129270G.
Lesion Localization in Digital Breast Tomosynthesis with Deformable Transformers by Using 2.5D Information
2024 (English). In: Medical Imaging 2024: Computer-Aided Diagnosis, SPIE-Intl Soc Optical Eng, 2024, article id 129270G. Conference paper, Published paper (Refereed)
Abstract [en]

In this study, we adapted a transformer-based method to localize lesions in digital breast tomosynthesis (DBT) images. Unlike convolutional neural network-based object detection methods, the transformer-based method does not require non-maximum suppression as postprocessing. Detection transformers with integrated deformable convolutions can better capture small lesions. We used transfer learning to address the lack of annotated DBT data. To validate the transformer-based detection method, we compared its results with those of other deep learning object detection methods. The experimental results demonstrate that the proposed method outperforms all comparison methods.

Place, publisher, year, edition, pages
SPIE-Intl Soc Optical Eng, 2024
Keywords
Deformable Transformers, Digital Breast Tomosynthesis, Lesion Localization
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:kth:diva-346409 (URN); 10.1117/12.3005496 (DOI); 001208134600013 (); 2-s2.0-85191482260 (Scopus ID)
Conference
Medical Imaging 2024: Computer-Aided Diagnosis, San Diego, United States of America, Feb 19 2024 - Feb 22 2024
Available from: 2024-05-14 Created: 2024-05-14 Last updated: 2024-05-21. Bibliographically approved
Bendazzoli, S., Bäcklin, E., Smedby, Ö., Janerot-Sjoberg, B., Connolly, B. & Wang, C. (2024). Lung vessel connectivity map as anatomical prior knowledge for deep learning-based lung lobe segmentation. Journal of Medical Imaging, 11(4)
Lung vessel connectivity map as anatomical prior knowledge for deep learning-based lung lobe segmentation
2024 (English). In: Journal of Medical Imaging, ISSN 2329-4302, E-ISSN 2329-4310, Vol. 11, no 4. Article in journal (Refereed), Published
Abstract [en]

Purpose: Our study investigates the potential benefits of incorporating prior anatomical knowledge into a deep learning (DL) method designed for the automated segmentation of lung lobes in chest CT scans. Approach: We introduce an automated DL-based approach that leverages anatomical information from the lung's vascular system to guide and enhance the segmentation process. This involves utilizing a lung vessel connectivity (LVC) map, which encodes relevant lung vessel anatomical data. Our study explores the performance of three different neural network architectures within the nnU-Net framework: a standalone U-Net, a multitasking U-Net, and a cascade U-Net. Results: Experimental findings suggest that the inclusion of LVC information in the DL model can lead to improved segmentation accuracy, particularly in the challenging boundary regions of expiratory chest CT volumes. Furthermore, our study demonstrates the potential of LVC to enhance the model's generalization capabilities. Finally, the method's robustness is evaluated through the segmentation of lung lobes in 10 cases of COVID-19, demonstrating its applicability in the presence of pulmonary disease. Conclusions: Incorporating prior anatomical information, such as LVC, into the DL model shows promise for enhancing segmentation performance, particularly in the boundary regions. The extent of this improvement has limitations, however, prompting further exploration of its practical applicability.
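The paper does not publish code; as a rough illustration of what a connectivity map encodes, the toy sketch below labels every voxel of a vessel mask with the seed (e.g., a lobar vessel root) it is connected to. The real LVC map is built from the 3D vascular tree, so this 2D flood fill is only an assumption-laden analogy:

```python
from collections import deque

def vessel_connectivity_map(vessel_mask, seeds):
    """Label each vessel voxel with the label of the seed it is
    connected to, via 4-connected flood fill (0 = background)."""
    rows, cols = len(vessel_mask), len(vessel_mask[0])
    labels = [[0] * cols for _ in range(rows)]
    queue = deque()
    for label, (r, c) in seeds.items():
        labels[r][c] = label
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and vessel_mask[nr][nc] and labels[nr][nc] == 0):
                labels[nr][nc] = labels[r][c]
                queue.append((nr, nc))
    return labels

# Two disconnected "vessel trees" in a tiny 2D mask.
mask = [
    [1, 1, 0, 1],
    [0, 1, 0, 1],
    [0, 1, 0, 0],
]
labels = vessel_connectivity_map(mask, {1: (0, 0), 2: (0, 3)})
print(labels)  # left tree labeled 1, right tree labeled 2
```

A map like this, fed to the network as an extra input channel or auxiliary task, is one way anatomical connectivity can act as a prior.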

Place, publisher, year, edition, pages
SPIE-Intl Soc Optical Eng, 2024
Keywords
pulmonary lobe segmentation, computed tomography, deep learning, 3D segmentation
National Category
Medical Image Processing
Identifiers
urn:nbn:se:kth:diva-353003 (URN); 10.1117/1.JMI.11.4.044001 (DOI); 001304656700024 (); 38988990 (PubMedID); 2-s2.0-85202919207 (Scopus ID)
Available from: 2024-09-11 Created: 2024-09-11 Last updated: 2024-09-11. Bibliographically approved
Bäcklin, E., Gonon, A., Sköld, M., Smedby, Ö., Breznik, E. & Janerot Sjöberg, B. (2024). Pulmonary volumes and signs of chronic airflow limitation in quantitative computed tomography. Clinical Physiology and Functional Imaging, 44(4), 340-348
Pulmonary volumes and signs of chronic airflow limitation in quantitative computed tomography
2024 (English). In: Clinical Physiology and Functional Imaging, ISSN 1475-0961, E-ISSN 1475-097X, Vol. 44, no 4, p. 340-348. Article in journal (Refereed), Published
Abstract [en]

Background

Computed tomography (CT) offers pulmonary volumetric quantification but is not commonly used in healthy individuals due to radiation concerns. Chronic airflow limitation (CAL) is one of the diagnostic criteria for chronic obstructive pulmonary disease (COPD), where early diagnosis is important. Our aim was to present reference values for chest CT volumetric and radiodensity measurements and explore their potential in detecting early signs of CAL.

Methods

From the population-based Swedish CArdioPulmonary bioImage Study (SCAPIS), 294 participants aged 50–64 were categorized into non-CAL (n = 258) and CAL (n = 36) groups based on spirometry. From inspiratory and expiratory CT images, we compared lung volumes, mean lung density (MLD), percentage of low attenuation volume (LAV%) and LAV cluster volume between groups, and against reference values from static pulmonary function tests (PFT).

Results

The CAL group exhibited larger lung volumes, higher LAV%, increased LAV cluster volume and lower MLD compared to the non-CAL group. Lung volumes significantly deviated from PFT values. Expiratory measurements yielded more reliable results for identifying CAL compared to inspiratory. Using a cut-off value of 0.6 for expiratory LAV%, we achieved sensitivity, specificity and positive/negative predictive values of 72%, 85% and 40%/96%, respectively.

Conclusion

We present volumetric reference values from inspiratory and expiratory chest CT images for a middle-aged healthy cohort. These results are not directly comparable to those from PFTs. Measures of MLD and LAV can be valuable in the evaluation of suspected CAL. Further validation and refinement are necessary to demonstrate its potential as a decision support tool for early detection of COPD.
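MLD and LAV% are simple summary statistics over the segmented lung voxels. The sketch below uses an assumed -950 HU cut-off, a common inspiratory convention; the paper's exact thresholds and toy HU values here are illustrative assumptions:

```python
def lung_density_metrics(hu_values, lav_threshold=-950):
    """Mean lung density (MLD) and low attenuation volume percentage
    (LAV%) from the Hounsfield units of segmented lung voxels."""
    mld = sum(hu_values) / len(hu_values)
    lav = sum(1 for hu in hu_values if hu < lav_threshold)
    lav_percent = 100.0 * lav / len(hu_values)
    return mld, lav_percent

# Toy HU samples: mostly normal lung around -850 HU, plus a few
# emphysema-like voxels below -950 HU.
voxels = [-850] * 97 + [-960] * 3
mld, lav_percent = lung_density_metrics(voxels)
print(round(mld, 1), lav_percent)  # → -853.3 3.0
```

Comparing LAV% against a cut-off (0.6 in the expiratory analysis above) then yields the binary CAL classification whose sensitivity and specificity are reported.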

Place, publisher, year, edition, pages
Wiley, 2024
Keywords
medical image processing
National Category
Radiology, Nuclear Medicine and Medical Imaging; Medical Image Processing
Research subject
Technology and Health; Medical Technology
Identifiers
urn:nbn:se:kth:diva-350134 (URN); 10.1111/cpf.12880 (DOI); 001196740700001 (); 38576112 (PubMedID); 2-s2.0-85189452440 (Scopus ID)
Funder
Swedish Heart Lung Foundation
Available from: 2024-07-06 Created: 2024-07-06 Last updated: 2024-08-02. Bibliographically approved
Tomic, H., Yang, Z., Tingberg, A., Zackrisson, S., Moreno, R., Smedby, Ö., . . . Bakic, P. (2024). Using simulated breast lesions based on Perlin noise for evaluation of lesion segmentation. In: Medical Imaging 2024: Physics of Medical Imaging: . Paper presented at Medical Imaging 2024: Physics of Medical Imaging, San Diego, United States of America, Feb 19 2024 - Feb 22 2024. SPIE-Intl Soc Optical Eng, Article ID 129251P.
Using simulated breast lesions based on Perlin noise for evaluation of lesion segmentation
2024 (English). In: Medical Imaging 2024: Physics of Medical Imaging, SPIE-Intl Soc Optical Eng, 2024, article id 129251P. Conference paper, Published paper (Refereed)
Abstract [en]

Segmentation of diagnostic radiography images using deep learning is progressively expanding, which places demands on the accessibility, availability, and accuracy of the software tools used. This study aimed to evaluate the performance of a segmentation model for digital breast tomosynthesis (DBT) using computer-simulated breast anatomy. We simulated breast anatomy and soft-tissue breast lesions using a model approach based on the Perlin noise algorithm. The resulting breast phantoms were projected and reconstructed into DBT slices using a publicly available open-source reconstruction method. Each lesion was then segmented in two ways: (1) with the Segment Anything Model (SAM), a publicly available AI-based method for image segmentation, and (2) manually by three human observers. The lesion area in each slice was compared to the ground-truth area derived from the binary mask of the lesion model. We found similar performance for SAM and manual segmentation. Both performed comparably in the central slice (mean absolute relative error against the ground truth, with standard deviation: SAM 4 ± 3%, observers 3 ± 3%). Similarly, both SAM and the observers overestimated the lesion area in the peripheral reconstructed slices (SAM 277 ± 190%, observers 295 ± 182%). We showed that 3D voxel phantoms can be used for evaluating different segmentation methods. In this preliminary comparison, tumor segmentation in simulated DBT images using the open-source SAM showed performance similar to manual tumor segmentation.
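The evaluation metric, mean absolute relative error of the segmented lesion area against the phantom ground truth, can be sketched as follows; the per-slice area values are invented for illustration (close to truth centrally, strongly overestimated peripherally, as the study reports):

```python
def mean_absolute_relative_error(measured_areas, truth_areas):
    """Mean absolute relative error (in %) between segmented lesion
    areas and the ground-truth areas from the lesion's binary mask."""
    errors = [abs(m - t) / t for m, t in zip(measured_areas, truth_areas)]
    return 100.0 * sum(errors) / len(errors)

# Hypothetical per-slice lesion areas (mm^2): peripheral, central, peripheral.
truth = [10.0, 40.0, 10.0]
segmented = [28.0, 41.0, 31.0]
print(round(mean_absolute_relative_error(segmented, truth), 1))  # → 130.8
```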

Place, publisher, year, edition, pages
SPIE-Intl Soc Optical Eng, 2024
Keywords
AI, Breast phantom, computer simulations and VCT, segmentation
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:kth:diva-347131 (URN); 10.1117/12.3008840 (DOI); 001223517100049 (); 2-s2.0-85193540163 (Scopus ID)
Conference
Medical Imaging 2024: Physics of Medical Imaging, San Diego, United States of America, Feb 19 2024 - Feb 22 2024
Note

Part of ISBN 9781510671546
Available from: 2024-06-03 Created: 2024-06-03 Last updated: 2024-06-14. Bibliographically approved
Nilsson, T., Rasinski, P., af Buren, S., Smedby, Ö., Blomgren, A., Lohr, M., . . . Holstensson, M. (2023). A dose optimization study using Visual Grading Regression in [68Ga]-FAPI-46 PET imaging of patients with pancreatic lesions. European Journal of Nuclear Medicine and Molecular Imaging, 50(SUPPL 1), S493-S493
A dose optimization study using Visual Grading Regression in [68Ga]-FAPI-46 PET imaging of patients with pancreatic lesions
2023 (English). In: European Journal of Nuclear Medicine and Molecular Imaging, ISSN 1619-7070, E-ISSN 1619-7089, Vol. 50, no SUPPL 1, p. S493-S493. Article in journal, Meeting abstract (Other academic), Published
Place, publisher, year, edition, pages
Springer, 2023
National Category
Biochemistry and Molecular Biology
Identifiers
urn:nbn:se:kth:diva-340688 (URN); 001084059702203 ()
Available from: 2023-12-11 Created: 2023-12-11 Last updated: 2023-12-11. Bibliographically approved
Kataria, B., Oman, J., Sandborg, M. & Smedby, Ö. (2023). Learning effects in visual grading assessment of model-based reconstruction algorithms in abdominal Computed Tomography. European Journal of Radiology Open, 10, Article ID 100490.
Learning effects in visual grading assessment of model-based reconstruction algorithms in abdominal Computed Tomography
2023 (English). In: European Journal of Radiology Open, ISSN 2352-0477, Vol. 10, article id 100490. Article in journal (Refereed), Published
Abstract [en]

Objectives: Images reconstructed with higher strengths of iterative reconstruction algorithms may impair radiologists' subjective perception and diagnostic performance due to changes in the amplitude of different spatial frequencies of noise. The aim of the present study was to ascertain whether radiologists can learn to adapt to the unusual appearance of images produced by higher strengths of the Advanced modeled iterative reconstruction algorithm (ADMIRE). Methods: Two previously published studies evaluated the performance of ADMIRE in non-contrast and contrast-enhanced abdominal CT. Images from 25 (first material) and 50 (second material) patients were reconstructed with ADMIRE strengths 3 and 5 (AD3, AD5) and filtered back projection (FBP). Radiologists assessed the images using image criteria from the European guidelines for quality criteria in CT. To ascertain whether there was a learning effect, new analyses of the data from the two studies were performed by introducing a time variable in the mixed-effects ordinal logistic regression model. Results: In both materials, a significant negative attitude to ADMIRE 5 at the beginning of the viewing was strengthened during the progress of the reviews, both for liver parenchyma (first material: -0.70, p < 0.01; second material: -0.96, p < 0.001) and for overall image quality (first material: -0.59, p < 0.05; second material: -1.26, p < 0.001). For ADMIRE 3, an early positive attitude toward the algorithm was noted, with no significant change over time for all criteria except one (overall image quality), where a significant negative trend over time (-1.08, p < 0.001) was seen in the second material. Conclusions: With the progression of reviews in both materials, an increasing dislike of ADMIRE 5 images was apparent for two image criteria. In this time perspective (weeks or months), no learning effect towards accepting the algorithm could be demonstrated.

Place, publisher, year, edition, pages
Elsevier BV, 2023
Keywords
Computed tomography, Abdominal, Image quality, Learning effect, Visual grading, Perception
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:kth:diva-331227 (URN); 10.1016/j.ejro.2023.100490 (DOI); 001008900100001 (); 37207049 (PubMedID); 2-s2.0-85157973747 (Scopus ID)
Available from: 2023-07-06 Created: 2023-07-06 Last updated: 2023-07-06. Bibliographically approved