KTH Publications (kth.se)
Smedby, Örjan, Professor (ORCID iD: orcid.org/0000-0002-7750-1917)
Publications (10 of 119)
Xu, J., Gao, J., Jiang, S., Wang, C., Smedby, Ö., Wu, Y., . . . Chen, X. (2025). Automatic Segmentation of Bone Graft in Maxillary Sinus via Distance Constrained Network Guided by Prior Anatomical Knowledge. IEEE journal of biomedical and health informatics, 29(3), 1995-2005
2025 (English) In: IEEE journal of biomedical and health informatics, ISSN 2168-2194, E-ISSN 2168-2208, Vol. 29, no 3, p. 1995-2005. Article in journal (Refereed), Published
Abstract [en]

Maxillary sinus lifting is a crucial surgical procedure for addressing insufficient alveolar bone mass and severe resorption in dental implant therapy. To accurately analyze the geometric changes of the bone graft (BG) in the maxillary sinus (MS), quantitative analysis is essential. However, automated BG segmentation remains a major challenge due to the complex local appearance, including blurred boundaries, lesion interference, implant and artifact interference, and BG extending beyond the MS. Currently, few tools are available that can efficiently and accurately segment BG from cone beam computed tomography (CBCT) images. In this paper, we propose a distance-constrained attention network guided by prior anatomical knowledge for the automatic segmentation of BG. First, a guidance strategy based on preoperative prior anatomical knowledge is added to a deep neural network (DNN), which improves its ability to identify the dividing line between the MS and BG. Next, a coordinate attention gate is proposed, which exploits the synergy of channel and position attention to highlight salient features from the skip connections. Additionally, a geodesic distance constraint is introduced into the DNN to form multi-task predictions, which reduces the deviation of the segmentation result. In the test experiment, the proposed DNN achieved a Dice similarity coefficient of 85.48 ± 6.38%, an average surface distance error of 0.57 ± 0.34 mm, and a 95% Hausdorff distance of 2.64 ± 2.09 mm, which is superior to the comparison networks. It markedly improves the segmentation accuracy and efficiency of BG and has potential applications in analyzing its volume change and absorption rate in the future.
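The main overlap metric reported in this abstract, the Dice similarity coefficient, can be sketched in a few lines of NumPy (a generic illustration; the function and variable names are mine, not from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy 2D example: two overlapping 4x4 square "segmentations"
pred = np.zeros((10, 10), dtype=bool); pred[2:6, 2:6] = True  # 16 pixels
gt = np.zeros((10, 10), dtype=bool); gt[3:7, 3:7] = True      # 16 pixels
print(dice_coefficient(pred, gt))  # overlap 3x3 = 9 -> 2*9/32 = 0.5625
```

The same function applies unchanged to 3D masks, since the sums run over all voxels.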

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Bones, Image segmentation, Implants, Knowledge engineering, Accuracy, Teeth, Interference, Dentistry, Surgery, Logic gates, Bone graft segmentation, prior anatomical knowledge, geodesic distance constraint, coordinate attention gate, oral and maxillofacial surgery
National Category
Medical Genetics and Genomics
Identifiers
URN: urn:nbn:se:kth:diva-361566
DOI: 10.1109/JBHI.2024.3505262
ISI: 001439576100024
PubMedID: 40030351
Scopus ID: 2-s2.0-85210528881
Note

QC 20250324

Available from: 2025-03-24. Created: 2025-03-24. Last updated: 2025-03-24. Bibliographically approved.
Fu, J., Ferreira, D., Smedby, Ö. & Moreno, R. (2025). Decomposing the effect of normal aging and Alzheimer's disease in brain morphological changes via learned aging templates. Scientific Reports, 15(1), Article ID 11813.
2025 (English) In: Scientific Reports, E-ISSN 2045-2322, Vol. 15, no 1, article id 11813. Article in journal (Refereed), Published
Abstract [en]

Alzheimer's disease (AD) subjects usually show more profound morphological changes over time compared to cognitively normal (CN) individuals. These changes are the combination of two major biological processes: normal aging and AD pathology. Investigating normal aging and residual morphological changes separately can increase our understanding of the disease. This paper proposes two scores, the aging score (AS) and the AD-specific score (ADS), whose purpose is to measure these two components of brain atrophy independently. For this, in the first step, we estimate the atrophy due to the normal aging of CN subjects by computing the expected deformation required to match imaging templates generated at different ages. We used a state-of-the-art generative deep learning model for generating such imaging templates. In the second step, we apply deep learning-based diffeomorphic registration to align the given image of a subject with a reference imaging template. The parametrization of this deformation field is then decomposed voxel-wise into components parallel and perpendicular to the parametrization of the expected one-year atrophy of CN individuals computed in the first step. AS and ADS are the normalized scores of these two components, respectively. We evaluated these two scores on the OASIS-3 dataset with 1,014 T1-weighted MRI scans. Of these, 326 scans were from CN subjects, and 688 scans were from subjects diagnosed with AD at various stages of clinical severity, as defined by clinical dementia rating (CDR) scores. Our results reveal that AD is marked by both disease-specific brain changes and an accelerated aging process. Such changes affect brain regions differently. Moreover, the proposed scores were sensitive enough to detect changes in the early stages of the disease, which is promising for their potential future use in clinical studies. Our code is freely available at https://github.com/Fjr9516/DBM_with_DL.
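The voxel-wise decomposition described above (splitting a subject's deformation into components parallel and perpendicular to the expected aging deformation) reduces to a vector projection at each voxel. A minimal NumPy sketch of that projection step, with names of my own choosing (the paper's AS and ADS additionally normalize these components):

```python
import numpy as np

def decompose_deformation(d: np.ndarray, a: np.ndarray, eps: float = 1e-8):
    """
    Decompose a deformation field `d` voxel-wise into components parallel
    and perpendicular to a reference field `a` (e.g. the expected one-year
    aging deformation). Both arrays have shape (..., 3).
    Returns the signed parallel magnitude and the perpendicular magnitude.
    """
    a_norm = np.linalg.norm(a, axis=-1, keepdims=True)
    unit_a = a / np.maximum(a_norm, eps)              # unit reference direction
    par = np.sum(d * unit_a, axis=-1)                 # signed projection onto a
    perp_vec = d - par[..., None] * unit_a            # residual orthogonal part
    perp = np.linalg.norm(perp_vec, axis=-1)
    return par, perp

# Single "voxel": d = (3, 4, 0) against reference direction a = (1, 0, 0)
par, perp = decompose_deformation(np.array([[3.0, 4.0, 0.0]]),
                                  np.array([[1.0, 0.0, 0.0]]))
print(par[0], perp[0])  # 3.0 4.0
```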

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
Normal aging, Alzheimer's disease, Deformation-based morphometry, Aging score, AD-specific score
National Category
Neurology
Identifiers
URN: urn:nbn:se:kth:diva-363622
DOI: 10.1038/s41598-025-96234-w
ISI: 001460175000006
PubMedID: 40189702
Scopus ID: 2-s2.0-105003217252
Note

QC 20250520

Available from: 2025-05-20. Created: 2025-05-20. Last updated: 2025-05-20. Bibliographically approved.
Yang, Z., Astaraki, M., Smedby, Ö. & Moreno, R. (2025). Efficient Generation of Synthetic Breast CT Slices By Combining Generative and Super-Resolution Models. In: Artificial Intelligence and Imaging for Diagnostic and Treatment Challenges in Breast Care - 1st Deep Breast Workshop, Deep-Breath 2024, Held in Conjunction with MICCAI 2024, Proceedings. Paper presented at 1st Deep Breast Workshop on AI and Imaging for Diagnostic and Treatment Challenges in Breast Care, Deep-Breath 2024, Marrakesh, Morocco, Oct 10, 2024 (pp. 65-74). Springer Nature
2025 (English) In: Artificial Intelligence and Imaging for Diagnostic and Treatment Challenges in Breast Care - 1st Deep Breast Workshop, Deep-Breath 2024, Held in Conjunction with MICCAI 2024, Proceedings, Springer Nature, 2025, p. 65-74. Conference paper, Published paper (Refereed)
Abstract [en]

High-quality synthetic medical images can enlarge training datasets in different deep learning-based applications. Recently, diffusion-based methods for image synthesis have outperformed GAN-based methods, even for medical images. Unfortunately, using diffusion models is costly in terms of training time and computational resources. We propose a two-stage method that combines diffusion models and GANs to tackle this problem. First, we use diffusion models or GANs to generate low-resolution images. Then, we use a GAN-based super-resolution model to interpolate high-resolution images from these low-resolution images. Experimental results on synthetic breast CT slices show that the proposed framework is more efficient and performs better than state-of-the-art methods that generate the images in a single step. The proposed methods will be available at https://github.com/xiaoerlaigeid/Image-Frequency-Score.git.

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
Diffusion Model, Frequency Information, Generative Adversarial Network, Medical Image Generation, Super-Resolution
National Category
Medical Imaging; Signal Processing; Computer graphics and computer vision; Probability Theory and Statistics
Identifiers
URN: urn:nbn:se:kth:diva-361151
DOI: 10.1007/978-3-031-77789-9_7
ISI: 001544124300007
Scopus ID: 2-s2.0-85219213535
Conference
1st Deep Breast Workshop on AI and Imaging for Diagnostic and Treatment Challenges in Breast Care, Deep-Breath 2024, Marrakesh, Morocco, Oct 10, 2024
Note

Part of ISBN 9783031777882

QC 20250313

Available from: 2025-03-12. Created: 2025-03-12. Last updated: 2025-12-08. Bibliographically approved.
Yang, Z., Xiao, Y., Öktem, O., Smedby, Ö. & Moreno, R. (2025). Two-Stage Convolutional Neural Network for Breast CT Reconstruction. In: Medical Imaging 2025: Physics of Medical Imaging. Paper presented at Medical Imaging 2025: Physics of Medical Imaging, San Diego, United States of America, Feb 17-21, 2025. SPIE-Intl Soc Optical Eng, Article ID 1340544.
2025 (English) In: Medical Imaging 2025: Physics of Medical Imaging, SPIE-Intl Soc Optical Eng, 2025, article id 1340544. Conference paper, Published paper (Refereed)
Abstract [en]

In this study, we propose a deep learning-based two-stage breast CT reconstruction method in the image domain. Unlike most methods, we use two separate models to improve breast CT image quality. In the first stage, a deep learning-based denoiser removes the noise. In the second stage, a deep learning-based image enhancement model improves the image quality. We evaluated the proposed method on the AAPM 2021 sparse-view CT reconstruction challenge dataset. The experimental results demonstrate that the proposed method performs better than all comparison methods.

Place, publisher, year, edition, pages
SPIE-Intl Soc Optical Eng, 2025
Keywords
Breast CT, Image Denoise, Image Enhancement, Sparse-view CT reconstruction, Two stage method
National Category
Computer graphics and computer vision; Medical Imaging; Signal Processing
Identifiers
URN: urn:nbn:se:kth:diva-363749
DOI: 10.1117/12.3048825
ISI: 001487074500128
Scopus ID: 2-s2.0-105004584141
Conference
Medical Imaging 2025: Physics of Medical Imaging, San Diego, United States of America, Feb 17-21, 2025
Note

 Part of ISBN 9781510685888

QC 20250523

Available from: 2025-05-21. Created: 2025-05-21. Last updated: 2025-07-04. Bibliographically approved.
Yang, Z., Fan, T., Smedby, Ö. & Moreno, R. (2024). 3D Breast Ultrasound Image Classification Using 2.5D Deep learning. In: 17th International Workshop on Breast Imaging, IWBI 2024. Paper presented at 17th International Workshop on Breast Imaging, IWBI 2024, Chicago, United States of America, Jun 9-12, 2024. SPIE, 13174, Article ID 131741R.
2024 (English) In: 17th International Workshop on Breast Imaging, IWBI 2024, SPIE, 2024, Vol. 13174, article id 131741R. Conference paper, Published paper (Refereed)
Abstract [en]

3D breast ultrasound is a radiation-free and effective imaging technology for breast tumor diagnosis. However, reading 3D breast ultrasound is time-consuming compared to mammograms. To reduce the workload of radiologists, we propose a 2.5D deep learning-based breast ultrasound tumor classification system. First, we fine-tuned the pre-trained STU-Net to segment the tumor in 3D. Then, we fine-tuned DenseNet-121 for classification using the 10 slices with the largest tumor area and their adjacent slices. The Tumor Detection, Segmentation, and Classification on Automated 3D Breast Ultrasound (TDSC-ABUS) MICCAI Challenge 2023 dataset was used to train and validate the proposed method. Compared to a 3D convolutional neural network model and radiomics, the proposed method performs better.
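The slice-selection step described above, taking the slices with the largest tumor area plus their neighbors, can be sketched as follows (a hypothetical illustration of the 2.5D sampling idea, not the authors' code):

```python
import numpy as np

def select_25d_slices(mask: np.ndarray, n_slices: int = 10):
    """
    Pick the `n_slices` axial slices with the largest tumor area in a binary
    3D mask (z, y, x), then add each one's immediate neighbors, mimicking a
    2.5D sampling scheme. Returns sorted unique slice indices.
    """
    areas = mask.reshape(mask.shape[0], -1).sum(axis=1)  # tumor area per slice
    top = np.argsort(areas)[::-1][:n_slices]             # largest areas first
    with_neighbors = set()
    for z in top:
        for dz in (-1, 0, 1):                            # slice plus neighbors
            if 0 <= z + dz < mask.shape[0]:
                with_neighbors.add(int(z + dz))
    return sorted(with_neighbors)

# Toy volume: tumor occupies slices 4-6, largest cross-section on slice 5
vol = np.zeros((10, 8, 8), dtype=bool)
vol[4, 2:4, 2:4] = True; vol[5, 1:5, 1:5] = True; vol[6, 3:4, 3:4] = True
print(select_25d_slices(vol, n_slices=2))  # [3, 4, 5, 6]
```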

Place, publisher, year, edition, pages
SPIE, 2024
Series
Proceedings of SPIE - The International Society for Optical Engineering, ISSN 0277-786X ; 13174
Keywords
2.5D, 3D Breast Ultrasound, Deep learning, Tumor Classification
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
URN: urn:nbn:se:kth:diva-348289
DOI: 10.1117/12.3025534
ISI: 001239315300062
Scopus ID: 2-s2.0-85195360791
Conference
17th International Workshop on Breast Imaging, IWBI 2024, Chicago, United States of America, Jun 9-12, 2024
Note

QC 20240624

Part of ISBN 978-151068020-3

Available from: 2024-06-20. Created: 2024-06-20. Last updated: 2024-07-05. Bibliographically approved.
Nilsson, T., Rasinski, P., Smedby, Ö., Af Burén, S., Sparrelid, E., Löhr, J. M., . . . Holstensson, M. (2024). Acquisition Duration Optimization Using Visual Grading Regression in [68Ga]FAPI-46 PET Imaging of Oncologic Patients. Journal of Nuclear Medicine Technology, 52(3), 221-228
2024 (English) In: Journal of Nuclear Medicine Technology, ISSN 0091-4916, E-ISSN 1535-5675, Vol. 52, no 3, p. 221-228. Article in journal (Refereed), Published
Abstract [en]

Fibroblast activation protein is a promising target for oncologic molecular imaging with radiolabeled fibroblast activation protein inhibitors (FAPI) in a large variety of cancers. However, there are as yet no published recommendations on how to set up an optimal imaging protocol for FAPI PET/CT. It is important to optimize the acquisition duration, striving toward an acquisition that is sufficiently short while still providing sufficient image quality to ensure a reliable diagnosis. The aim of this study was to evaluate the feasibility of reducing the acquisition duration of [68Ga]FAPI-46 imaging while maintaining satisfactory image quality, with certainty that the radiologist's ability to make a clinical diagnosis would not be affected. Methods: [68Ga]FAPI-46 PET/CT imaging was performed on 10 patients scheduled for surgical resection of suspected pancreatic cancer, 60 min after administration of 3.6 ± 0.2 MBq/kg. The acquisition time was 4 min/bed position, and the raw PET data were statistically truncated and reconstructed to represent images with acquisition durations of 1, 2, and 3 min/bed position, in addition to the reference images of 4 min/bed position. Four image quality criteria, focusing on the ability to distinguish specific anatomic details as well as perceived image noise and overall image quality, were scored on a 4-point Likert scale and analyzed with mixed-effects ordinal logistic regression. Results: A trend toward increasing image quality scores with increasing acquisition duration was observed for all criteria. For the overall image quality, there was no significant difference between 3 and 4 min/bed position, whereas 1 and 2 min/bed position were rated significantly (P < 0.05) lower than 4 min/bed position. For the other criteria, all images with a reduced acquisition duration were rated significantly inferior to images obtained at 4 min/bed position.
Conclusion: The acquisition duration can be reduced from 4 to 3 min/bed position while maintaining satisfactory image quality. Reducing the acquisition duration to 2 min/bed position or lower is not recommended, since it results in images so noisy that clinical interpretation is significantly disrupted.

Place, publisher, year, edition, pages
Society of Nuclear Medicine, 2024
Keywords
acquisition duration, fibroblast activation protein, pancreas, PET, visual grading, [68Ga]FAPI-46
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
URN: urn:nbn:se:kth:diva-353424
DOI: 10.2967/jnmt.123.267156
ISI: 001334663900010
PubMedID: 38627014
Scopus ID: 2-s2.0-85203474557
Note

QC 20241030

Available from: 2024-09-19. Created: 2024-09-19. Last updated: 2024-10-30. Bibliographically approved.
Klintström, E., Klintström, B., Smedby, Ö. & Moreno, R. (2024). Automated region growing-based segmentation for trabecular bone structure in fresh-frozen human wrist specimens. BMC Medical Imaging, 24(1), Article ID 101.
2024 (English) In: BMC Medical Imaging, E-ISSN 1471-2342, Vol. 24, no 1, article id 101. Article in journal (Refereed), Published
Abstract [en]

Bone strength depends on both mineral content and bone structure. Measurements of bone microstructure on specimens can be performed by micro-CT. In vivo measurements are reliably performed by high-resolution peripheral quantitative computed tomography (HR-pQCT) using dedicated software. In previous studies from our research group, trabecular bone properties in CT data of defatted specimens from many different CT devices were analyzed using in-house software based on an Automated Region Growing (ARG) algorithm, showing strong correlations to micro-CT. The aim of this study was to validate the possibility of segmenting and measuring trabecular bone structure from clinical CT data of fresh-frozen human wrist specimens. Data from micro-CT were used as reference. The hypothesis was that the ARG-based in-house software could be used for such measurements. HR-pQCT image data at two resolutions (61 and 82 µm isotropic voxels) from 23 fresh-frozen human forearms were analyzed. Correlations to micro-CT were strong, varying from 0.72 to 0.99 for all parameters except trabecular termini and nodes. The bone volume fraction had correlations varying from 0.95 to 0.98 but was overestimated compared to micro-CT, especially at the lower resolution. Trabecular separation and spacing were the most stable parameters, with correlations of 0.80-0.97 and mean values in the same range as micro-CT. Results from this in vitro study show that ARG-based software can be used for segmenting and measuring 3D trabecular bone structure from clinical CT data of fresh-frozen human wrist specimens, using micro-CT data as reference. Over- and underestimation of several of the bone structure parameters must, however, be taken into account.
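As an illustration of the class of algorithm named above, a minimal seeded region growing on a 3D volume might look like this (a toy sketch with a simple global intensity threshold; the authors' ARG software is considerably more sophisticated):

```python
from collections import deque
import numpy as np

def region_grow(volume: np.ndarray, seed, threshold: float) -> np.ndarray:
    """
    Minimal 6-connected region growing on a 3D volume: starting from `seed`,
    include every connected voxel whose intensity is >= threshold.
    Returns a boolean mask of the grown region.
    """
    mask = np.zeros(volume.shape, dtype=bool)
    if volume[seed] < threshold:          # seed itself fails the criterion
        return mask
    mask[seed] = True
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= n[i] < volume.shape[i] for i in range(3))
                    and not mask[n] and volume[n] >= threshold):
                mask[n] = True
                queue.append(n)
    return mask

# Toy example: a bright 2x2x2 "trabecula" inside a dark volume
vol = np.zeros((5, 5, 5))
vol[1:3, 1:3, 1:3] = 100.0
mask = region_grow(vol, seed=(1, 1, 1), threshold=50.0)
print(int(mask.sum()))  # 8
```

Real trabecular segmentation replaces the fixed threshold with locally adaptive criteria, which is where methods like ARG differ from this sketch.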

Place, publisher, year, edition, pages
Springer Nature, 2024
Keywords
Bone Structure Analysis, Micro-CT, Segmentation, Trabecular Bone
National Category
Medical Imaging
Identifiers
URN: urn:nbn:se:kth:diva-346373
DOI: 10.1186/s12880-024-01281-w
ISI: 001220792200001
PubMedID: 38693510
Scopus ID: 2-s2.0-85191707549
Note

QC 20240524

Available from: 2024-05-14. Created: 2024-05-14. Last updated: 2025-02-09. Bibliographically approved.
Kataria, B., Woisetschlager, M., Althen, J. N., Sandborg, M. & Smedby, Ö. (2024). Image quality assessments in abdominal CT: Relative importance of dose, iterative reconstruction strength and slice thickness. Radiography, 30(6), 1563-1571
2024 (English) In: Radiography, ISSN 1078-8174, E-ISSN 1532-2831, Vol. 30, no 6, p. 1563-1571. Article in journal (Refereed), Published
Abstract [en]

Introduction: Low contrast resolution in abdominal computed tomography (CT) may be negatively affected by attempts to lower patient doses. Iterative reconstruction (IR) algorithms play a key role in mitigating this problem. The reconstructed slice thickness also influences image quality. The aim was to assess the interaction and influence of patient dose, slice thickness, and IR strength on image quality in abdominal CT. Method: With a simultaneous acquisition, images at 42 and 98 mAs were obtained in 25 patients. Multiplanar images with slice thicknesses of 1, 2, and 3 mm and advanced modeled iterative reconstruction (ADMIRE) strengths of 3 (AD3) and 5 (AD5) were reconstructed. Four radiologists evaluated the images in a pairwise manner based on five image criteria. Ordinal logistic regression with mixed effects was used to evaluate the effect of tube load, ADMIRE strength, and slice thickness using the visual grading regression technique. Results: For all assessed image criteria, the regression analysis showed significantly (p < 0.001) higher image quality for AD5, but lower for a tube load of 42 mAs and slice thicknesses of 1 mm and 2 mm, compared to the reference categories of AD3, 98 mAs, and 3 mm, respectively. AD5 at 2 mm was superior to AD3 at 3 mm for all image criteria studied. AD5 at 1 mm produced inferior image quality for liver parenchyma and overall image quality compared to AD3 at 3 mm. Interobserver agreement (ICC) ranged from 0.874 to 0.920. Conclusion: ADMIRE 5 at 2 mm slice thickness may allow further dose reductions, given its superiority to ADMIRE 3 at 3 mm slice thickness. Implications for practice: The combination of thinner slices and higher ADMIRE strength facilitates imaging at low dose.

Place, publisher, year, edition, pages
Elsevier BV, 2024
Keywords
Visual grading regression (VGR), Image quality, Slice thickness, Iterative reconstruction, Dose reduction
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
URN: urn:nbn:se:kth:diva-355806
DOI: 10.1016/j.radi.2024.09.060
ISI: 001336392500001
PubMedID: 39378665
Scopus ID: 2-s2.0-85205933015
Note

QC 20241104

Available from: 2024-11-04. Created: 2024-11-04. Last updated: 2024-11-04. Bibliographically approved.
Kataria, B., Woisetschläger, M., Althén, J. N., Sandborg, M. & Smedby, Ö. (2024). Image quality in CT thorax: effect of altering reconstruction algorithm and tube load. Radiation Protection Dosimetry, 200(5), 504-514
2024 (English) In: Radiation Protection Dosimetry, ISSN 0144-8420, E-ISSN 1742-3406, Vol. 200, no 5, p. 504-514. Article in journal (Refereed), Published
Abstract [en]

Non-linear properties of iterative reconstruction (IR) algorithms can alter image texture. We evaluated the effect of a model-based IR algorithm (advanced modelled iterative reconstruction; ADMIRE) and dose on computed tomography thorax image quality. Dual-source scanner data were acquired at 20, 45 and 65 reference mAs in 20 patients. Images reconstructed with filtered back projection (FBP) and ADMIRE Strengths 3–5 were assessed independently by six radiologists and analysed using an ordinal logistic regression model. For all image criteria studied, the effects of tube load 20 mAs and all ADMIRE strengths were significant (p < 0.001) when compared to reference categories 65 mAs and FBP. Increase in tube load from 45 to 65 mAs showed image quality improvement in three of six criteria. Replacing FBP with ADMIRE significantly improves perceived image quality for all criteria studied, potentially permitting a dose reduction of almost 70% without loss in image quality.

Place, publisher, year, edition, pages
Oxford University Press (OUP), 2024
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
URN: urn:nbn:se:kth:diva-345764
DOI: 10.1093/rpd/ncae005
ISI: 001163879200001
PubMedID: 38369635
Scopus ID: 2-s2.0-85189673039
Note

QC 20240419

Available from: 2024-04-18. Created: 2024-04-18. Last updated: 2024-04-19. Bibliographically approved.
Yang, Z., Fan, T., Smedby, Ö. & Moreno, R. (2024). Lesion Localization in Digital Breast Tomosynthesis with Deformable Transformers by Using 2.5D Information. In: Medical Imaging 2024: Computer-Aided Diagnosis. Paper presented at Medical Imaging 2024: Computer-Aided Diagnosis, San Diego, United States of America, Feb 19-22, 2024. SPIE-Intl Soc Optical Eng, Article ID 129270G.
2024 (English) In: Medical Imaging 2024: Computer-Aided Diagnosis, SPIE-Intl Soc Optical Eng, 2024, article id 129270G. Conference paper, Published paper (Refereed)
Abstract [en]

In this study, we adapted a transformer-based method to localize lesions in digital breast tomosynthesis (DBT) images. Compared with convolutional neural network-based object detection methods, the transformer-based method does not require non-maximum suppression postprocessing. Integrating deformable convolutions into the detection transformer better captures small lesions. We used transfer learning to address the lack of annotated DBT data. To validate the superiority of the transformer-based detection method, we compared the results with those of deep learning object detection methods. The experimental results demonstrate that the proposed method performs better than all comparison methods.

Place, publisher, year, edition, pages
SPIE-Intl Soc Optical Eng, 2024
Keywords
Deformable Transformers, Digital Breast Tomosynthesis, Lesion Localization
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
URN: urn:nbn:se:kth:diva-346409
DOI: 10.1117/12.3005496
ISI: 001208134600013
Scopus ID: 2-s2.0-85191482260
Conference
Medical Imaging 2024: Computer-Aided Diagnosis, San Diego, United States of America, Feb 19-22, 2024
Note

QC 20240521

Available from: 2024-05-14. Created: 2024-05-14. Last updated: 2024-05-21. Bibliographically approved.