Publications (10 of 27)
Mahbod, A., Schaefer, G., Ellinger, I., Ecker, R., Smedby, Ö. & Wang, C. (2019). A Two-Stage U-Net Algorithm for Segmentation of Nuclei in H&E-Stained Tissues. In: Constantino Carlos Reyes-Aldasoro, Andrew Janowczyk, Mitko Veta, Peter Bankhead, Korsuk Sirinukunwattana (Eds.), Digital Pathology: 15th European Congress, ECDP 2019, Warwick, UK, April 10–13, 2019, Proceedings. Paper presented at the 15th European Congress on Digital Pathology, ECDP 2019, Warwick, United Kingdom, 10-13 April 2019 (pp. 75-82). Springer Verlag
A Two-Stage U-Net Algorithm for Segmentation of Nuclei in H&E-Stained Tissues
2019 (English) In: Digital Pathology: 15th European Congress, ECDP 2019, Warwick, UK, April 10–13, 2019, Proceedings / [ed] Constantino Carlos Reyes-Aldasoro, Andrew Janowczyk, Mitko Veta, Peter Bankhead, Korsuk Sirinukunwattana, Springer Verlag, 2019, p. 75-82. Conference paper, Published paper (Refereed)
Abstract [en]

Nuclei segmentation is an important but challenging task in the analysis of hematoxylin and eosin (H&E)-stained tissue sections. While various segmentation methods have been proposed, machine learning-based algorithms and in particular deep learning-based models have been shown to deliver better segmentation performance. In this work, we propose a novel approach to segment touching nuclei in H&E-stained microscopic images using U-Net-based models in two sequential stages. In the first stage, we perform semantic segmentation using a classification U-Net that separates nuclei from the background. In the second stage, the distance map of each nucleus is created using a regression U-Net. The final instance segmentation masks are then created using a watershed algorithm based on the distance maps. Evaluated on a publicly available dataset containing images from various human organs, the proposed algorithm achieves an average aggregate Jaccard index of 56.87%, outperforming several state-of-the-art algorithms applied on the same dataset.
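
To make the two-stage pipeline concrete, here is a minimal sketch of the post-processing step the abstract describes: seeding a watershed with peaks of a predicted distance map to split touching nuclei. The U-Net models themselves are omitted; `binary_mask` and `distance_map` stand in for their outputs, and the `min_distance` peak spacing is an illustrative choice, not a value from the paper.

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def instances_from_distance_map(binary_mask: np.ndarray,
                                distance_map: np.ndarray,
                                min_distance: int = 5) -> np.ndarray:
    """Split touching nuclei: one watershed seed per distance-map peak."""
    mask = binary_mask.astype(bool)
    # Local maxima of the regressed distance map approximate nucleus centers.
    peaks = peak_local_max(distance_map, min_distance=min_distance, labels=mask)
    markers = np.zeros(mask.shape, dtype=np.int32)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Flood the inverted distance map from the seeds, restricted to the mask.
    return watershed(-distance_map, markers, mask=mask)
```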

Place, publisher, year, edition, pages
Springer Verlag, 2019
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349
Keywords
Deep learning, Digital pathology, Nuclei segmentation, Tissue analysis, U-Net, Machine learning, Pathology, Semantics, Tissue, Digital pathologies, Learning Based Models, Segmentation methods, Segmentation performance, Semantic segmentation, State-of-the-art algorithms, Image segmentation
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-262448 (URN)10.1007/978-3-030-23937-4_9 (DOI)2-s2.0-85069146581 (Scopus ID)9783030239367 (ISBN)
Conference
15th European Congress on Digital Pathology, ECDP 2019, Warwick, United Kingdom 10-13 April 2019
Qin, C., Cao, Z., Fan, S., Wu, Y., Sun, Y., Politis, C., . . . Chen, X. (2019). An oral and maxillofacial navigation system for implant placement with automatic identification of fiducial points. International Journal of Computer Assisted Radiology and Surgery, 14(2), 281-289
An oral and maxillofacial navigation system for implant placement with automatic identification of fiducial points
2019 (English) In: International Journal of Computer Assisted Radiology and Surgery, ISSN 1861-6410, E-ISSN 1861-6429, Vol. 14, no 2, p. 281-289. Article in journal (Refereed), Published
Abstract [en]

Purpose: Surgical navigation systems (SNS) have become an important tool in surgery. However, the complicated and tedious manual selection of fiducial points on preoperative images for registration affects operational efficiency to a large extent. In this study, an oral and maxillofacial navigation system named BeiDou-SNS, with automatic identification of fiducial points, was developed and demonstrated. Methods: To solve the fiducial selection problem, a novel method for automatic localization of titanium screw markers in preoperative images is proposed, based on a sequence of two local mean-shift segmentations including removal of metal artifacts. The operation of the BeiDou-SNS consists of the following key steps: the selection of fiducial points, the calibration of surgical instruments, and the registration of patient space and image space. Eight cases of patients with titanium screws as fiducial markers were analyzed to assess the accuracy of the automatic fiducial point localization algorithm. Finally, a complete phantom experiment of zygomatic implant placement surgery was performed to evaluate the overall performance of BeiDou-SNS. Results and conclusion: The Euclidean distances between fiducial marker positions selected automatically and those selected manually by an experienced dentist ranged from 0.373 to 0.847 mm across all eight cases. Four implants were inserted into the 3D-printed model under the guidance of BeiDou-SNS. The maximal deviations between the actual and planned implants were 1.328 mm and 2.326 mm for the entry and end points, respectively, while the angular deviation ranged from 1.094 to 2.395 degrees. The results demonstrate that the oral surgical navigation system with automatic identification of fiducial points can meet the requirements of clinical surgery.
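
The paper's implementation is not public, but the registration of patient space and image space it mentions is typically a paired-point rigid alignment once corresponding fiducials are known. A minimal sketch of that standard step (the Kabsch/Umeyama solution, plus the fiducial registration error named in the keywords); the input point arrays are placeholders:

```python
import numpy as np

def rigid_register(image_pts: np.ndarray, patient_pts: np.ndarray):
    """Least-squares rigid transform (rotation R, translation t) mapping
    image-space fiducials onto corresponding patient-space fiducials."""
    ci, cp = image_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (image_pts - ci).T @ (patient_pts - cp)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cp - R @ ci
    return R, t

def fiducial_registration_error(image_pts, patient_pts, R, t):
    """RMS distance between mapped and measured fiducials."""
    residuals = patient_pts - (image_pts @ R.T + t)
    return np.sqrt((residuals ** 2).sum(axis=1).mean())
```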

Place, publisher, year, edition, pages
Springer Heidelberg, 2019
Keywords
Surgical navigation, Oral and maxillofacial surgery, Automatic identification, Target registration error, Fiducial registration error
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-245152 (URN)10.1007/s11548-018-1870-z (DOI)000458112800010 ()30317436 (PubMedID)2-s2.0-8505532751 (Scopus ID)
Bendazzoli, S., Brusini, I., Damberg, P., Smedby, Ö., Andersson, L. & Wang, C. (2019). Automatic rat brain segmentation from MRI using statistical shape models and random forest. In: Angelini, E. D. & Landman, B. A. (Eds.), Medical Imaging 2019: Image Processing. Paper presented at the Conference on Medical Imaging: Image Processing, February 19-21, 2019, San Diego, CA. SPIE, Article ID 1094920.
Automatic rat brain segmentation from MRI using statistical shape models and random forest
2019 (English) In: Medical Imaging 2019: Image Processing / [ed] Angelini, E. D. & Landman, B. A., SPIE, 2019, article id 1094920. Conference paper, Published paper (Refereed)
Abstract [en]

In MRI neuroimaging, the shimming procedure is used before image acquisition to correct for inhomogeneity of the static magnetic field within the brain. To correctly adjust the field, the brain's location and edges must first be identified from quickly acquired low-resolution data. This process is currently carried out manually by an operator, which can be time-consuming and not always accurate. In this work, we implement a quick and automatic brain segmentation technique to be potentially used during shimming. Our method is based on two main steps. First, a random forest classifier is used to obtain a preliminary segmentation from an input MRI image. Subsequently, a statistical shape model of the brain, previously generated from ground-truth segmentations, is fitted to the output of the classifier to obtain a model-based segmentation mask. In this way, a priori knowledge of the brain's shape is included in the segmentation pipeline. The proposed methodology was tested on low-resolution images of rat brains and further validated on rabbit brain images of higher resolution. Our results suggest that the present method is promising for the desired purpose in terms of time efficiency, segmentation accuracy and repeatability. Moreover, the use of shape modeling was shown to be particularly useful when handling low-resolution data, which could lead to erroneous classifications when using machine learning-based methods alone.
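
As an illustration of the first step only, a random forest voxel classifier over simple intensity features might look as follows. The feature set and the synthetic stand-in volumes are assumptions for the sketch, not the paper's choices, and the shape-model fitting stage is omitted.

```python
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier

def voxel_features(volume: np.ndarray) -> np.ndarray:
    """Per-voxel features: raw, smoothed and gradient-magnitude intensities."""
    feats = [volume,
             ndi.gaussian_filter(volume, sigma=2.0),
             ndi.gaussian_gradient_magnitude(volume, sigma=2.0)]
    return np.stack([f.ravel() for f in feats], axis=1)

rng = np.random.default_rng(0)
train_vol = rng.normal(size=(32, 32, 32))           # stand-in MRI volume
train_mask = ndi.gaussian_filter(train_vol, 4) > 0  # stand-in ground truth
test_vol = rng.normal(size=(32, 32, 32))

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
clf.fit(voxel_features(train_vol), train_mask.ravel())
prelim_mask = clf.predict(voxel_features(test_vol)).reshape(test_vol.shape)
```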

Place, publisher, year, edition, pages
SPIE, 2019
Series
Proceedings of SPIE, ISSN 0277-786X; 10949
Keywords
brain MRI, image segmentation, shimming, random forest, statistical shape model
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:kth:diva-260221 (URN)10.1117/12.2512409 (DOI)000483012700090 ()2-s2.0-85068344757 (Scopus ID)978-1-5106-2546-4 (ISBN)
Conference
Conference on Medical Imaging: Image Processing, February 19-21, 2019, San Diego, CA
Mårtensson, G., Ferreira, D., Cavallin, L., Muehlboeck, J.-S., Wahlund, L.-O., Wang, C. & Westman, E. (2019). AVRA: Automatic visual ratings of atrophy from MRI images using recurrent convolutional neural networks. NeuroImage: Clinical, 23, Article ID 101872.
AVRA: Automatic visual ratings of atrophy from MRI images using recurrent convolutional neural networks
2019 (English) In: NeuroImage: Clinical, ISSN 0353-8842, E-ISSN 2213-1582, Vol. 23, article id 101872. Article in journal (Refereed), Published
Abstract [en]

Quantifying the degree of atrophy is done clinically by neuroradiologists following established visual rating scales. For these assessments to be reliable, the rater requires substantial training and experience, and even then the rating agreement between two radiologists is not perfect. We have developed a model, AVRA (Automatic Visual Ratings of Atrophy), based on machine learning methods and trained on 2350 visual ratings made by an experienced neuroradiologist. It provides fast and automatic ratings for Scheltens' scale of medial temporal atrophy (MTA), the frontal subscale of Pasquier's Global Cortical Atrophy (GCA-F) scale, and Koedam's scale of Posterior Atrophy (PA). We demonstrate substantial inter-rater agreement between AVRA's and a neuroradiologist's ratings, with Cohen's weighted kappa values of κ_w = 0.74/0.72 (MTA left/right), κ_w = 0.62 (GCA-F) and κ_w = 0.74 (PA). We conclude that automatic visual ratings of atrophy can potentially have great scientific value, and aim to present AVRA as a freely available toolbox.
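
For reference, the agreement metric quoted above can be computed with scikit-learn's weighted Cohen's kappa; the ratings below are made-up illustrative values on a 0-4 ordinal scale, not data from the study.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical MTA ratings (Scheltens' scale, 0-4) for ten scans.
radiologist = [0, 1, 2, 2, 3, 1, 0, 4, 2, 3]
automatic   = [0, 1, 2, 3, 3, 1, 1, 4, 2, 2]

# Linear weights penalise larger disagreements more, suiting ordinal scales.
kappa_w = cohen_kappa_score(radiologist, automatic, weights="linear")
print(f"weighted kappa: {kappa_w:.2f}")
```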

Place, publisher, year, edition, pages
Elsevier, 2019
Keywords
Atrophy, Visual ratings, Machine learning, MRI, Neuroimaging, Radiology
National Category
Neurosciences
Identifiers
urn:nbn:se:kth:diva-261348 (URN)10.1016/j.nicl.2019.101872 (DOI)000485804400063 ()31154242 (PubMedID)2-s2.0-85066258366 (Scopus ID)
Astaraki, M., Wang, C., Buizza, G., Toma-Dasu, I., Lazzeroni, M. & Smedby, Ö. (2019). Early survival prediction in non-small cell lung cancer from PET/CT images using an intra-tumor partitioning method. Physica Medica, 60, 58-65
Early survival prediction in non-small cell lung cancer from PET/CT images using an intra-tumor partitioning method
2019 (English) In: Physica Medica, ISSN 1120-1797, E-ISSN 1724-191X, Vol. 60, p. 58-65. Article in journal (Refereed), Published
Abstract [en]

Purpose: To explore the prognostic and predictive value of a novel quantitative feature set describing intra-tumor heterogeneity in patients with lung cancer treated with concurrent and sequential chemoradiotherapy. Methods: Longitudinal PET-CT images of 30 patients with non-small cell lung cancer were analysed. To describe tumor cell heterogeneity, the tumors were partitioned into one to ten concentric regions depending on their sizes, and, for each region, the change in average intensity between the two scans was calculated for PET and CT images separately to form the proposed feature set. To validate the prognostic value of the proposed method, radiomics analysis was performed and a combination of the proposed novel feature set and the classic radiomic features was evaluated. A feature selection algorithm was utilized to identify the optimal features, and a linear support vector machine was trained for overall survival prediction, evaluated in terms of the area under the receiver operating characteristic curve (AUROC). Results: The proposed novel feature set was found to be prognostic and even outperformed the radiomics approach with a significant difference (AUROC_SALoP = 0.90 vs. AUROC_radiomics = 0.71) when feature selection was not employed, whereas with feature selection, a combination of the novel feature set and radiomics led to the highest prognostic values. Conclusion: A novel feature set designed for capturing intra-tumor heterogeneity was introduced. Judging by their prognostic power, the proposed features have promising potential for early survival prediction.
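
A minimal sketch of the intra-tumor partitioning idea: bin voxels of the tumor mask into concentric shells via the distance transform, then record the change in mean intensity per shell between two co-registered scans. The array names and the shell count are placeholders; the paper uses one to ten regions depending on tumor size.

```python
import numpy as np
from scipy import ndimage as ndi

def concentric_region_deltas(mask, scan1, scan2, n_regions=4):
    """Mean-intensity change per concentric shell of the tumor mask."""
    depth = ndi.distance_transform_edt(mask)   # 0 outside, grows toward core
    shells = np.ceil(depth / depth.max() * n_regions).astype(int)  # 1..n inside
    return np.array([scan2[shells == r].mean() - scan1[shells == r].mean()
                     for r in range(1, n_regions + 1)])
```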

Place, publisher, year, edition, pages
Elsevier, 2019
Keywords
Survival prediction, Treatment response, Radiomics, Tumor heterogeneity
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:kth:diva-251338 (URN)10.1016/j.ejmp.2019.03.024 (DOI)000464560200009 ()31000087 (PubMedID)2-s2.0-85063364742 (Scopus ID)
Astaraki, M., Wang, C., Buizza, G., Toma-Dasu, I., Lazzeroni, M. & Smedby, Ö. (2019). Early survival prediction in non-small cell lung cancer with PET/CT size aware longitudinal pattern. Paper presented at the 38th Annual Meeting of the European Society for Radiotherapy and Oncology (ESTRO), April 26-30, 2019, Milan, Italy. Radiotherapy and Oncology, 133, S208-S209
Early survival prediction in non-small cell lung cancer with PET/CT size aware longitudinal pattern
2019 (English) In: Radiotherapy and Oncology, ISSN 0167-8140, E-ISSN 1879-0887, Vol. 133, p. S208-S209. Article in journal (Refereed), Published
Keywords
Oncology; Radiology, Nuclear Medicine & Medical Imaging
National Category
Medical and Health Sciences
Identifiers
urn:nbn:se:kth:diva-252991 (URN)10.1016/S0167-8140(19)30826-6 (DOI)000468315601037 ()
Conference
38th Annual Meeting of the European Society for Radiotherapy and Oncology (ESTRO), April 26-30, 2019, Milan, Italy
Mahbod, A., Schaefer, G., Ellinger, I., Ecker, R., Pitiot, A. & Wang, C. (2019). Fusing fine-tuned deep features for skin lesion classification. Computerized Medical Imaging and Graphics, 71, 19-29
Fusing fine-tuned deep features for skin lesion classification
2019 (English) In: Computerized Medical Imaging and Graphics, ISSN 0895-6111, E-ISSN 1879-0771, Vol. 71, p. 19-29. Article in journal (Refereed), Published
Abstract [en]

Malignant melanoma is one of the most aggressive forms of skin cancer. Early detection is important as it significantly improves survival rates. Consequently, accurate discrimination of malignant skin lesions from benign lesions such as seborrheic keratoses or benign nevi is crucial, and accurate computerised classification of skin lesion images is of great interest to support diagnosis. In this paper, we propose a fully automatic computerised method to classify skin lesions from dermoscopic images. Our approach is based on a novel ensemble scheme for convolutional neural networks (CNNs) that combines intra-architecture and inter-architecture network fusion. The proposed method consists of multiple sets of CNNs of different architectures that represent different feature abstraction levels. Each set of CNNs consists of a number of pre-trained networks that have identical architecture but are fine-tuned on dermoscopic skin lesion images with different settings. The deep features of each network are used to train different support vector machine classifiers. Finally, the average prediction probability classification vectors from the different sets are fused to provide the final prediction. Evaluated on the 600 test images of the ISIC 2017 skin lesion classification challenge, the proposed algorithm yields an area under the receiver operating characteristic curve of 87.3% for melanoma classification and of 95.5% for seborrheic keratosis classification, outperforming the top-ranked methods of the challenge while being simpler. The obtained results convincingly demonstrate that our proposed approach represents a reliable and robust method for feature extraction, model fusion and classification of dermoscopic skin lesion images.
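
The fusion step lends itself to a short sketch: one probability-capable SVM per deep feature set, with the predicted probability vectors averaged across sets. The random feature matrices stand in for the CNNs' penultimate-layer activations, and a real run would predict on held-out images rather than the training set.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# One (n_images, n_features) matrix per fine-tuned CNN (placeholders).
feature_sets = [rng.normal(size=(60, 256)) for _ in range(3)]
labels = rng.integers(0, 2, size=60)

# Train one probability-capable SVM per feature set.
classifiers = [SVC(probability=True).fit(X, labels) for X in feature_sets]

# Fuse by averaging the per-classifier prediction probability vectors.
probas = [clf.predict_proba(X) for clf, X in zip(classifiers, feature_sets)]
fused = np.mean(probas, axis=0)
prediction = fused.argmax(axis=1)
```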

Place, publisher, year, edition, pages
Elsevier, 2019
Keywords
Skin cancer; Melanoma; Dermoscopy; Medical image analysis; Deep learning
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:kth:diva-259367 (URN)10.1016/j.compmedimag.2018.10.007 (DOI)000458594700003 ()30458354 (PubMedID)2-s2.0-85056631170 (Scopus ID)
Mahbod, A., Schaefer, G., Wang, C., Ecker, R. & Ellinger, I. (2019). Skin lesion classification using hybrid deep neural networks. In: 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Paper presented at the 44th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 12-17, 2019, Brighton, England (pp. 1229-1233). IEEE
Skin Lesion Classification Using Hybrid Deep Neural Networks
2019 (English) In: 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2019, p. 1229-1233. Conference paper, Published paper (Refereed)
Abstract [en]

Skin cancer is one of the major types of cancer, with an increasing incidence over the past decades. Accurately diagnosing skin lesions to discriminate between benign and malignant lesions is crucial to ensure appropriate patient treatment. While there are many computerised methods for skin lesion classification, convolutional neural networks (CNNs) have been shown to be superior to classical methods. In this work, we propose a fully automatic computerised method for skin lesion classification which employs optimised deep features from a number of well-established CNNs and from different abstraction levels. We use three pre-trained deep models, namely AlexNet, VGG16 and ResNet-18, as deep feature generators. The extracted features are then used to train support vector machine classifiers. In a final stage, the classifier outputs are fused to obtain a classification. Evaluated on the 150 validation images from the ISIC 2017 classification challenge, the proposed method is shown to achieve very good classification performance, yielding an area under the receiver operating characteristic curve of 83.83% for melanoma classification and of 97.55% for seborrheic keratosis classification.
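
Using a pre-trained network as a fixed deep feature generator, as described, can be sketched with torchvision (recent versions; the weights enum and the random input batch are assumptions for the example):

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Pre-trained ResNet-18 with its classification head replaced by identity,
# so a forward pass returns the 512-dimensional penultimate features.
resnet = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
resnet.fc = nn.Identity()
resnet.eval()

with torch.no_grad():
    batch = torch.randn(4, 3, 224, 224)  # stand-in for preprocessed lesion images
    deep_features = resnet(batch)        # shape: (4, 512), input to the SVMs
```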

Place, publisher, year, edition, pages
IEEE, 2019
Series
International Conference on Acoustics, Speech and Signal Processing (ICASSP), ISSN 1520-6149
National Category
Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:kth:diva-261065 (URN)10.1109/ICASSP.2019.8683352 (DOI)000482554001092 ()2-s2.0-85068988327 (Scopus ID)978-1-4799-8131-1 (ISBN)
Conference
44th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 12-17, 2019, Brighton, England
Mahbod, A., Chowdhury, M., Smedby, Ö. & Wang, C. (2018). Automatic brain segmentation using artificial neural networks with shape context. Pattern Recognition Letters, 101, 74-79
Automatic brain segmentation using artificial neural networks with shape context
2018 (English) In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 101, p. 74-79. Article in journal (Refereed), Published
Abstract [en]

Segmenting brain tissue from MR scans is thought to be highly beneficial for brain abnormality diagnosis, prognosis monitoring, and treatment evaluation. Many automatic or semi-automatic methods have been proposed in the literature to reduce the need for user intervention, but the level of accuracy in most cases is still inferior to that of manual segmentation. We propose a new brain segmentation method that integrates volumetric shape models into a supervised artificial neural network (ANN) framework. This is done by running a preliminary level-set based statistical shape fitting process guided by the image intensity and then passing the signed distance maps of several key structures to the ANN as feature channels, in addition to the conventional spatial-based and intensity-based image features. This so-called shape context information is expected to help the ANN learn local adaptive classification rules instead of applying universal rules directly on the local appearance features. The proposed method was tested on a public dataset available within the open MICCAI grand challenge (MRBrainS13). The obtained average Dice coefficients were 84.78%, 88.47%, 82.76%, 95.37% and 97.73% for gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), brain (WM + GM) and intracranial volume, respectively. Compared with other methods tested on the same dataset, the proposed method achieved competitive results with comparatively shorter training time.
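
The "shape context" channels described here, signed distance maps of fitted structures, are straightforward to construct; a sketch with a toy mask in place of the level-set fit (the sign convention is a choice, not taken from the paper):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Euclidean distance to the mask boundary: negative inside, positive outside."""
    mask = mask.astype(bool)
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

# Toy stand-ins for the MR volume and one fitted key structure.
rng = np.random.default_rng(0)
volume = rng.normal(size=(16, 16, 16))
fitted = np.zeros(volume.shape, dtype=bool)
fitted[4:12, 4:12, 4:12] = True

# Stack intensity and shape-context channels into per-voxel features.
features = np.stack([volume, signed_distance_map(fitted)], axis=-1)
```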

Place, publisher, year, edition, pages
Elsevier, 2018
National Category
Medical Image Processing
Identifiers
urn:nbn:se:kth:diva-219889 (URN)10.1016/j.patrec.2017.11.016 (DOI)000418101400011 ()2-s2.0-85036471005 (Scopus ID)
Wang, C. & Smedby, Ö. (2018). Automatic whole heart segmentation using deep learning and shape context. In: 8th International Workshop on Statistical Atlases and Computational Models of the Heart, STACOM 2017, Held in Conjunction with MICCAI 2017. Paper presented at the 8th International Workshop on Statistical Atlases and Computational Models of the Heart, STACOM 2017, Held in Conjunction with MICCAI 2017, Quebec City, Canada, 10 September 2017 through 14 September 2017 (pp. 242-249). Springer, 10663
Automatic whole heart segmentation using deep learning and shape context
2018 (English) In: 8th International Workshop on Statistical Atlases and Computational Models of the Heart, STACOM 2017, Held in Conjunction with MICCAI 2017, Springer, 2018, Vol. 10663, p. 242-249. Conference paper, Published paper (Refereed)
Abstract [en]

To assist 3D cardiac image analysis, we propose an automatic whole heart segmentation method using a deep learning framework combined with shape context information encoded in volumetric shape models. The proposed processing pipeline consists of three major steps: scout segmentation with orthogonal 2D U-nets, shape context estimation, and refined segmentation with a U-net and shape context. The proposed method was evaluated using the MMWHS challenge data. Two sets of networks were trained separately for contrast-enhanced CT and MRI. On the 20 training datasets, using 5-fold cross-validation, the average Dice coefficients for the left ventricle, the right ventricle, the left atrium, the right atrium and the myocardium of the left ventricle were 0.895, 0.795, 0.847, 0.821 and 0.807 for MRI, and 0.935, 0.825, 0.908, 0.881 and 0.879 for CT, respectively. Further improvement may be possible given more training data or a more advanced data augmentation strategy.
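
The per-structure Dice coefficients reported above measure volume overlap between predicted and reference masks; for completeness, the standard definition as code:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap 2|A ∩ B| / (|A| + |B|) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())
```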

Place, publisher, year, edition, pages
Springer, 2018
Series
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), ISSN 0302-9743 ; 10663
Keywords
Deep learning, Fully convolutional network, Heart segmentation, Shape context, Statistical shape model
National Category
Medical and Health Sciences
Identifiers
urn:nbn:se:kth:diva-225494 (URN)10.1007/978-3-319-75541-0_26 (DOI)2-s2.0-85044467877 (Scopus ID)9783319755403 (ISBN)
Conference
8th International Workshop on Statistical Atlases and Computational Models of the Heart, STACOM 2017, Held in Conjunction with MICCAI 2017, Quebec City, Canada, 10 September 2017 through 14 September 2017
Funder
Swedish Heart Lung Foundation, 2016-0609Swedish Research Council, 2014-6153
Identifiers
ORCID iD: orcid.org/0000-0002-0442-3524