kth.se Publications
1 - 11 of 11
  • 1.
    Batool, Nazre
    et al.
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Medical Imaging.
    Chowdhury, Manish
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Medical Imaging.
    Smedby, Örjan
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Medical Imaging.
    Moreno, Rodrigo
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Medical Imaging.
    Estimation of trabecular bone thickness in gray scale: a validation study. 2017. In: International Journal of Computer Assisted Radiology and Surgery, ISSN 1861-6410, Vol. 12, no. Supplement 1. Article in journal (Refereed)
  • 2. Bora, K.
    et al.
    Chowdhury, Manish
    KTH, School of Technology and Health (STH).
    Mahanta, L. B.
    Kundu, M. K.
    Das, A. K.
    Pap smear image classification using convolutional neural network. 2016. In: ACM International Conference Proceeding Series, Association for Computing Machinery, 2016. Conference paper (Refereed)
    Abstract [en]

    This article presents the results of a comprehensive study on deep learning based Computer Aided Diagnostic techniques for classification of cervical dysplasia using Pap smear images. All the experiments are performed on a real indigenous image database containing 1611 images, generated at two diagnostic centres. Focus is given to constructing an effective feature vector which can perform multiple levels of representation of the features hidden in a Pap smear image. For this purpose, a deep Convolutional Neural Network is used, followed by feature selection using an unsupervised technique with the Maximal Information Compression Index as similarity measure. Finally, the performance of two classifiers, namely Least Squares Support Vector Machine (LSSVM) and Softmax Regression, is monitored, and classifier selection is performed based on five measures along with five-fold cross-validation. Output classes reflect the established Bethesda system of classification for identifying pre-cancerous and cancerous lesions of the cervix. The proposed system is also compared with two existing conventional systems and tested on a publicly available database. Experimental results and comparisons show that the proposed system performs efficiently in Pap smear classification.
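
The pipeline described in the entry above combines CNN features, unsupervised feature selection with the Maximal Information Compression Index (MICI), and a softmax/LSSVM classifier under five-fold cross-validation. The sketch below illustrates only the MICI-based redundancy filtering and the softmax stage on precomputed feature vectors; the CNN feature extractor, the paper's exact selection strategy and the LSSVM variant are not reproduced, and the threshold value and toy data are placeholders.

```python
# Minimal sketch: MICI-based redundancy filtering of precomputed CNN features,
# followed by softmax (multinomial logistic) classification with 5-fold CV.
# The feature matrix X and labels y would normally come from the CNN stage.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def mici(x, y):
    """Maximal Information Compression Index of two feature columns:
    the smaller eigenvalue of their 2x2 covariance matrix (near 0 means
    one feature is largely redundant given the other)."""
    cov = np.cov(x, y)
    return np.linalg.eigvalsh(cov)[0]


def select_features(X, threshold=0.05):
    """Greedy redundancy filter: drop a feature whenever its MICI with an
    already-kept feature falls below `threshold` (placeholder value)."""
    kept = []
    for j in range(X.shape[1]):
        if all(mici(X[:, j], X[:, k]) > threshold for k in kept):
            kept.append(j)
    return kept


# Toy stand-in for CNN feature vectors of Pap smear images (2 classes).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)

cols = select_features(X)
clf = LogisticRegression(max_iter=1000)          # softmax regression stage
scores = cross_val_score(clf, X[:, cols], y, cv=5)
print(f"kept {len(cols)} features, 5-fold accuracy {scores.mean():.3f}")
```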

  • 3. Bora, Kangkana
    et al.
    Chowdhury, Manish
    KTH, School of Technology and Health (STH).
    Mahanta, Lipi B.
    Kundu, Malay Kumar
    Das, Anup Kumar
    Automated classification of Pap smear images to detect cervical dysplasia. 2017. In: Computer Methods and Programs in Biomedicine, ISSN 0169-2607, E-ISSN 1872-7565, Vol. 138, p. 31-47. Article in journal (Refereed)
    Abstract [en]

    Background and objectives: The present study proposes an intelligent system for automatic categorization of Pap smear images to detect cervical dysplasia, which has remained an open problem for the last five decades. Methods: The classification technique is based on shape, texture and color features. It classifies the cervical dysplasia into two-level (normal and abnormal) and three-level (Negative for Intraepithelial Lesion or Malignancy, Low-grade Squamous Intraepithelial Lesion and High-grade Squamous Intraepithelial Lesion) classes, reflecting the established Bethesda system of classification used for diagnosis of cancerous or precancerous lesions of the cervix. The system is evaluated on two generated databases obtained from two diagnostic centers, one containing 1610 single cervical cells and the other 1320 complete smear level images. The main objective of this database generation is to categorize the images according to the Bethesda system of classification, a task that requires considerable training and expertise. The system is also trained and tested on the benchmark Herlev University database, which is publicly available. In this contribution a new segmentation technique has also been proposed for extracting shape features. The Ripplet Type I transform, histogram first-order statistics and the Gray Level Co-occurrence Matrix have been used for color and texture features. To improve classification results, an ensemble method is used which integrates the decisions of three classifiers. Assessments are performed using 5-fold cross-validation. Results: Extended experiments reveal that the proposed system can successfully classify Pap smear images, performing significantly better when compared with other existing methods. Conclusion: This type of automated cancer classifier will be of particular help in early detection of cancer.
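
Two ingredients of the entry above lend themselves to a short sketch: the Gray Level Co-occurrence Matrix plus first-order histogram statistics used for texture, and the ensemble that fuses three classifiers. The fragment below uses scikit-image and scikit-learn stand-ins; the Ripplet Type I transform, the proposed segmentation and the paper's exact classifier trio are omitted.

```python
# Minimal sketch: GLCM + first-order histogram features for a grayscale cell
# image, and a majority-vote ensemble of three generic classifiers.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier


def texture_features(img_u8):
    """img_u8: 2-D uint8 grayscale image of a cell or smear region."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p).mean()
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    # First-order statistics of the intensity histogram.
    first_order = [img_u8.mean(), img_u8.std(),
                   float(np.percentile(img_u8, 10)),
                   float(np.percentile(img_u8, 90))]
    return np.array(glcm_feats + first_order)


def make_ensemble():
    """Soft majority vote over three heterogeneous classifiers
    (stand-ins for the ensemble used in the paper)."""
    return VotingClassifier(
        estimators=[("svm", SVC(probability=True)),
                    ("rf", RandomForestClassifier(n_estimators=100)),
                    ("knn", KNeighborsClassifier(n_neighbors=5))],
        voting="soft")
```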

  • 4.
    Chowdhury, Manish
    et al.
    KTH, School of Technology and Health (STH).
    Jörgens, Daniel
    KTH, School of Technology and Health (STH), Medical Engineering, Medical Image Processing and Visualization.
    Wang, Chunliang
    KTH, School of Technology and Health (STH), Medical Engineering, Medical Image Processing and Visualization. KTH, School of Technology and Health (STH), Medical Engineering, Medical Imaging.
    Smedby, Örjan
    KTH, School of Technology and Health (STH), Medical Engineering, Medical Image Processing and Visualization.
    Moreno, Rodrigo
    KTH, School of Technology and Health (STH), Medical Engineering, Medical Image Processing and Visualization.
    Segmentation of Cortical Bone using Fast Level Sets. 2017. In: Medical Imaging 2017: Image Processing / [ed] Styner, M. A.; Angelini, E. D., SPIE - International Society for Optical Engineering, 2017, article id UNSP 1013327. Conference paper (Refereed)
    Abstract [en]

    Cortical bone plays a major role in the mechanical competence of bone. The analysis of cortical bone requires accurate segmentation methods. Level set methods are among the state of the art for segmenting medical images. However, traditional implementations of this method are computationally expensive. This drawback was recently tackled through the so-called coherent propagation extension of the classical algorithm, which has decreased computation times dramatically. In this study, we assess the potential of this technique for segmenting cortical bone in interactive time in 3D images acquired through High Resolution peripheral Quantitative Computed Tomography (HR-pQCT). The obtained segmentations are used to estimate cortical thickness and cortical porosity of the investigated images. Cortical thickness and cortical porosity are computed using sphere fitting and mathematical morphological operations, respectively. Qualitative comparison between the segmentations of our proposed algorithm and a previously published approach on six image volumes reveals superior smoothness properties of the level set approach. While the proposed method yields similar results to previous approaches in regions where the boundary between trabecular and cortical bone is well defined, it yields more stable segmentations in challenging regions. This results in more stable estimation of parameters of cortical bone. The proposed technique takes a few seconds to compute, which makes it suitable for clinical settings.
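
The entry above derives cortical porosity from the level-set segmentation using mathematical morphology. The fragment below shows one plausible morphological reading of that step on a binary cortical mask: close the mask with a ball structuring element to fill intracortical pores, then take the filled-in fraction as porosity. The level-set segmentation and the sphere-fitting thickness estimate are not reproduced, the ball radius is a placeholder, and these are not necessarily the exact operations used in the paper.

```python
# Minimal sketch: cortical porosity from a binary cortical-bone mask by
# morphological closing (pores = voxels filled in by the closing).
import numpy as np
from scipy import ndimage
from skimage.morphology import ball


def cortical_porosity(cortex_mask, radius=5):
    """cortex_mask: 3-D boolean array, True on segmented cortical bone.
    radius: closing-ball radius in voxels (placeholder, resolution dependent)."""
    envelope = ndimage.binary_closing(cortex_mask, structure=ball(radius))
    pores = envelope & ~cortex_mask
    return pores.sum() / envelope.sum()
```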

  • 5.
    Chowdhury, Manish
    et al.
    KTH, School of Technology and Health (STH).
    Klintström, Benjamin
    KTH, School of Technology and Health (STH). Linköping University, Sweden.
    Klintström, E.
    Smedby, Örjan
    KTH, School of Technology and Health (STH), Medical Engineering, Medical Image Processing and Visualization. Linköping University, Sweden.
    Moreno, Rodrigo
    KTH, School of Technology and Health (STH), Medical Engineering, Medical Image Processing and Visualization.
    Granulometry-based trabecular bone segmentation. 2017. In: 20th Scandinavian Conference on Image Analysis, SCIA 2017, Springer, 2017, Vol. 10270, p. 100-108. Conference paper (Refereed)
    Abstract [en]

    The accuracy of the analyses for studying the three-dimensional trabecular bone microstructure relies on the quality of the segmentation between trabecular bone and bone marrow. Such segmentation is challenging for images from computed tomography modalities that can be used in vivo due to their low contrast and resolution. For this purpose, we propose in this paper a granulometry-based segmentation method. In a first step, the trabecular thickness is estimated by using the granulometry in gray scale, which is generated by applying the opening morphological operation with ball-shaped structuring elements of different diameters. This process mimics the traditional sphere-fitting method used for estimating trabecular thickness in segmented images. The residual obtained after computing the granulometry is compared to the original gray scale value in order to obtain a measurement of how likely it is that a voxel belongs to trabecular bone. A threshold is applied to obtain the final segmentation. Six histomorphometric parameters were computed on 14 segmented bone specimens imaged with cone-beam computed tomography (CBCT), considering micro-computed tomography (micro-CT) as the ground truth. Otsu's thresholding and Automated Region Growing (ARG) segmentation methods were used for comparison. For three parameters (Tb.N, Tb.Th and BV/TV), the proposed segmentation algorithm yielded the highest correlations with micro-CT, while for the remaining three (Tb.Nd, Tb.Tm and Tb.Sp), its performance was comparable to ARG. The method also yielded the strongest average correlation (0.89). When Tb.Th was computed directly from the gray scale images, the correlation was superior to the binary-based methods. The results suggest that the proposed algorithm can be used for studying trabecular bone in vivo through CBCT.
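
The granulometry step in the entry above is concrete enough to sketch: gray-scale openings with ball structuring elements of increasing radius give a per-voxel estimate of local trabecular thickness, and the residual between the original image and an opening gives a soft trabecular-bone membership that is then thresholded. The code below follows that description with SciPy/scikit-image primitives; the radii, the 0.5 "fit" criterion and the threshold are placeholders, not values from the paper.

```python
# Minimal sketch of granulometry-based trabecular bone segmentation:
# gray-scale openings with growing ball structuring elements, a per-voxel
# thickness estimate from the granulometry, and a residual-based threshold.
import numpy as np
from scipy import ndimage
from skimage.morphology import ball


def granulometry_segment(volume, radii=(1, 2, 3, 4, 5), threshold=0.5):
    """volume: 3-D gray-scale array (float), brighter = more bone.
    radii: ball radii in voxels (placeholders).
    Returns (thickness_map, bone_mask)."""
    openings = np.stack([ndimage.grey_opening(volume, footprint=ball(r))
                         for r in radii])           # shape: (len(radii), *vol)

    # Largest ball that still "fits" at each voxel: count the radii whose
    # opening retains at least half of the original intensity there
    # (openings shrink monotonically with radius).
    fits = openings >= 0.5 * volume
    largest_idx = fits.sum(axis=0)                  # 0 .. len(radii)
    diameters = np.array([0] + [2 * r + 1 for r in radii])
    thickness_map = diameters[largest_idx].astype(float)

    # Compare the residual of the smallest opening with the original gray
    # value: voxels that keep most of their intensity are likely bone.
    residual = volume - openings[0]
    membership = 1.0 - residual / (volume + 1e-9)
    bone_mask = membership > threshold
    return thickness_map, bone_mask
```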

  • 6.
    Chowdhury, Manish
    et al.
    KTH, School of Technology and Health (STH), Medical Engineering, Medical Image Processing and Visualization.
    Rota Bulò, S.
    Moreno, Rodrigo
    KTH, School of Technology and Health (STH), Medical Engineering, Medical Image Processing and Visualization.
    Kundu, M.K.
    Smedby, Örjan
    KTH, School of Technology and Health (STH), Medical Engineering, Medical Image Processing and Visualization.
    An Efficient Radiographic Image Retrieval System Using Convolutional Neural Network. 2016. In: 2016 23rd International Conference on Pattern Recognition (ICPR), Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 3134-3139, article id 7900116. Conference paper (Refereed)
    Abstract [en]

    Content-Based Medical Image Retrieval (CBMIR) is an important research field in the context of medical data management. In this paper we propose a novel CBMIR system for the automatic retrieval of radiographic images. Our approach employs a Convolutional Neural Network (CNN) to obtain high-level image representations that enable a coarse retrieval of images that correspond to a query image. The retrieved set of images is refined via a non-parametric estimation of putative classes for the query image, which are used to filter out potential outliers in favour of more relevant images belonging to those classes. The refined set of images is finally re-ranked using the Edge Histogram Descriptor, i.e. a low-level edge-based image descriptor that captures finer similarities between the retrieved set of images and the query image. To improve the computational efficiency of the system, we employ dimensionality reduction via Principal Component Analysis (PCA). Experiments were carried out to evaluate the effectiveness of the proposed system on medical data from the “Image Retrieval in Medical Applications” (IRMA) benchmark database. The obtained results show the effectiveness of the proposed CBMIR system in the field of medical image retrieval.
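
The retrieval pipeline in the entry above has three stages: CNN features compressed with PCA for a coarse ranking, class-based filtering of the shortlist, and re-ranking with an edge histogram descriptor. A condensed sketch of the first and last stages follows; the CNN feature matrix is assumed precomputed, the class-filtering stage is omitted, and the simple Sobel-orientation histogram is only a stand-in for the MPEG-7 Edge Histogram Descriptor.

```python
# Minimal sketch: coarse retrieval on PCA-compressed CNN features, then
# re-ranking of the shortlist with a simple edge-orientation histogram
# (a stand-in for the MPEG-7 Edge Histogram Descriptor).
import numpy as np
from scipy import ndimage
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors


def edge_histogram(img, bins=8):
    """Histogram of Sobel gradient orientations, weighted by magnitude."""
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)


def retrieve(query_feat, query_img, db_feats, db_imgs, k=20, top=5):
    """db_feats: precomputed CNN features of the database images."""
    pca = PCA(n_components=64).fit(db_feats)
    nn = NearestNeighbors(metric="cosine").fit(pca.transform(db_feats))
    _, idx = nn.kneighbors(pca.transform(query_feat[None]), n_neighbors=k)
    shortlist = idx[0]

    # Re-rank the coarse shortlist by edge-histogram distance to the query.
    q_hist = edge_histogram(query_img)
    dists = [np.abs(edge_histogram(db_imgs[i]) - q_hist).sum() for i in shortlist]
    return shortlist[np.argsort(dists)][:top]
```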

  • 7.
    Hussain, Elima
    et al.
    Inst Adv Study Sci & Technol, Cent Computat & Numer Sci Div, Gauhati, Assam, India..
    Mahanta, Lipi B.
    Inst Adv Study Sci & Technol, Cent Computat & Numer Sci Div, Gauhati, Assam, India..
    Das, Chandana Ray
    Guwahati Med Coll & Hosp, Gauhati, Assam, India..
    Choudhury, Manjula
    Guwahati Med Coll & Hosp, Gauhati, Assam, India..
    Chowdhury, Manish
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH).
    A shape context fully convolutional neural network for segmentation and classification of cervical nuclei in Pap smear images. 2020. In: Artificial Intelligence in Medicine, ISSN 0933-3657, E-ISSN 1873-2860, Vol. 107, article id 101897. Article in journal (Refereed)
    Abstract [en]

    Pap smear is often employed as a screening test for diagnosing cervical pre-cancerous and cancerous lesions. Accurate identification of dysplastic changes amongst the cervical cells in a Pap smear image is thus essential for rapid diagnosis and prognosis. Manual pathological observations used in clinical practice require exhaustive analysis of thousands of cell nuclei in a whole slide image to visualize the dysplastic nuclear changes, which makes the process tedious and time-consuming. Automated nuclei segmentation and classification methods exist but find it challenging to overcome issues like nuclear intra-class variability and clustered nuclei separation. To address such challenges, we put forward an instance segmentation and classification framework for Pap smear images built on a Unet architecture by adding residual blocks, densely connected blocks and a fully convolutional layer as a bottleneck between the encoder-decoder blocks. The convolutional layers in the standard Unet have been replaced by densely connected blocks to ensure the feature reusability property, while the introduction of residual blocks attempts to make the network converge more rapidly. The framework provides simultaneous nuclei instance segmentation and also predicts the type of nucleus class as belonging to the normal or abnormal class from the smear images. It works by assigning pixel-wise labels to individual nuclei in a whole slide image, which enables identifying multiple nuclei belonging to the same or different classes as individual distinct instances. Introduction of a joint loss function in the framework overcomes some trivial cell-level issues in clustered nuclei separation. To increase the robustness of the overall framework, the proposed model is preceded by a stacked auto-encoder based shape representation learning model. The proposed model outperforms two state-of-the-art deep learning models, Unet and Mask_RCNN, with an average Zijdenbos similarity index of 97% for segmentation along with a binary classification accuracy of 98.8%. Experiments on hospital-based datasets using liquid-based cytology and conventional Pap smear methods, along with the benchmark Herlev datasets, proved the superiority of the proposed method over the Unet and Mask_RCNN models in terms of the evaluation metrics under consideration.
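
The entry above modifies a Unet by replacing its plain convolution stacks with densely connected blocks and adding residual blocks around a fully convolutional bottleneck. The PyTorch fragment below sketches only the two building blocks to make the idea concrete; channel widths are placeholders, and the full encoder-decoder, the shape-representation auto-encoder and the joint loss are not reproduced.

```python
# Minimal sketch of the two building blocks: a residual block (identity
# shortcut around two convolutions) and a densely connected block (each
# layer receives the concatenation of all previous feature maps).
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels))

    def forward(self, x):
        return torch.relu(x + self.body(x))   # identity shortcut


class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth=16, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, 3, padding=1)))
            ch += growth                        # inputs grow by concatenation
        self.out_channels = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)
```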

  • 8. Kundu, M. K.
    et al.
    Chowdhury, Manish
    KTH, School of Technology and Health (STH).
    Das, S.
    Interactive radiographic image retrieval system. 2017. In: Computer Methods and Programs in Biomedicine, ISSN 0169-2607, E-ISSN 1872-7565, Vol. 139, p. 209-220. Article in journal (Refereed)
    Abstract [en]

    Background and Objective: Content based medical image retrieval (CBMIR) systems enable fast diagnosis through quantitative assessment of the visual information and have been an active research topic over the past few decades. Most of the state-of-the-art CBMIR systems suffer from various problems: they are computationally expensive due to the usage of high-dimensional feature vectors and complex classifier/clustering schemes, and they are unable to properly handle the “semantic gap” and the high intra-class versus inter-class variability problem of medical image databases (like radiographic image databases). This yields an exigent demand for developing highly effective and computationally efficient retrieval systems. Methods: We propose a novel interactive two-stage CBMIR system for a diverse collection of medical radiographic images. Initially, Pulse Coupled Neural Network based shape features are used to find the most probable (similar) image classes using a novel “similarity positional score” mechanism. This is followed by retrieval using Non-subsampled Contourlet Transform based texture features, considering only the images of the pre-identified classes. The maximal information compression index is used for unsupervised feature selection to achieve better results. To reduce the semantic gap problem, the proposed system uses a novel fuzzy index based relevance feedback mechanism by incorporating the subjectivity of human perception in an analytic manner. Results: Extensive experiments were carried out to evaluate the effectiveness of the proposed CBMIR system on a subset of the Image Retrieval in Medical Applications (IRMA)-2009 database consisting of 10,902 labeled radiographic images of 57 different modalities. We obtained an overall average precision of around 98% after only 2–3 iterations of the relevance feedback mechanism. We assessed the results by comparisons with some of the state-of-the-art CBMIR systems for radiographic images. Conclusions: Unlike most of the existing CBMIR systems, in the proposed two-stage hierarchical framework, the main importance is given to constructing an efficient and compact feature vector representation, search-space reduction, and handling the “semantic gap” problem effectively, without compromising the retrieval performance. Experimental results and comparisons show that the proposed system performs efficiently in the radiographic medical image retrieval field.
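
The entry above searches in two stages: shape features first shortlist the most probable image classes, and texture features are then matched only against images of those classes. The sketch below shows a generic version of that shortlist-then-refine idea with plain nearest-neighbour voting; the paper's PCNN shape features, NSCT texture features, "similarity positional score" and fuzzy relevance feedback are its own mechanisms and are not reproduced here.

```python
# Minimal sketch of a generic two-stage retrieval: (1) shortlist classes by
# nearest-neighbour voting on coarse (shape) features, (2) rank only images
# of those classes by distance on fine (texture) features. Both feature
# matrices are assumed to be precomputed elsewhere.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def two_stage_retrieval(q_shape, q_texture, shape_feats, texture_feats,
                        labels, n_classes_kept=3, k=15, top=10):
    labels = np.asarray(labels)

    # Stage 1: vote for the most probable classes among the k nearest
    # neighbours in the coarse shape-feature space.
    nn = NearestNeighbors(n_neighbors=k).fit(shape_feats)
    _, idx = nn.kneighbors(q_shape[None])
    classes, votes = np.unique(labels[idx[0]], return_counts=True)
    shortlist = classes[np.argsort(votes)[::-1][:n_classes_kept]]

    # Stage 2: rank only the images of the shortlisted classes using the
    # finer texture features.
    candidates = np.flatnonzero(np.isin(labels, shortlist))
    dists = np.linalg.norm(texture_feats[candidates] - q_texture, axis=1)
    return candidates[np.argsort(dists)][:top]
```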

  • 9.
    Mahbod, Amirreza
    et al.
    KTH, School of Technology and Health (STH), Medical Engineering, Medical Image Processing and Visualization.
    Chowdhury, Manish
    KTH, School of Technology and Health (STH), Medical Engineering, Medical Image Processing and Visualization.
    Smedby, Örjan
    KTH, School of Technology and Health (STH), Medical Engineering, Medical Image Processing and Visualization.
    Wang, Chunliang
    KTH, School of Technology and Health (STH), Medical Engineering, Medical Image Processing and Visualization.
    Automatic brain segmentation using artificial neural networks with shape context. 2018. In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 101, p. 74-79. Article in journal (Refereed)
    Abstract [en]

    Segmenting brain tissue from MR scans is thought to be highly beneficial for brain abnormality diagnosis, prognosis monitoring, and treatment evaluation. Many automatic or semi-automatic methods have been proposed in the literature in order to reduce the requirement of user intervention, but the level of accuracy in most cases is still inferior to that of manual segmentation. We propose a new brain segmentation method that integrates volumetric shape models into a supervised artificial neural network (ANN) framework. This is done by running a preliminary level-set based statistical shape fitting process guided by the image intensity and then passing the signed distance maps of several key structures to the ANN as feature channels, in addition to the conventional spatial-based and intensity-based image features. The so-called shape context information is expected to help the ANN learn local adaptive classification rules instead of applying universal rules directly on the local appearance features. The proposed method was tested on a public dataset available within the open MICCAI grand challenge (MRBrainS13). The obtained average Dice coefficients were 84.78%, 88.47%, 82.76%, 95.37% and 97.73% for gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), brain (WM + GM) and intracranial volume, respectively. Compared with other methods tested on the same dataset, the proposed method achieved competitive results with comparatively shorter training time.
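
The core idea of the entry above is to append "shape context" channels, i.e. signed distance maps of key structures from a preliminary level-set shape fit, to the usual intensity and spatial features before training a voxel-wise neural network classifier. The sketch below shows that feature construction with a small scikit-learn MLP as a stand-in; the level-set shape fitting is assumed to have already produced the structure masks, and the network architecture is not the one from the paper.

```python
# Minimal sketch: per-voxel feature vectors = intensity + normalized voxel
# coordinates + signed distance maps of pre-fitted key structures, fed to a
# small MLP classifier (stand-in for the ANN used in the paper).
import numpy as np
from scipy import ndimage
from sklearn.neural_network import MLPClassifier


def signed_distance(mask):
    """Positive outside the structure, negative inside."""
    return (ndimage.distance_transform_edt(~mask)
            - ndimage.distance_transform_edt(mask))


def voxel_features(volume, structure_masks):
    """volume: 3-D MR image; structure_masks: list of boolean masks from a
    preliminary shape-fitting step (assumed given)."""
    coords = np.indices(volume.shape).astype(float)
    coords /= np.array(volume.shape).reshape(3, 1, 1, 1)   # normalize to [0,1]
    channels = ([volume] + list(coords)
                + [signed_distance(m) for m in structure_masks])
    return np.stack([c.ravel() for c in channels], axis=1)  # (n_voxels, n_feats)


def train(volume, structure_masks, label_volume):
    X = voxel_features(volume, structure_masks)
    y = label_volume.ravel()
    clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=200)
    return clf.fit(X, y)
```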

  • 10.
    Mondal, Jaydeb
    et al.
    Indian Stat Inst, Machine Intelligence Unit, 203 BT Rd, Kolkata 700108, India..
    Kundu, Malay Kumar
    Indian Stat Inst, Machine Intelligence Unit, 203 BT Rd, Kolkata 700108, India..
    Das, Sudeb
    Videonet Technol Pvt Ltd, Salt Lake City 700091, UT, India..
    Chowdhury, Manish
    KTH, School of Technology and Health (STH).
    Video shot boundary detection using multiscale geometric analysis of NSCT and least squares support vector machine. 2018. In: Multimedia Tools and Applications, ISSN 1380-7501, E-ISSN 1573-7721, Vol. 77, no. 7, p. 8139-8161. Article in journal (Refereed)
    Abstract [en]

    The fundamental step in video content analysis is the temporal segmentation of a video stream into shots, which is known as Shot Boundary Detection (SBD). The sudden transition from one shot to another is known as an Abrupt Transition (AT), whereas if the transition occurs over several frames, it is called a Gradual Transition (GT). A unified framework for the simultaneous detection of both AT and GT has been proposed in this article. The proposed method uses the multiscale geometric analysis of the Non-Subsampled Contourlet Transform (NSCT) for feature extraction from the video frames. The dimension of the feature vectors generated using NSCT is reduced through principal component analysis to simultaneously achieve computational efficiency and performance improvement. Finally, a cost-efficient Least Squares Support Vector Machine (LS-SVM) classifier is used to classify the frames of a given video sequence, based on the feature vectors, into No-Transition (NT), AT and GT classes. A novel efficient method of training set generation is also proposed which not only reduces the training time but also improves the performance. The performance of the proposed technique is compared with several state-of-the-art SBD methods on TRECVID 2007 and TRECVID 2001 test data. The empirical results show the effectiveness of the proposed algorithm.
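
The entry above reduces NSCT frame features with PCA and classifies each frame into No-Transition, Abrupt Transition or Gradual Transition with an LS-SVM. The snippet below shows only the dimensionality-reduction and classification stages, with scikit-learn's standard SVC as a stand-in for LS-SVM and the NSCT feature matrix assumed to be precomputed; the hyperparameters are placeholders.

```python
# Minimal sketch: PCA-compressed frame features classified into the three
# shot-boundary classes NT / AT / GT. `frame_feats` would hold the NSCT-based
# feature vectors (assumed precomputed); SVC replaces the LS-SVM of the paper.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

CLASSES = {0: "NT", 1: "AT", 2: "GT"}


def build_classifier(n_components=50):
    return make_pipeline(StandardScaler(),
                         PCA(n_components=n_components),
                         SVC(kernel="rbf", C=10.0, gamma="scale"))

# Usage sketch:
# clf = build_classifier()
# clf.fit(train_feats, train_labels)       # labels in {0, 1, 2}
# pred = clf.predict(frame_feats)          # per-frame NT/AT/GT decisions
```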

  • 11.
    Platten, Michael
    et al.
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH).
    Chowdhury, Manish
    KTH, School of Technology and Health (STH), Medical Engineering, Medical Image Processing and Visualization.
    Smedby, Örjan
    KTH, School of Technology and Health (STH), Medical Engineering, Medical Imaging.
    Moreno, Rodrigo
    KTH, School of Technology and Health (STH), Medical Engineering, Medical Imaging.
    Estimation of trabecular thickness in grayscale: an in vivo study. 2017. In: ESSR 2017 / P-0196, 2017. Conference paper (Refereed)