Automatic brain segmentation using artificial neural networks with shape context
KTH, School of Technology and Health (STH), Medical Engineering, Medical Image Processing and Visualization.
KTH, School of Technology and Health (STH), Medical Engineering, Medical Image Processing and Visualization.
KTH, School of Technology and Health (STH), Medical Engineering, Medical Image Processing and Visualization. ORCID iD: 0000-0002-7750-1917
KTH, School of Technology and Health (STH), Medical Engineering, Medical Image Processing and Visualization. ORCID iD: 0000-0002-0442-3524
2018 (English) In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 101, p. 74-79. Article in journal (Refereed). Published.
Abstract [en]

Segmenting brain tissue from MR scans is thought to be highly beneficial for brain abnormality diagnosis, prognosis monitoring, and treatment evaluation. Many automatic or semi-automatic methods have been proposed in the literature to reduce the need for user intervention, but in most cases their accuracy is still inferior to that of manual segmentation. We propose a new brain segmentation method that integrates volumetric shape models into a supervised artificial neural network (ANN) framework. This is done by running a preliminary level-set based statistical shape fitting process guided by the image intensity, and then passing the signed distance maps of several key structures to the ANN as feature channels, in addition to the conventional spatial-based and intensity-based image features. This so-called shape context information is expected to help the ANN learn locally adaptive classification rules instead of applying universal rules directly to the local appearance features. The proposed method was tested on a public dataset available within the open MICCAI grand challenge (MRBrainS13). The obtained average Dice coefficients were 84.78%, 88.47%, 82.76%, 95.37% and 97.73% for gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), brain (WM + GM) and intracranial volume, respectively. Compared with other methods tested on the same dataset, the proposed method achieved competitive results with a comparatively shorter training time.
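The Dice coefficients reported above measure the volume overlap between the automatic segmentation and the manual reference for each tissue class. A minimal sketch of the metric, where masks are represented as sets of voxel indices (an illustration only, not the challenge's evaluation code — the coordinates below are made up):

```python
def dice_coefficient(pred, ref):
    """Dice similarity: 2*|A ∩ B| / (|A| + |B|), where A and B are
    the sets of voxel indices labelled as a given tissue class in the
    predicted and reference segmentations."""
    a, b = set(pred), set(ref)
    if not a and not b:  # both masks empty: define overlap as perfect
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

# Toy 3D example with hypothetical voxel coordinates:
predicted = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)}
reference = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 1, 1)}
print(dice_coefficient(predicted, reference))  # 2*3 / (4+4) = 0.75
```

A score of 1.0 means perfect overlap and 0.0 means none, so the reported values (e.g. 88.47% for white matter) indicate a high but not perfect agreement with manual delineation.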

Place, publisher, year, edition, pages
Elsevier, 2018. Vol. 101, p. 74-79
National Category
Medical Image Processing
Identifiers
URN: urn:nbn:se:kth:diva-219889
DOI: 10.1016/j.patrec.2017.11.016
Scopus ID: 2-s2.0-85036471005
OAI: oai:DiVA.org:kth-219889
DiVA, id: diva2:1166503
Note

QC 20171215

Available from: 2017-12-15. Created: 2017-12-15. Last updated: 2017-12-15. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records BETA

Mahbod, Amirreza; Chowdhury, Manish; Smedby, Örjan; Wang, Chunliang
