Automatic brain segmentation using artificial neural networks with shape context
KTH, Skolan för teknik och hälsa (STH), Medicinsk teknik, Medicinsk bildbehandling och visualisering.
KTH, Skolan för teknik och hälsa (STH), Medicinsk teknik, Medicinsk bildbehandling och visualisering.
KTH, Skolan för teknik och hälsa (STH), Medicinsk teknik, Medicinsk bildbehandling och visualisering. ORCID iD: 0000-0002-7750-1917
KTH, Skolan för teknik och hälsa (STH), Medicinsk teknik, Medicinsk bildbehandling och visualisering. ORCID iD: 0000-0002-0442-3524
2018 (English). In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 101, pp. 74-79. Article in journal (Refereed). Published
Abstract [en]

Segmenting brain tissue from MR scans is thought to be highly beneficial for brain abnormality diagnosis, prognosis monitoring, and treatment evaluation. Many automatic or semi-automatic methods have been proposed in the literature to reduce the need for user intervention, but the level of accuracy in most cases is still inferior to that of manual segmentation. We propose a new brain segmentation method that integrates volumetric shape models into a supervised artificial neural network (ANN) framework. This is done by running a preliminary level-set based statistical shape fitting process guided by the image intensity and then passing the signed distance maps of several key structures to the ANN as feature channels, in addition to the conventional spatial-based and intensity-based image features. The so-called shape context information is expected to help the ANN learn locally adaptive classification rules instead of applying universal rules directly to the local appearance features. The proposed method was tested on a public dataset available within the open MICCAI grand challenge (MRBrainS13). The obtained average Dice coefficients were 84.78%, 88.47%, 82.76%, 95.37% and 97.73% for gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), brain (WM + GM) and intracranial volume, respectively. Compared with other methods tested on the same dataset, the proposed method achieved competitive results with a comparatively shorter training time.
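The central idea of the abstract, augmenting per-voxel intensity and spatial features with signed distance maps ("shape context") of pre-fitted structures before voxel-wise ANN classification, can be illustrated with a small sketch. The snippet below is not the authors' implementation: the inputs (`image`, `shape_masks`, `labels`) are hypothetical stand-ins, and a generic multilayer perceptron is used in place of the paper's ANN.

```python
# Minimal, hypothetical sketch of the feature construction described above:
# intensity + normalized voxel coordinates + signed distance maps of
# pre-fitted key structures, fed to a generic voxel-wise ANN classifier.
import numpy as np
from scipy.ndimage import distance_transform_edt
from sklearn.neural_network import MLPClassifier

def signed_distance(mask):
    """Signed Euclidean distance map: negative inside the structure, positive outside."""
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

def dice(a, b):
    """Dice coefficient between two binary masks, as reported in the abstract."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def build_features(image, shape_masks):
    """Stack intensity, normalized coordinates, and one signed distance map per
    pre-fitted structure into an (n_voxels, n_features) matrix."""
    coords = np.stack(np.meshgrid(*[np.linspace(0, 1, s) for s in image.shape],
                                  indexing="ij"), axis=-1)
    channels = [image[..., None], coords]
    channels += [signed_distance(m)[..., None] for m in shape_masks]
    n_features = 1 + image.ndim + len(shape_masks)
    return np.concatenate(channels, axis=-1).reshape(-1, n_features)

# --- hypothetical training data (stand-ins, not MRBrainS13) -----------------
image = np.random.rand(32, 32, 32)                 # stand-in MR volume
shape_masks = [np.zeros_like(image, dtype=bool)]   # pre-fitted shape model(s)
shape_masks[0][8:24, 8:24, 8:24] = True
labels = (image > 0.5).astype(int).ravel()         # stand-in voxel labels

X = build_features(image, shape_masks)
ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=50)
ann.fit(X, labels)                                 # voxel-wise classification
predicted = ann.predict(X).reshape(image.shape)
print("Dice:", dice(predicted.astype(bool), labels.reshape(image.shape).astype(bool)))
```

In this sketch the shape-fitting step is assumed to have already produced binary masks; in the paper that step is a level-set based statistical shape fitting, and the signed distance maps derived from it are what give the classifier its spatial context.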

Place, publisher, year, edition, pages
Elsevier, 2018. Vol. 101, pp. 74-79
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-219889
DOI: 10.1016/j.patrec.2017.11.016
ISI: 000418101400011
Scopus ID: 2-s2.0-85036471005
OAI: oai:DiVA.org:kth-219889
DiVA, id: diva2:1166503
Note

QC 20171215

Available from: 2017-12-15 Created: 2017-12-15 Last updated: 2022-06-26 Bibliographically approved

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text; Scopus

Person

Mahbod, Amirreza; Chowdhury, Manish; Smedby, Örjan; Wang, Chunliang
