Deep learning is combined with massive-scale citizen science to improve large-scale image classification
KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Protein Science, Cellular and Clinical Proteomics. KTH, Centres, Science for Life Laboratory, SciLifeLab. ORCID iD: 0000-0001-6176-108X
2018 (English). In: Nature Biotechnology, ISSN 1087-0156, E-ISSN 1546-1696, Vol. 36, no 9, p. 820-+. Article in journal (Refereed). Published.
Abstract [en]

Pattern recognition and classification of images are key challenges throughout the life sciences. We combined two approaches for large-scale classification of fluorescence microscopy images. First, using the publicly available data set from the Cell Atlas of the Human Protein Atlas (HPA), we integrated an image-classification task into a mainstream video game (EVE Online) as a mini-game, named Project Discovery. Participation by 322,006 gamers over 1 year provided nearly 33 million classifications of subcellular localization patterns, including patterns that were not previously annotated by the HPA. Second, we used deep learning to build an automated Localization Cellular Annotation Tool (Loc-CAT). This tool classifies proteins into 29 subcellular localization patterns and can deal efficiently with multi-localization proteins, performing robustly across different cell types. Combining the annotations of gamers and deep learning, we applied transfer learning to create a boosted learner that can characterize subcellular protein distribution with an F1 score of 0.72. We found that engaging players of commercial computer games provided data that augmented deep learning and enabled scalable and readily improved image classification.
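The abstract reports a multi-label problem (proteins may localize to several of the 29 patterns) evaluated with an F1 score. The following is a minimal, hypothetical sketch of how such a score can be computed for multi-label predictions; it is not the paper's Loc-CAT code, and the toy data, class count handling, and choice of macro averaging are illustrative assumptions only.

```python
# Minimal sketch (not the paper's Loc-CAT implementation): computing an F1
# score for multi-label subcellular localization predictions.
import numpy as np
from sklearn.metrics import f1_score

N_CLASSES = 29  # the 29 subcellular localization patterns mentioned above

# Toy ground-truth and predicted label matrices: one row per protein/image,
# one column per localization class; a 1 marks presence of that pattern.
# Multi-localizing proteins simply have more than one 1 in their row.
rng = np.random.default_rng(0)
y_true = (rng.random((100, N_CLASSES)) < 0.1).astype(int)
y_pred = y_true.copy()
flip = rng.random(y_true.shape) < 0.05          # corrupt 5% of entries
y_pred[flip] = 1 - y_pred[flip]

# Macro-averaged F1: compute F1 per class, then average. Other averaging
# schemes ('micro', 'samples') are also common for multi-label problems;
# the paper does not specify its averaging here, so this is one plausible choice.
print("macro F1:", f1_score(y_true, y_pred, average="macro", zero_division=0))
```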

Place, publisher, year, edition, pages
Nature Publishing Group, 2018. Vol. 36, no 9, p. 820-+
National Category
Biological Sciences
Identifiers
URN: urn:nbn:se:kth:diva-235602
DOI: 10.1038/nbt.4225
ISI: 000443986000023
PubMedID: 30125267
Scopus ID: 2-s2.0-85053076602
OAI: oai:DiVA.org:kth-235602
DiVA, id: diva2:1252156
Note

QC 20181001

Available from: 2018-10-01. Created: 2018-10-01. Last updated: 2018-10-01. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | PubMed | Scopus

Authority records

Smith, Kevin; Lundberg, Emma

Search in DiVA

By author/editor
Sullivan, Devin P.; Winsnes, Casper F.; Åkesson, Lovisa; Hjelmare, Martin; Wiking, Mikaela; Schutten, Rutger; Smith, Kevin; Lundberg, Emma
By organisation
Cellular and Clinical Proteomics; Science for Life Laboratory, SciLifeLab; School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH); Computational Science and Technology (CST)
In the same journal
Nature Biotechnology
Biological Sciences

Search outside of DiVA

Google | Google Scholar