Publications (7 of 7)
Sullivan, D. P., Winsnes, C. F., Åkesson, L., Hjelmare, M., Wiking, M., Schutten, R., . . . Lundberg, E. (2018). Deep learning is combined with massive-scale citizen science to improve large-scale image classification. Nature Biotechnology, 36(9), 820-+
2018 (English). In: Nature Biotechnology, ISSN 1087-0156, E-ISSN 1546-1696, Vol. 36, no. 9, p. 820-+. Article in journal (Refereed). Published.
Abstract [en]

Pattern recognition and classification of images are key challenges throughout the life sciences. We combined two approaches for large-scale classification of fluorescence microscopy images. First, using the publicly available data set from the Cell Atlas of the Human Protein Atlas (HPA), we integrated an image-classification task into a mainstream video game (EVE Online) as a mini-game, named Project Discovery. Participation by 322,006 gamers over 1 year provided nearly 33 million classifications of subcellular localization patterns, including patterns that were not previously annotated by the HPA. Second, we used deep learning to build an automated Localization Cellular Annotation Tool (Loc-CAT). This tool classifies proteins into 29 subcellular localization patterns and can deal efficiently with multi-localization proteins, performing robustly across different cell types. Combining the annotations of gamers and deep learning, we applied transfer learning to create a boosted learner that can characterize subcellular protein distribution with an F1 score of 0.72. We found that engaging players of commercial computer games provided data that augmented deep learning and enabled scalable and readily improved image classification.
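The F1 score cited above summarizes multi-label classification quality as the harmonic mean of precision and recall. A minimal sketch of a micro-averaged F1 computation over per-protein label sets (the label names and values below are illustrative, not from the paper):

```python
def f1_micro(true_sets, pred_sets):
    """Micro-averaged F1 over a collection of predicted label sets."""
    tp = sum(len(t & p) for t, p in zip(true_sets, pred_sets))  # correct labels
    fp = sum(len(p - t) for t, p in zip(true_sets, pred_sets))  # spurious labels
    fn = sum(len(t - p) for t, p in zip(true_sets, pred_sets))  # missed labels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Two hypothetical proteins, each with a set of subcellular localizations:
true_sets = [{"nucleus", "cytosol"}, {"mitochondria"}]
pred_sets = [{"nucleus"}, {"mitochondria", "nucleoli"}]
score = f1_micro(true_sets, pred_sets)  # tp=2, fp=1, fn=1 -> 2/3
```

Micro-averaging pools true/false positives across all labels before computing the ratio, which suits multi-localization proteins where per-image label counts vary.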

Place, publisher, year, edition, pages
NATURE PUBLISHING GROUP, 2018
National Category
Biological Sciences
Identifiers
urn:nbn:se:kth:diva-235602 (URN)
10.1038/nbt.4225 (DOI)
000443986000023 ()
30125267 (PubMedID)
2-s2.0-85053076602 (Scopus ID)
Note

QC 20181001

Available from: 2018-10-01. Created: 2018-10-01. Last updated: 2019-09-17. Bibliographically approved.
Robertson, S., Azizpour, H., Smith, K. & Hartman, J. (2018). Digital image analysis in breast pathology-from image processing techniques to artificial intelligence. Translational Research: The Journal of Laboratory and Clinical Medicine, 194, 19-35
2018 (English). In: Translational Research: The Journal of Laboratory and Clinical Medicine, ISSN 1931-5244, E-ISSN 1878-1810, Vol. 194, p. 19-35. Article, review/survey (Refereed). Published.
Abstract [en]

Breast cancer is the most common malignant disease in women worldwide. In recent decades, earlier diagnosis and better adjuvant therapy have substantially improved patient outcome. Diagnosis by histopathology has proven to be instrumental in guiding breast cancer treatment, but new challenges have emerged as our increasing understanding of cancer over the years has revealed its complex nature. As patient demand for personalized breast cancer therapy grows, we face an urgent need for more precise biomarker assessment and more accurate histopathologic breast cancer diagnosis to make better therapy decisions. The digitization of pathology data has opened the door to faster, more reproducible, and more precise diagnoses through computerized image analysis. Software to assist diagnostic breast pathology through image processing techniques has been around for years, but recent breakthroughs in artificial intelligence (AI) promise to fundamentally change the way we detect and treat breast cancer in the near future. Machine learning, a subfield of AI that applies statistical methods to learn from data, has seen an explosion of interest in recent years because of its ability to recognize patterns in data with less need for human instruction. One technique in particular, known as deep learning, has produced groundbreaking results in many important problems including image classification and speech recognition. In this review, we will cover the use of AI and deep learning in diagnostic breast pathology, and other recent developments in digital image analysis.

Place, publisher, year, edition, pages
ELSEVIER SCIENCE INC, 2018
National Category
Cancer and Oncology; Radiology, Nuclear Medicine and Medical Imaging
Identifiers
urn:nbn:se:kth:diva-226196 (URN)
10.1016/j.trsl.2017.10.010 (DOI)
000428608600002 ()
29175265 (PubMedID)
2-s2.0-85036635168 (Scopus ID)
Note

QC 20180518

Available from: 2018-05-18. Created: 2018-05-18. Last updated: 2019-09-18. Bibliographically approved.
Brasko, C., Smith, K., Molnar, C., Farago, N., Hegedus, L., Balind, A., . . . Horvath, P. (2018). Intelligent image-based in situ single-cell isolation. Nature Communications, 9, Article ID 226.
2018 (English). In: Nature Communications, ISSN 2041-1723, E-ISSN 2041-1723, Vol. 9, article id 226. Article in journal (Refereed). Published.
Abstract [en]

Quantifying heterogeneities within cell populations is important for many fields including cancer research and neurobiology; however, techniques to isolate individual cells are limited. Here, we describe a high-throughput, non-disruptive, and cost-effective isolation method that is capable of capturing individually targeted cells using widely available techniques. Using high-resolution microscopy, laser microcapture microscopy, image analysis, and machine learning, our technology enables scalable molecular genetic analysis of single cells, targetable by morphology or location within the sample.

Place, publisher, year, edition, pages
NATURE PUBLISHING GROUP, 2018
National Category
Medical Biotechnology
Identifiers
urn:nbn:se:kth:diva-221926 (URN)
10.1038/s41467-017-02628-4 (DOI)
000422647600023 ()
29335532 (PubMedID)
2-s2.0-85040796437 (Scopus ID)
Note

QC 20180131

Available from: 2018-01-31. Created: 2018-01-31. Last updated: 2019-09-16. Bibliographically approved.
Smith, K., Piccinini, F., Balassa, T., Koos, K., Danka, T., Azizpour, H. & Horvath, P. (2018). Phenotypic Image Analysis Software Tools for Exploring and Understanding Big Image Data from Cell-Based Assays. Cell Systems, 6(6), 636-653
2018 (English). In: Cell Systems, ISSN 2405-4712, Vol. 6, no. 6, p. 636-653. Article, review/survey (Refereed). Published.
Abstract [en]

Phenotypic image analysis is the task of recognizing variations in cell properties using microscopic image data. These variations, produced through a complex web of interactions between genes and the environment, may hold the key to uncover important biological phenomena or to understand the response to a drug candidate. Today, phenotypic analysis is rarely performed completely by hand. The abundance of high-dimensional image data produced by modern high-throughput microscopes necessitates computational solutions. Over the past decade, a number of software tools have been developed to address this need. They use statistical learning methods to infer relationships between a cell's phenotype and data from the image. In this review, we examine the strengths and weaknesses of non-commercial phenotypic image analysis software, cover recent developments in the field, identify challenges, and give a perspective on future possibilities.

Place, publisher, year, edition, pages
Elsevier, 2018
National Category
Bioinformatics and Systems Biology
Identifiers
urn:nbn:se:kth:diva-232249 (URN)
10.1016/j.cels.2018.06.001 (DOI)
000436877800002 ()
29953863 (PubMedID)
2-s2.0-85048445198 (Scopus ID)
Funder
Science for Life Laboratory - a national resource center for high-throughput molecular bioscience
Note

QC 20180720

Available from: 2018-07-20. Created: 2018-07-20. Last updated: 2019-09-17. Bibliographically approved.
Piccinini, F., Balassa, T., Szkalisity, A., Molnar, C., Paavolainen, L., Kujala, K., . . . Horvath, P. (2017). Advanced Cell Classifier: User-Friendly Machine-Learning-Based Software for Discovering Phenotypes in High-Content Imaging Data. Cell Systems, 4(6), 651-+
2017 (English). In: Cell Systems, ISSN 2405-4712, Vol. 4, no. 6, p. 651-+. Article in journal (Refereed). Published.
Abstract [en]

High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org.

Place, publisher, year, edition, pages
CELL PRESS, 2017
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-211391 (URN)
10.1016/j.cels.2017.05.012 (DOI)
000405450500013 ()
28647475 (PubMedID)
2-s2.0-85020911088 (Scopus ID)
Note

QC 20170808

Available from: 2017-08-08. Created: 2017-08-08. Last updated: 2019-09-17. Bibliographically approved.
Carlsson, S., Azizpour, H., Razavian, A. S., Sullivan, J. & Smith, K. (2017). The Preimage of Rectifier Network Activities. In: International Conference on Learning Representations (ICLR). Paper presented at the International Conference on Learning Representations (ICLR).
2017 (English). In: International Conference on Learning Representations (ICLR), 2017. Conference paper, Published paper (Refereed).
Abstract [en]

The preimage of the activity at a certain level of a deep network is the set of inputs that result in the same node activity. For fully connected multilayer rectifier networks, we demonstrate how to compute the preimages of activities at arbitrary levels from knowledge of the parameters in a deep rectifying network. If the preimage set of a certain activity in the network contains elements from more than one class, these classes are irreversibly mixed. This implies that preimage sets, which are piecewise linear manifolds, are building blocks for describing the input manifolds of specific classes, i.e., all preimages should ideally be from the same class. We believe that the knowledge of how to compute preimages will be valuable in understanding the efficiency displayed by deep learning networks and could potentially be used in designing more efficient training algorithms.
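For a single rectifier layer the preimage described above has a concrete form: an input lies in the preimage of an activity vector when the pre-activations of the active units match exactly and those of the inactive units are non-positive. A minimal sketch of that membership test for a hypothetical one-layer network (weights and inputs below are illustrative, not from the paper):

```python
import numpy as np

# For a rectifier layer a(x) = max(W @ x + b, 0), the preimage of an
# activity vector a_star is the set of inputs x satisfying
#   (W @ x + b)[i] == a_star[i]  where a_star[i] > 0   (active units)
#   (W @ x + b)[i] <= 0          where a_star[i] == 0  (inactive units),
# i.e. an affine subspace intersected with half-spaces -- a piecewise
# linear (polyhedral) set, matching the abstract's description.

def in_preimage(W, b, x, a_star, tol=1e-8):
    """Check whether input x lies in the preimage of activity a_star."""
    z = W @ x + b
    active = a_star > 0
    return bool(np.all(np.abs(z[active] - a_star[active]) <= tol)
                and np.all(z[~active] <= tol))

W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([0.0, -1.0])
x = np.array([2.0, 0.5])
a_star = np.maximum(W @ x + b, 0.0)  # activity produced by x itself

assert in_preimage(W, b, x, a_star)  # x is in its own preimage
# A different input with the same active pre-activation and a still
# non-positive inactive pre-activation lies in the same preimage:
assert in_preimage(W, b, np.array([2.0, 0.9]), a_star)
```

The second assertion illustrates the information loss the paper studies: distinct inputs collapse onto one activity, so if they carried different class labels the classes would be irreversibly mixed at this layer.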

National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-259164 (URN)
2-s2.0-85071123889 (Scopus ID)
Conference
International Conference on Learning Representations (ICLR)
Note

QC 20190916

Available from: 2019-09-11. Created: 2019-09-11. Last updated: 2019-09-16. Bibliographically approved.
Fusco, L., Lefort, R., Smith, K., Benmansour, F., Gonzalez, G., Barillari, C., . . . Pertz, O. (2016). Computer vision profiling of neurite outgrowth dynamics reveals spatio-temporal modularity of Rho GTPase signaling. Journal of Cell Biology, 212(1), 91-111
2016 (English). In: Journal of Cell Biology, ISSN 0021-9525, E-ISSN 1540-8140, Vol. 212, no. 1, p. 91-111. Article in journal (Refereed). Published.
Abstract [en]

Rho guanosine triphosphatases (GTPases) control the cytoskeletal dynamics that power neurite outgrowth. This process consists of dynamic neurite initiation, elongation, retraction, and branching cycles that are likely to be regulated by specific spatiotemporal signaling networks, which cannot be resolved with static, steady-state assays. We present Neurite-Tracker, a computer-vision approach to automatically segment and track neuronal morphodynamics in time-lapse datasets. Feature extraction then quantifies dynamic neurite outgrowth phenotypes. We identify a set of stereotypic neurite outgrowth morphodynamic behaviors in a cultured neuronal cell system. Systematic RNA interference perturbation of a Rho GTPase interactome consisting of 219 proteins reveals a limited set of morphodynamic phenotypes. As proof of concept, we show that loss of function of two distinct RhoA-specific GTPase-activating proteins (GAPs) leads to opposite neurite outgrowth phenotypes. Imaging of RhoA activation dynamics indicates that both GAPs regulate different spatiotemporal Rho GTPase pools, with distinct functions. Our results provide a starting point to dissect spatiotemporal Rho GTPase signaling networks that regulate neurite outgrowth.

Place, publisher, year, edition, pages
Rockefeller University Press, 2016
National Category
Cell Biology
Identifiers
urn:nbn:se:kth:diva-181938 (URN)
10.1083/jcb.201506018 (DOI)
000370486100010 ()
Note

QC 20160224. QC 20160319

Available from: 2016-02-09. Created: 2016-02-09. Last updated: 2017-11-30. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-6163-191X