Scalable Machine Learning through Approximation and Distributed Computing
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST). RISE. ORCID iD: 0000-0002-8180-7521
2019 (English) Doctoral thesis, comprising papers (Other academic)
Abstract [en]

Machine learning algorithms are now being deployed in practically all areas of our lives. Part of this success can be attributed to the ability to learn complex representations from massive datasets. However, computational speed increases have not kept up with the increase in the sizes of data we want to learn from, leading naturally to algorithms that need to be resource-efficient and parallel. As the proliferation of machine learning continues, the ability for algorithms to adapt to a changing environment and deal with uncertainty becomes increasingly important.

In this thesis we develop scalable machine learning algorithms, with a focus on efficient, online, and distributed computation. We make use of approximations to dramatically reduce the computational cost of exact algorithms, and develop online learning algorithms to deal with a constantly changing environment under a tight computational budget. We design parallel and distributed algorithms to ensure that our methods can scale to massive datasets.

We first propose a scalable algorithm for graph vertex similarity calculation and concept discovery. We demonstrate its applicability to multiple domains, including text, music, and images, and demonstrate its scalability by training on one of the largest text corpora available. Then, motivated by a real-world use case of predicting the session length in media streaming, we propose improvements to several aspects of learning with decision trees. We propose two algorithms to estimate the uncertainty in the predictions of online random forests. We show that our approach can achieve better accuracy than the state of the art while being an order of magnitude faster. We then propose a parallel and distributed online tree boosting algorithm that maintains the correctness guarantees of serial algorithms while providing an order of magnitude speedup on average. Finally, we propose an algorithm that allows for gradient boosted trees training to be distributed across both the data point and feature dimensions. We show that we can achieve communication savings of several orders of magnitude for sparse datasets, compared to existing approaches that can only distribute the computation across the data point dimension and use dense communication.

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2019. p. 120
Series
TRITA-EECS-AVL ; 2019:43
Keywords [en]
Online Learning, Distributed Computing, Graph Similarity, Decision Trees, Gradient Boosting
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-250038
ISBN: 978-91-7873-181-7 (print)
OAI: oai:DiVA.org:kth-250038
DiVA, id: diva2:1307122
Public defence
2019-05-28, 14:00, Sal B, Kistagången 16, floor 2, Electrum 1, KTH Kista, Kista (English)
Opponent
Supervisor
Research funder
Swedish Foundation for Strategic Research, RIT10-0043
Swedish Foundation for Strategic Research, BD15-0006
Note

QC 20190426

Available from: 2019-04-26 Created: 2019-04-25 Last updated: 2022-12-12 Bibliographically approved
List of papers
1. Domain-agnostic discovery of similarities and concepts at scale
2017 (English) In: Knowledge and Information Systems, ISSN 0219-1377, E-ISSN 0219-3116, Vol. 51, no. 2, p. 531-560. Article in journal (Refereed) Published
Abstract [en]

Appropriately defining and efficiently calculating similarities from large data sets are often essential in data mining, both for gaining understanding of data and generating processes and for building tractable representations. Given a set of objects and their correlations, we here rely on the premise that each object is characterized by its context, i.e., its correlations to the other objects. The similarity between two objects can then be expressed in terms of the similarity between their contexts. In this way, similarity pertains to the general notion that objects are similar if they are exchangeable in the data. We propose a scalable approach for calculating all relevant similarities among objects by relating them in a correlation graph that is transformed to a similarity graph. These graphs can express rich structural properties among objects. Specifically, we show that concepts—abstractions of objects—are constituted by groups of similar objects that can be discovered by clustering the objects in the similarity graph. These principles and methods are applicable in a wide range of fields and will be demonstrated here in three domains: computational linguistics, music, and molecular biology, where the numbers of objects and correlations range from small to very large.
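The context-based similarity principle described above can be sketched in a few lines: each object's context is its vector of correlations to the other objects, and two objects are compared via their contexts. This is a minimal illustration, not the paper's scalable graph implementation; the cosine measure and the toy word data are assumptions for the sketch.

```python
# Context-based similarity: each object is characterized by its
# correlations to the other objects (its "context"), and two objects
# are similar when their contexts are similar. Cosine similarity
# between context vectors is an assumed choice here.
from math import sqrt

def context_similarity(corr, a, b):
    """Cosine similarity between the correlation profiles of a and b."""
    ctx_a, ctx_b = corr.get(a, {}), corr.get(b, {})
    dot = sum(w * ctx_b.get(o, 0.0) for o, w in ctx_a.items())
    na = sqrt(sum(w * w for w in ctx_a.values()))
    nb = sqrt(sum(w * w for w in ctx_b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy correlation graph: words weighted by co-occurrence counts.
corr = {
    "cat": {"purrs": 3, "sleeps": 2},
    "dog": {"barks": 3, "sleeps": 2},
    "kitten": {"purrs": 2, "sleeps": 1},
}
# "cat" and "kitten" share more context than "cat" and "dog".
assert context_similarity(corr, "cat", "kitten") > context_similarity(corr, "cat", "dog")
```

In the paper's formulation these pairwise similarities form a similarity graph, and clustering that graph yields the concepts; only the pairwise context-comparison step is shown here.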

Place, publisher, year, edition, pages
London: Springer, 2017
Keywords
Similarity discovery, Concept mining, Distributional semantics, Graph processing
HSV category
Identifiers
urn:nbn:se:kth:diva-249992 (URN)
10.1007/s10115-016-0984-2 (DOI)
000399408200007 ()
2-s2.0-84984793995 (Scopus ID)
Projects
E2ECLOUDS
Research funder
Swedish Foundation for Strategic Research, RIT10-0043
Note

QC 20190426

Available from: 2019-04-25 Created: 2019-04-25 Last updated: 2022-06-26 Bibliographically approved
2. Knowing an Object by the Company It Keeps: A Domain-Agnostic Scheme for Similarity Discovery
2015 (English) Conference paper, Published paper (Refereed)
Abstract [en]

Appropriately defining and then efficiently calculating similarities from large data sets are often essential in data mining, both for building tractable representations and for gaining understanding of data and generating processes. Here we rely on the premise that given a set of objects and their correlations, each object is characterized by its context, i.e. its correlations to the other objects, and that the similarity between two objects therefore can be expressed in terms of the similarity between their respective contexts. Resting on this principle, we propose a data-driven and highly scalable approach for discovering similarities from large data sets by representing objects and their relations as a correlation graph that is transformed to a similarity graph. Together these graphs can express rich structural properties among objects. Specifically, we show that concepts -- representations of abstract ideas and notions -- are constituted by groups of similar objects that can be identified by clustering the objects in the similarity graph. These principles and methods are applicable in a wide range of domains, and will here be demonstrated for three distinct types of objects: codons, artists and words, where the numbers of objects and correlations range from small to very large.

HSV category
Identifiers
urn:nbn:se:kth:diva-250036 (URN)
10.1109/ICDM.2015.85 (DOI)
000380541000013 ()
2-s2.0-84963516560 (Scopus ID)
978-1-4673-9504-5 (ISBN)
Conference
2015 IEEE International Conference on Data Mining (ICDM)
Research funder
Swedish Foundation for Strategic Research, RIT10-0043
Note

QC 20190426

Available from: 2019-04-25 Created: 2019-04-25 Last updated: 2022-06-26 Bibliographically approved
3. BoostVHT: Boosting distributed streaming decision trees
2017 (English) In: International Conference on Information and Knowledge Management, Proceedings, Association for Computing Machinery, 2017, p. 899-908. Conference paper, Published paper (Refereed)
Abstract [en]

Online boosting improves the accuracy of classifiers for unbounded streams of data by chaining them into an ensemble. Due to its sequential nature, boosting has proven hard to parallelize, even more so in the online setting. This paper introduces BoostVHT, a technique to parallelize online boosting algorithms. Our proposal leverages a recently developed model-parallel learning algorithm for streaming decision trees as a base learner. This design allows us to neatly separate the model boosting from its training. As a result, BoostVHT provides a flexible learning framework that can employ any existing online boosting algorithm, while at the same time it can leverage the computing power of modern parallel and distributed cluster environments. We implement our technique on Apache SAMOA, an open-source platform for mining big data streams that can be run on several distributed execution engines, and demonstrate order-of-magnitude speedups compared to the state of the art.

Place, publisher, year, edition, pages
Association for Computing Machinery, 2017
Keywords
Boosting, Decision trees, Distributed systems, Online learning, Big data, Cluster computing, Clustering algorithms, Data mining, Distributed computer systems, Forestry, Knowledge management, Online systems, Trees (mathematics), Distributed clusters, Distributed streaming, Flexible Learning, Open source platforms, Parallel learning algorithms, Learning algorithms
HSV category
Identifiers
urn:nbn:se:kth:diva-227056 (URN)
10.1145/3132847.3132974 (DOI)
000440845300089 ()
2-s2.0-85037345394 (Scopus ID)
9781450349185 (ISBN)
Conference
26th ACM International Conference on Information and Knowledge Management, CIKM 2017, 6 November 2017 through 10 November 2017
Note

QC 20180503

Available from: 2018-05-03 Created: 2018-05-03 Last updated: 2022-06-26 Bibliographically approved
4. Block-distributed Gradient Boosted Trees
2019 (English) In: SIGIR 2019 - Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, Association for Computing Machinery (ACM), 2019, p. 1025-1028. Conference paper, Published paper (Refereed)
Abstract [en]

The Gradient Boosted Tree (GBT) algorithm is one of the most popular machine learning algorithms used in production, for tasks that include Click-Through Rate (CTR) prediction and learning-to-rank. To deal with the massive datasets available today, many distributed GBT methods have been proposed. However, they all assume a row-distributed dataset, addressing scalability only with respect to the number of data points and not the number of features, and increasing communication cost for high-dimensional data. In order to allow for scalability across both the data point and feature dimensions, and reduce communication cost, we propose block-distributed GBTs. We achieve communication efficiency by making full use of the data sparsity and adapting the Quickscorer algorithm to the block-distributed setting. We evaluate our approach using datasets with millions of features, and demonstrate that we are able to achieve multiple orders of magnitude reduction in communication cost for sparse data, with no loss in accuracy, while providing a more scalable design. As a result, we are able to reduce the training time for high-dimensional data, and allow more cost-effective scale-out without the need for expensive network communication.
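The row-versus-block distinction can be made concrete with a small partitioning sketch: instead of each worker owning whole rows, the matrix is split across both the row (data point) and column (feature) dimensions, and sparse cells travel only to their owning block. This is an illustration of the layout only, under assumed even splits; the paper's gradient-histogram aggregation and Quickscorer adaptation are not shown, and the function names are hypothetical.

```python
# Block distribution: split the feature matrix across BOTH the data-point
# (row) and feature (column) dimensions, one block per worker.
def block_of(row, col, n_rows, n_cols, row_blocks, col_blocks):
    """Map a matrix cell to the id of the worker owning its block."""
    r = row * row_blocks // n_rows
    c = col * col_blocks // n_cols
    return r * col_blocks + c

def partition_sparse(entries, n_rows, n_cols, row_blocks, col_blocks):
    """Group sparse (row, col, value) entries by owning worker block."""
    blocks = {}
    for row, col, val in entries:
        key = block_of(row, col, n_rows, n_cols, row_blocks, col_blocks)
        blocks.setdefault(key, []).append((row, col, val))
    return blocks
```

With this layout each worker only ever communicates statistics for the non-zero cells it owns, which is where the communication savings for sparse, high-dimensional data come from; a row-only partitioning would force every worker to exchange dense per-feature state.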

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2019
Keywords
Gradient Boosted Trees, Distributed Systems, Communication Efficiency, Scalability
HSV category
Identifiers
urn:nbn:se:kth:diva-249994 (URN)
10.1145/3331184.3331331 (DOI)
000501488900130 ()
2-s2.0-85073774997 (Scopus ID)
978-1-4503-6172-9 (ISBN)
Conference
42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2019; Museum of Science and Industry, Paris; France; 21 July 2019 through 25 July 2019
Research funder
Swedish Foundation for Strategic Research, BD15-0006
Note

QC 20190426

Available from: 2019-04-25 Created: 2019-04-25 Last updated: 2022-12-12 Bibliographically approved
5. Predicting Session Length in Media Streaming
2017 (English) In: SIGIR 2017 - Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, Association for Computing Machinery (ACM), 2017, p. 977-980. Conference paper, Published paper (Refereed)
Abstract [en]

Session length is a very important aspect in determining a user's satisfaction with a media streaming service. Being able to predict how long a session will last can be of great use for various downstream tasks, such as recommendations and ad scheduling. Most of the related literature on user interaction duration has focused on dwell time for websites, usually in the context of approximating post-click satisfaction either in search results or display ads.

In this work we present the first analysis of session length in a mobile-focused online service, using a real-world dataset from a major music streaming service. We use survival analysis techniques to show that the characteristics of the length distributions can differ significantly between users, and use gradient boosted trees with appropriate objectives to predict the length of a session using only information available at its beginning. Our evaluation on real-world data illustrates that our proposed technique outperforms the considered baseline.
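The survival-analysis view summarizes session lengths by a survival function S(t), the probability that a session lasts longer than t. A minimal Kaplan-Meier sketch under the simplifying assumption that every session end is observed (no censoring); the paper's analysis and prediction objectives are richer:

```python
# Kaplan-Meier estimate of the session-length survival function:
# S(t) = product over event times t_i <= t of (1 - d_i / n_i), where
# d_i sessions end at t_i and n_i are still active just before t_i.
def kaplan_meier(durations):
    """Return [(t, S(t))] for the distinct session lengths, no censoring."""
    at_risk = len(durations)
    prob = 1.0
    surv = []
    for t in sorted(set(durations)):
        d = durations.count(t)
        prob *= 1 - d / at_risk
        surv.append((t, prob))
        at_risk -= d
    return surv
```

Comparing such per-user survival curves is one way to see that length distributions differ significantly between users, which motivates learning the prediction model with objectives suited to skewed duration data rather than plain squared error.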

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2017
Keywords
User Behavior, Survival Analysis, Dwell Time, Session Length
HSV category
Identifiers
urn:nbn:se:kth:diva-250037 (URN)
10.1145/3077136.3080695 (DOI)
000454711900127 ()
2-s2.0-85029395373 (Scopus ID)
978-1-4503-5022-8 (ISBN)
Conference
40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2017; Tokyo, Shinjuku; Japan; 7 August 2017 through 11 August 2017
Research funder
Swedish Foundation for Strategic Research, RIT10-0043
Note

QC 20190426

Available from: 2019-04-25 Created: 2019-04-25 Last updated: 2022-06-26 Bibliographically approved
6. Quantifying Uncertainty in Online Regression Forests
(English) Manuscript (preprint) (Other academic)
Abstract [en]

Accurately quantifying uncertainty in predictions is essential for the deployment of machine learning algorithms in critical applications where mistakes are costly. Most approaches to quantifying prediction uncertainty have focused on settings where the data is static, or bounded. In this paper, we investigate methods that quantify the prediction uncertainty in a streaming setting, where the data is potentially unbounded.

We propose two meta-algorithms that produce prediction intervals for online regression forests of arbitrary tree models; one based on conformal prediction theory, and the other based on quantile regression. We show that the approaches are able to maintain specified error rates, with constant computational cost per example and bounded memory usage. We provide empirical evidence that the methods outperform the state of the art in terms of maintaining error guarantees, while being an order of magnitude faster. We also investigate how the algorithms are able to recover from concept drift.
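One of the two families mentioned above, conformal prediction, can be sketched in a streaming form: keep a bounded window of recent absolute residuals and widen each point prediction by their empirical (1 - alpha) quantile. This is a simplified sliding-window sketch, not the paper's online-forest algorithms; the class name, window size, and quantile-index choice are assumptions.

```python
# Streaming split-conformal intervals: a bounded deque of recent absolute
# residuals gives constant per-example cost and bounded memory; the
# interval is the point prediction widened by their (1 - alpha) quantile.
from collections import deque

class StreamingConformal:
    def __init__(self, alpha=0.1, window=1000):
        self.alpha = alpha
        self.residuals = deque(maxlen=window)  # old residuals fall out

    def update(self, y_true, y_pred):
        """Record the absolute residual of an observed example."""
        self.residuals.append(abs(y_true - y_pred))

    def interval(self, y_pred):
        """Prediction interval around y_pred at coverage 1 - alpha."""
        if not self.residuals:
            return (y_pred, y_pred)
        r = sorted(self.residuals)
        k = min(len(r) - 1, int((1 - self.alpha) * len(r)))
        return (y_pred - r[k], y_pred + r[k])
```

The sliding window also gives a crude form of drift recovery: once the underlying distribution shifts, stale residuals age out of the window and the interval width re-calibrates to the new error scale.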

Keywords
Online learning, Uncertainty, Decision Trees, Regression
HSV category
Identifiers
urn:nbn:se:kth:diva-250039 (URN)
Research funder
Swedish Foundation for Strategic Research, BD15-0006
Note

QC 20190426

Available from: 2019-04-25 Created: 2019-04-25 Last updated: 2022-12-12 Bibliographically approved

Open Access in DiVA

fulltext (4379 kB), 984 downloads
File information
File: FULLTEXT01.pdf, size: 4379 kB, checksum SHA-512:
ab2ff5aa0ab30acad6dc0fd930ddcfafbc0413eede8a906e659cc51eac88ebd62ea781936cb55f455427d99a390bd2a2382d94dc69f4677cbd912a9d0f74c7e6
Type: fulltext, mimetype: application/pdf

Search in DiVA

By author/editor
Vasiloudis, Theodore
By organisation

Total: 985 downloads
The number of downloads is the sum of all downloads of all full texts. It may, for example, include earlier versions that are no longer available.
