Maggy: Scalable Asynchronous Parallel Hyperparameter Search
Logical Clocks AB, Stockholm, Sweden.
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0001-7236-4637
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0002-2748-8929
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0002-6779-7435
Show others and affiliations
2020 (English) In: Proceedings of the 1st Workshop on Distributed Machine Learning, Association for Computing Machinery, 2020, p. 28-33. Conference paper, Published paper (Refereed)
Abstract [en]

Running extensive experiments is essential for building Machine Learning (ML) models. Such experiments usually require iterative execution of many trials with varying run times. In recent years, Apache Spark has become the de facto standard for parallel data processing in industry, in which iterative processes are implemented within the bulk-synchronous parallel (BSP) execution model. The BSP approach is also used to parallelize ML trials in Spark. However, the BSP task synchronization barriers prevent asynchronous execution of trials, which reduces the number of trials that can be run within a given computational budget. In this paper, we introduce Maggy, an open-source framework based on Spark, to execute ML trials asynchronously in parallel, with the ability to early-stop poorly performing trials. In the experiments, we compare Maggy with the BSP execution of parallel trials in Spark and show that, for random hyperparameter search on a convolutional neural network for the Fashion-MNIST dataset, Maggy reduces the time required to execute a fixed number of trials by 33% to 58%, without any loss in the final model accuracy.
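To make the contrast with BSP concrete, here is a minimal sketch in plain Python of the asynchronous pattern the abstract describes (this is illustrative only, not Maggy's actual API): trials run in a worker pool with no synchronization barrier between them, each trial reports intermediate metrics, and a trial that falls well below the running median of reported metrics is stopped early.

```python
import random
from concurrent.futures import ThreadPoolExecutor
from statistics import median

history = []  # intermediate metrics reported by all trials (shared)

def run_trial(lr, epochs=5):
    """Hypothetical trial: 'trains' with learning rate `lr`, reports a
    metric each epoch, and may be stopped early."""
    best = 0.0
    for epoch in range(epochs):
        # Stand-in for one epoch of training; higher metric is better.
        acc = random.random() * lr
        best = max(best, acc)
        history.append(acc)
        # Early stop: abandon trials well below the running median.
        if len(history) >= 4 and acc < 0.5 * median(history):
            return ("early-stopped", lr, best)
    return ("completed", lr, best)

# Trials start as soon as a worker is free -- there is no barrier
# between iterations, unlike the BSP execution model.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_trial, [0.001, 0.01, 0.1, 1.0]))

for status, lr, best in results:
    print(f"lr={lr}: {status}, best metric={best:.3f}")
```

In a BSP setting, all four trials would have to finish an iteration before any could proceed, so a single slow or doomed trial holds up the budget; here the early-stopped trial simply frees its worker.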

Place, publisher, year, edition, pages
Association for Computing Machinery, 2020. p. 28-33
Keywords [en]
Scalable Hyperparameter Search, Machine Learning, Asynchronous Hyperparameter Optimization
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-287209
DOI: 10.1145/3426745.3431338
ISI: 000709791500005
Scopus ID: 2-s2.0-85097717704
OAI: oai:DiVA.org:kth-287209
DiVA, id: diva2:1506931
Conference
The 1st Workshop on Distributed Machine Learning (DistributedML'20)
Note

QC 20201207

Available from: 2020-12-04. Created: 2020-12-04. Last updated: 2025-03-04. Bibliographically approved
In thesis
1. Tools and Methods for Distributed and Large-Scale Training of Deep Neural Networks
2025 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Deep Neural Networks (DNNs) have been at the forefront of recent breakthroughs in Machine Learning (ML) and Deep Learning (DL). DNNs are increasingly used in various tasks, from Earth observation and analysis of satellite images to medical diagnosis and smart chatbots. A major contributor to these advances has been the abundance of training data, computation resources, and frameworks that enable efficient training of ever-larger and more complex DNNs, within a paradigm referred to as distributed DL, and in particular, distributed training, which is the focus of this doctoral dissertation. In distributed training, the data and computation are distributed across several workers as opposed to single-host training in which both the data and computation reside and happen on a single worker. In this setting, distributed training can help overcome the limitations of single-host training, such as memory constraints, computational bottlenecks, and data availability.

However, distributed training comes with a number of challenges that must be carefully addressed for a system to use it efficiently. These challenges include, but are not limited to, efficient distribution of computation and data across the workers; the presence of straggler workers in a cluster (workers that fall significantly behind the others in their computation step), especially in synchronous execution settings; and communication and synchronization among the workers. This implies that the system should provide scalability in both the computation and the data dimensions.

On the other hand, from a programming and usability point of view, using the distributed training paradigm typically requires knowledge of distributed computing principles, experience with distributed and data-intensive computing frameworks, and major changes to the code used for single-host training. Furthermore, as training a DNN involves several steps and stages (e.g., data preparation, hyperparameter tuning, model training, etc.), it is desirable to reuse the computational results of one step in another (e.g., reusing weights learned during hyperparameter tuning trials for weight initialization in the model training step) in order to improve training time. Finally, when developing larger and more complex DNNs, we also need to understand the contribution of each design choice.
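The weight-reuse idea mentioned above can be sketched as follows; this is a hedged illustration under assumptions of my own (the function names, the dict-as-weights representation, and the scoring are hypothetical, not the dissertation's actual API): keep the weights of the best hyperparameter-tuning trial and use them to warm-start the subsequent model-training step instead of initializing from scratch.

```python
# Illustrative only: plain dicts stand in for model weight tensors.

def tune(trials):
    """Run hyperparameter trials; return (best_config, best_weights)."""
    results = []
    for config in trials:
        # Stand-in for a real training loop: in practice, `weights`
        # and `score` would come from actually training the model.
        weights = {"w": config["lr"]}
        score = 1.0 - abs(config["lr"] - 0.01)
        results.append((score, config, weights))
    best = max(results, key=lambda r: r[0])
    return best[1], best[2]

def train(config, init_weights=None):
    """Full training run; warm-start from tuning weights if given."""
    weights = dict(init_weights) if init_weights else {"w": 0.0}
    # ... continue training from `weights` instead of from scratch ...
    return weights

best_config, best_weights = tune([{"lr": 0.1}, {"lr": 0.01}, {"lr": 0.001}])
final = train(best_config, init_weights=best_weights)
print(final)
```

The design point is simply that the tuning step's output crosses the stage boundary: the training step accepts an optional initialization instead of always starting from random weights, which removes one source of redundant computation in the workflow.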

The contributions of this doctoral dissertation address the aforementioned challenges, and collectively optimize large-scale DNN training, making it more accessible, efficient, and computationally sustainable while reducing the redundancy in ML/DL workflows, and providing usable tools for conducting ablation studies. 

Abstract [sv]

Djupa neurala nätverk (DNNs) har varit i framkant av de senaste genombrotten inom maskininlärning (ML) och djupinlärning (DL). DNN används i allt större utsträckning inom en rad olika områden, från jordobservation och analys av satellitbilder till medicinsk diagnostik och smarta chattbotar. En stor bidragande faktor till dessa framsteg är tillgången på stora mängder träningsdata, kraftfulla beräkningsresurser och ramverk som möjliggör effektiv träning av allt större och mer komplexa DNNs inom ett paradigm som kallas distribuerad DL. Inom detta område är distribuerad träning fokus för denna doktorsavhandling. I distribuerad träning fördelas data och beräkningar över flera arbetarnoder, till skillnad från träning på en enskild värd där både data och beräkningar hanteras av en enda nod. I denna kontext kan distribuerad träning bidra till att övervinna begränsningar såsom minnesbegränsningar, beräkningsflaskhalsar och begränsad datatillgång.

Distribuerad träning innebär dock flera utmaningar som måste hanteras noggrant för att säkerställa effektiv resursanvändning. Dessa utmaningar inkluderar, men är inte begränsade till, effektiv fördelning av beräkningar och data mellan noder, förekomsten av stragglers (arbetarnoder som hamnar efter i sina beräkningar jämfört med andra), särskilt i synkrona exekveringsmiljöer, samt kommunikation och synkronisering mellan noderna. För att systemet ska vara skalbart behöver det kunna hantera både ökande beräkningsbehov och större datamängder.

Ur ett programmerings- och användbarhetsperspektiv kräver distribuerad träning ofta djupgående kunskap om distribuerad beräkning och erfarenhet av dataintensiva ramverk. Dessutom innebär det ofta omfattande anpassningar av kod som används för träning på en enskild värd. Eftersom träning av en DNN innefattar flera steg och faser (t.ex. datapreparering, hyperparametertuning, modellträning etc.), vore det önskvärt att återanvända beräkningsresultat från olika steg (t.ex. vikter inlärda under hyperparametertuning för att initialisera modellträningen) för att förbättra träningseffektiviteten. Slutligen, vid utveckling av större och mer komplexa DNNs, är det också viktigt att förstå varje designvals inverkan.

Denna doktorsavhandling adresserar de ovan nämnda utmaningarna och optimerar storskalig DNN-träning genom att göra den mer tillgänglig, effektiv och beräkningsmässigt hållbar, samtidigt som redundansen i ML/DL-arbetsflöden minskas och användbara verktyg för ablationsstudier tillhandahålls.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2025. p. 47
Series
TRITA-EECS-AVL ; 2025:28
Keywords
Distributed Deep Learning, Ablation Studies, Data-parallel Training, Deep Neural Networks, Systems for Machine Learning, Weight Initialization, Hyperparameter Optimization, Distribuerad djupinlärning, Ablationsstudier, Dataparallell träning, Djupa neurala nätverk, System för maskininlärning, Viktinitialisering, Hyperparameteroptimering
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-360720 (URN)
978-91-8106-214-4 (ISBN)
Public defence
2025-03-27, Zoom: https://kth-se.zoom.us/j/69403203069, Sal-A, Electrum, Kistagången 16, Stockholm, 09:00 (English)
Opponent
Supervisors
Funder
EU, Horizon 2020, 825258
Vinnova, 2016-05193
Note

QC 20250304

Available from: 2025-03-04. Created: 2025-03-04. Last updated: 2025-12-17. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text: https://doi.org/10.1145/3426745.3431338
Scopus

Authority records

Sheikholeslami, Sina; Payberah, Amir H.; Vlassov, Vladimir; Dowling, Jim

