DISTRIBUTED LARGE NEURAL NETWORK WITH CENTRALIZED EQUIVALENCE
Liang, Xinyue (KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering)
Javid, Alireza M. (KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering)
Skoglund, Mikael (KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering; ORCID iD: 0000-0002-7926-5081)
Chatterjee, Saikat (KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering; ORCID iD: 0000-0003-2638-6047)
2018 (English). In: 2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), IEEE, 2018, p. 2976-2980. Conference paper, published paper (refereed).
Abstract [en]

In this article, we develop a distributed algorithm for learning a large neural network that is both deep and wide. We consider a scenario in which the training dataset is not available at a single processing node but is distributed among several nodes. We show that a recently proposed large neural network architecture, the progressive learning network (PLN), can be trained in a distributed setup with centralized equivalence; that is, we obtain the same result as if all the data were available at a single node. Using a distributed convex optimization method, the alternating direction method of multipliers (ADMM), we train the PLN in the distributed setup.
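
PLN training reduces each layer's learning step to a convex optimization solved with ADMM; the sketch below is a simplified, hypothetical stand-in for the distributed step: global-consensus ADMM for a ridge-regularized least-squares problem in which node j holds only its local data block (H_j, Y_j). The ridge formulation, variable names, and parameter values are illustrative assumptions, not the paper's exact constrained problem.

```python
import numpy as np

def consensus_admm_ls(H_parts, Y_parts, lam=1e-2, rho=1.0, iters=300):
    """Global-consensus ADMM for  min_W  sum_j ||Y_j - W H_j||_F^2 + lam*||W||_F^2.
    Node j keeps its data (H_j, Y_j) private; only the fixed-size matrices W_j, U_j, Z are exchanged."""
    d_out, d_in, N = Y_parts[0].shape[0], H_parts[0].shape[0], len(H_parts)
    Z = np.zeros((d_out, d_in))                       # global (consensus) weight matrix
    W = [np.zeros((d_out, d_in)) for _ in range(N)]   # local primal copies
    U = [np.zeros((d_out, d_in)) for _ in range(N)]   # scaled dual variables
    for _ in range(iters):
        for j in range(N):                            # local step: uses node j's data only
            A = 2.0 * H_parts[j] @ H_parts[j].T + rho * np.eye(d_in)
            B = 2.0 * Y_parts[j] @ H_parts[j].T + rho * (Z - U[j])
            W[j] = np.linalg.solve(A.T, B.T).T        # W[j] = B A^{-1}
        Z = rho * sum(W[j] + U[j] for j in range(N)) / (2.0 * lam + N * rho)
        for j in range(N):
            U[j] += W[j] - Z                          # dual update
    return Z

# Tiny check against the centralized ridge solution (what a single node holding all data would compute).
rng = np.random.default_rng(0)
H = [rng.standard_normal((4, 30)) for _ in range(3)]            # 3 nodes, 4 features, 30 samples each
W0 = rng.standard_normal((2, 4))
Y = [W0 @ Hj + 0.01 * rng.standard_normal((2, 30)) for Hj in H]
Z = consensus_admm_ls(H, Y)
Hc, Yc = np.hstack(H), np.hstack(Y)
Wc = np.linalg.solve((Hc @ Hc.T + 1e-2 * np.eye(4)).T, (Yc @ Hc.T).T).T
print(np.linalg.norm(Z - Wc))   # gap to the centralized solution; shrinks toward 0 as iters grows
```

Each iteration needs only local solves plus an exchange of fixed-size weight matrices, so the per-iteration communication does not grow with the number of local samples.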

Place, publisher, year, edition, pages
IEEE, 2018. p. 2976-2980
Keywords [en]
Distributed learning, neural networks, data parallelism, convex optimization
National Category
Communication Systems
Identifiers
URN: urn:nbn:se:kth:diva-237152
DOI: 10.1109/ICASSP.2018.8462179
ISI: 000446384603029
Scopus ID: 2-s2.0-85054237028
OAI: oai:DiVA.org:kth-237152
DiVA, id: diva2:1258546
Conference
2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP)
Note

QC 20181025

Available from: 2018-10-25. Created: 2018-10-25. Last updated: 2022-06-26. Bibliographically approved.
In thesis
1. Decentralized Learning of Randomization-based Neural Networks
2021 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Machine learning and artificial intelligence have been widely explored and have developed rapidly to meet expanding needs in almost every aspect of human activity. In the big data era, siloed, locally stored data have become a major challenge for machine learning. Constrained by scattered data locations and privacy regulations on information sharing, recent studies aim to develop collaborative machine learning techniques in which local models approximate the centralized performance without sharing real data. Privacy preservation is as important as model performance and model complexity. This thesis investigates the scope of a learning model with low computational complexity: randomization-based feed-forward neural networks (RFNs). As a class of artificial neural networks (ANNs), RFNs offer a favorable balance between low computational complexity and satisfactory performance, especially for non-image data. Motivated by the advantages of RFNs and the need for distributed learning solutions, we study the potential and applicability of RFNs and of distributed optimization methods that can lead to the design of decentralized variants of RFNs that deliver the desired results.

First, we provide decentralized learning algorithms based on RFN architectures for an undirected network topology with synchronous communication. We investigate decentralized learning of five RFNs that provide centralized-equivalent performance, as if all training data samples were available at a single node. Two of the five neural networks are shallow and the others are deep. Experiments with nine benchmark datasets show that the five neural networks deliver good performance while requiring low computational and communication complexity for decentralized learning.
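
The concrete algorithms and the five RFN architectures are specified in the thesis and its appended papers; as a generic, hypothetical illustration of the synchronous, undirected-topology setting, the sketch below builds a doubly stochastic Metropolis mixing matrix for a small assumed graph and runs synchronous averaging consensus, the communication primitive by which every node can recover a network-wide (centralized) statistic from purely local exchanges.

```python
import numpy as np

def metropolis_weights(adj):
    """Symmetric, doubly stochastic mixing matrix for an undirected graph (Metropolis rule)."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j] and i != j:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))          # put the remaining mass on the self-loop
    return W

# 5-node ring (an assumed toy topology). Each node starts with one local statistic;
# after repeated synchronous mixing rounds every node holds the network-wide average.
adj = np.array([[0, 1, 0, 0, 1],
                [1, 0, 1, 0, 0],
                [0, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [1, 0, 0, 1, 0]], dtype=float)
W = metropolis_weights(adj)
x = np.array([1.0, 4.0, 2.0, 8.0, 5.0])
for _ in range(200):
    x = W @ x                       # one synchronous communication round with the neighbours
print(np.allclose(x, x.mean()))     # all nodes agree on the global average
```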

We are then motivated to design an asynchronous decentralized learning method that achieves centralized-equivalent performance with low computational complexity and communication overhead. We propose an asynchronous decentralized learning algorithm using ARock-based ADMM to realize decentralized variants of a variety of RFNs. The proposed algorithm enables single-node activation and one-sided communication in an undirected communication network characterized by a doubly stochastic network policy matrix. Moreover, the proposed algorithm obtains the centralized solution with reduced computational cost and improved communication efficiency.
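
The ARock-based ADMM updates are defined in the thesis; the toy sketch below only mimics the activation pattern described here: at each tick a single, randomly chosen node wakes up and updates its own variable using a one-sided read of its neighbours' current values, with the neighbourhood given by an assumed doubly stochastic mixing matrix. It is a randomized Krasnosel'skii-Mann iteration on an averaging operator, not the thesis's algorithm, and the graph, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

# Doubly stochastic mixing matrix of a 5-node ring: each node averages itself and its two neighbours.
P = (np.eye(5) + np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)) / 3.0

rng = np.random.default_rng(1)
x = np.array([1.0, 4.0, 2.0, 8.0, 5.0])    # one local variable per node
eta = 0.5                                  # relaxation step size in (0, 1)
for _ in range(5000):
    i = rng.integers(x.size)               # single-node activation: only node i wakes up
    x[i] -= eta * (x[i] - P[i] @ x)        # one-sided read of neighbours' current values
print(np.ptp(x) < 1e-6)                    # the local variables reach a common (consensus) value
```

Unlike the synchronous rounds above, no node ever waits for the others, which removes the synchronization bottleneck at the cost of a more delicate convergence analysis.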

Finally, we consider the problem of training a neural network over a decentralized network with a high sparsity level in its connections. The issue is addressed by adapting a recently proposed incremental learning approach called 'learning without forgetting'. While an incremental learning approach assumes that data become available in a sequence, the nodes of the decentralized scenario cannot share data among themselves, and there is no master node. Nodes can communicate information about model parameters with their neighbors. Communication of model parameters is the key to adapting the 'learning without forgetting' approach to the decentralized scenario.
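
The exact decentralized 'learning without forgetting' scheme is given in the thesis; the sketch below is only a schematic, squared-loss interpretation of the key idea: a node trains on its private data while penalizing disagreement with the outputs of the model it received from a neighbour, so knowledge travels through parameters instead of data. The linear model, the loss, the weight mu, and the node-to-node walk in the usage example are illustrative assumptions.

```python
import numpy as np

def lwf_local_update(W_in, X, Y, mu=1.0, eps=1e-6):
    """One node's update in a schematic decentralized 'learning without forgetting' step:
    fit the private data (X, Y) while keeping the new model's outputs on X close to those
    of the received neighbour model W_in.
    Closed-form solution of  min_W ||Y - W X||_F^2 + mu * ||W_in X - W X||_F^2."""
    G = X @ X.T
    A = (1.0 + mu) * G + eps * np.eye(G.shape[0])   # small ridge for numerical stability
    B = Y @ X.T + mu * W_in @ G
    return np.linalg.solve(A.T, B.T).T              # W = B A^{-1}

# Toy usage: the model parameters hop from node to node; raw data never leaves a node.
rng = np.random.default_rng(0)
W_true = rng.standard_normal((2, 5))                # common underlying model (for illustration)
W = np.zeros((2, 5))                                # initial model at the first node
for _ in range(10):                                 # visit ten nodes in sequence
    X = rng.standard_normal((5, 40))                # this node's private inputs ...
    Y = W_true @ X                                  # ... and targets
    W = lwf_local_update(W, X, Y, mu=1.0)           # learn locally, remember the incoming model
print(np.linalg.norm(W - W_true) < 1e-2)            # the travelling model approaches the common one
```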

Place, publisher, year, edition, pages
Sweden: KTH Royal Institute of Technology, 2021
Series
TRITA-EECS-AVL ; 2021:40
National Category
Communication Systems; Telecommunications
Research subject
Electrical Engineering
Identifiers
URN: urn:nbn:se:kth:diva-295433
ISBN: 978-91-7873-904-2
Public defence
2021-06-11, https://kth-se.zoom.us/j/64005034683, U1, Brinellvägen 28A, Undervisningshuset, floor 6, KTH Campus, Stockholm, 13:00 (English)
Note

QC 20210520

Available from: 2021-05-20. Created: 2021-05-20. Last updated: 2022-07-08. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus | Conference

Authority records

Liang, Xinyue; Javid, Alireza M.; Skoglund, Mikael; Chatterjee, Saikat
