Learning without Forgetting for Decentralized Neural Nets with Low Communication Overhead
Liang, Xinyue. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering. ORCID iD: 0000-0003-4406-536X
Javid, Alireza M. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering. ORCID iD: 0000-0002-8534-7622
Skoglund, Mikael. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering. ORCID iD: 0000-0002-7926-5081
Chatterjee, Saikat. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering. ORCID iD: 0000-0003-2638-6047
2021 (English). In: 2020 28th European Signal Processing Conference (EUSIPCO), Institute of Electrical and Electronics Engineers (IEEE), 2021, pp. 2185-2189. Conference paper, published paper (refereed).
Abstract [en]

We consider the problem of training a neural net in a decentralized scenario with low communication overhead. The problem is addressed by adapting a recently proposed incremental learning approach called 'learning without forgetting'. While an incremental learning approach assumes that data arrive in a sequence, the nodes of the decentralized scenario cannot share data with each other, and there is no master node. Nodes can, however, communicate information about model parameters among neighbors. Communication of model parameters is the key to adapting the 'learning without forgetting' approach to the decentralized scenario. We use random-walk-based communication to handle a highly limited communication resource.
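
As a rough sketch of the random-walk-based communication described above (this is not the authors' code; the graph, the linear model, and the local update rule are illustrative assumptions), the fragment below passes the model parameters along a random walk over the communication graph: each visited node takes a training step on its private data and then forwards the parameters to a randomly chosen neighbor.

```python
# Minimal sketch of random-walk-based decentralized training.
# The topology, model, and local loss are placeholders, not the paper's setup.
import random

import numpy as np

# Undirected communication graph as an adjacency list (assumed topology).
neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}

# Each node holds its own private data shard (synthetic here).
rng = np.random.default_rng(0)
local_data = {v: (rng.normal(size=(50, 4)), rng.normal(size=50)) for v in neighbors}

def local_step(w, X, y, lr=0.01):
    """One gradient step on the node's private least-squares loss."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

w = np.zeros(4)   # model parameters carried along the walk
node = 0          # the walk starts at an arbitrary node
for _ in range(1000):
    X, y = local_data[node]
    w = local_step(w, X, y)                # train on local data only
    node = random.choice(neighbors[node])  # forward parameters to a neighbor
print("final parameters:", w)
```

No data ever leaves a node; only the parameter vector travels, which is what keeps the communication overhead low.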

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2021, pp. 2185-2189.
Keywords [en]
Decentralized learning, feedforward neural net, learning without forgetting, low communication overhead
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:kth:diva-295432
DOI: 10.23919/Eusipco47968.2020.9287777
ISI: 000632622300440
Scopus ID: 2-s2.0-85099303579
OAI: oai:DiVA.org:kth-295432
DiVA id: diva2:1556165
Conference
28th European Signal Processing Conference (EUSIPCO), Amsterdam
Note

QC 20210621

Available from: 2021-05-20. Created: 2021-05-20. Last updated: 2022-06-25. Bibliographically approved.
In thesis
1. Decentralized Learning of Randomization-based Neural Networks
2021 (English). Doctoral thesis, comprehensive summary (other academic).
Abstract [en]

Machine learning and artificial intelligence have been widely explored and have developed rapidly to meet expanding needs in almost every aspect of human activity. In the big data era, siloed, locally stored data has become a major challenge for machine learning. Constrained by scattered data locations and privacy regulations on information sharing, recent studies aim to develop collaborative machine learning techniques that let local models approximate centralized performance without sharing real data. Privacy preservation is as important as model performance and model complexity. This thesis investigates a class of learning models with low computational complexity: randomization-based feedforward neural networks (RFNs). As a class of artificial neural networks (ANNs), RFNs offer a favorable balance between low computational complexity and satisfactory performance, especially for non-image data. Driven by the advantages of RFNs and the need for distributed learning solutions, we study the potential and applicability of RFNs and of distributed optimization methods that may lead to decentralized variants of RFNs delivering the desired results.

First, we provide decentralized learning algorithms based on RFN architectures for undirected network topologies using synchronous communication. We investigate decentralized learning of five RFNs that provide centralized-equivalent performance, as if all training data samples were available at a single node. Two of the five neural networks are shallow, and the others are deep. Experiments with nine benchmark datasets show that the five neural networks provide good performance while requiring low computational and communication complexity for decentralized learning.
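
To give a flavor of how centralized-equivalent decentralized training can work here: randomization-based networks typically reduce training to a regularized least-squares problem for the output weights, which lends itself to consensus ADMM. The sketch below is an illustration under assumed data, penalty parameters, and a simplified z-update, not the thesis algorithm.

```python
# Minimal sketch: consensus ADMM for the regularized least-squares output
# layer that randomization-based networks typically solve. Illustrative only;
# lam, rho, and the data shards are made up.
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_feat = 4, 8
# Each node holds a private shard: random features H_i and targets y_i.
H = [rng.normal(size=(30, n_feat)) for _ in range(n_nodes)]
y = [rng.normal(size=30) for _ in range(n_nodes)]

lam, rho = 0.1, 1.0
z = np.zeros(n_feat)                              # consensus variable
w = [np.zeros(n_feat) for _ in range(n_nodes)]    # local models
u = [np.zeros(n_feat) for _ in range(n_nodes)]    # scaled dual variables

for _ in range(100):
    # Local w-updates: each node solves a small regularized least squares.
    for i in range(n_nodes):
        A = H[i].T @ H[i] + rho * np.eye(n_feat)
        b = H[i].T @ y[i] + rho * (z - u[i])
        w[i] = np.linalg.solve(A, b)
    # z-update: written as a global average for readability; in the truly
    # decentralized setting this average is formed by neighbor communication.
    z = rho * sum(w[i] + u[i] for i in range(n_nodes)) / (lam + rho * n_nodes)
    # Dual updates.
    for i in range(n_nodes):
        u[i] += w[i] - z

# z approximates the centralized ridge solution on the pooled data.
Hc, yc = np.vstack(H), np.concatenate(y)
w_central = np.linalg.solve(Hc.T @ Hc + lam * np.eye(n_feat), Hc.T @ yc)
print("gap to centralized solution:", np.linalg.norm(z - w_central))
```

The "centralized equivalent" claim corresponds to the gap above shrinking with the iterations: every node ends up with the same weights it would have obtained had all shards been pooled at one machine.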

Next, we are motivated to design an asynchronous decentralized learning scheme that achieves centralized-equivalent performance with low computational complexity and communication overhead. We propose an asynchronous decentralized learning algorithm using ARock-based ADMM to realize decentralized variants of a variety of RFNs. The proposed algorithm enables single-node activation and one-sided communication in an undirected communication network characterized by a doubly stochastic network policy matrix. Moreover, the proposed algorithm obtains the centralized solution with reduced computational cost and improved communication efficiency.
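
To convey the single-node-activation idea, here is a toy ARock-style iteration. It is illustrative only: the mixing matrix, step size, and the simple neighbor-averaging operator are assumptions, whereas the thesis applies the same idea to an ADMM operator. At each tick, one randomly awakened node applies its own coordinate of a fixed-point operator using whatever neighbor values it currently sees, with no global clock or synchronization barrier.

```python
# Toy instance of the ARock idea: uncoordinated, single-node activations that
# each apply one coordinate of a nonexpansive fixed-point operator. Here the
# operator is neighbor averaging (its fixed points are consensus states).
import random

import numpy as np

# Doubly stochastic mixing matrix W for a 4-node ring (assumed values).
W = np.array([
    [0.50, 0.25, 0.25, 0.00],
    [0.25, 0.50, 0.00, 0.25],
    [0.25, 0.00, 0.50, 0.25],
    [0.00, 0.25, 0.25, 0.50],
])
x = np.array([1.0, 5.0, 3.0, 7.0])  # each node starts with a private value

eta = 0.8  # relaxation step size
for _ in range(500):
    i = random.randrange(len(x))  # one node wakes up; no global clock
    # Krasnosel'skii-Mann coordinate update: move x_i toward (W x)_i using
    # the neighbor values currently visible (possibly stale in practice).
    x[i] = (1 - eta) * x[i] + eta * (W[i] @ x)

print("node values after asynchronous updates:", x)  # close to consensus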

Finally, we consider the problem of training a neural net over a decentralized scenario with a high sparsity level in the connections. The issue is addressed by adapting a recently proposed incremental learning approach called 'learning without forgetting'. While an incremental learning approach assumes that data arrive in a sequence, nodes of the decentralized scenario cannot share data with each other, and there is no master node. Nodes can communicate information about model parameters among neighbors. Communication of model parameters is the key to adapting the 'learning without forgetting' approach to the decentralized scenario.
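
A minimal sketch of the 'learning without forgetting' flavor of the local update, under assumed shapes and a linear model: the receiving node fits its own data while a distillation-style penalty keeps its outputs close to those of the model it received, so knowledge acquired at previously visited nodes is not overwritten. The function name and the loss weights are hypothetical.

```python
# Sketch of a 'learning without forgetting'-style local update at one node.
# The node trains on its private data, while a distillation term anchors the
# new model's outputs to those of the received ('teacher') model.
import numpy as np

def lwf_local_update(w_received, X, y, alpha=0.5, lr=0.05, steps=200):
    """Fit w to local (X, y) with a penalty toward the received model's outputs."""
    w = w_received.copy()
    old_outputs = X @ w_received  # teacher predictions, kept fixed
    for _ in range(steps):
        pred = X @ w
        # Gradient of: ||pred - y||^2 + alpha * ||pred - old_outputs||^2
        grad = 2 * X.T @ ((pred - y) + alpha * (pred - old_outputs)) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(2)
X, y = rng.normal(size=(40, 5)), rng.normal(size=40)
w_new = lwf_local_update(rng.normal(size=5), X, y)
print("updated local model:", w_new)
```

The weight alpha trades off learning the local data against forgetting what the incoming model already knows; it plays the role of the distillation weight in the original incremental-learning formulation.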

Place, publisher, year, edition, pages
Sweden: KTH Royal Institute of Technology, 2021
Series
TRITA-EECS-AVL; 2021:40
National Category
Communication Systems; Telecommunications
Research subject
Electrical Engineering
Identifiers
URN: urn:nbn:se:kth:diva-295433
ISBN: 978-91-7873-904-2
Public defence
2021-06-11, 13:00 (English). U1, Brinellvägen 28A, Undervisningshuset, floor 6, KTH Campus, Stockholm; online: https://kth-se.zoom.us/j/64005034683
Note

QC 20210520

Available from: 2021-05-20. Created: 2021-05-20. Last updated: 2022-07-08. Bibliographically approved.

Open Access in DiVA

fulltext (2305 kB), 308 downloads
File information
File name: FULLTEXT01.pdf. File size: 2305 kB. Checksum: SHA-512.
edce9878f95eab691b16aebd8767b7d4d28406b74ef94e0b6614ddb69c579c067b90a7d337a25de9d35a76bf4e445101828180467dba666c8f30861cde3e6cca
Type: fulltext. Mimetype: application/pdf.

Other links

Publisher's full text; Scopus

Authority records

Liang, Xinyue; Javid, Alireza M.; Skoglund, Mikael; Chatterjee, Saikat

