Decentralized learning of randomization-based neural networks with centralized equivalence
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Information Science and Engineering (Digital Futures). ORCID iD: 0000-0003-4406-536X
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Information Science and Engineering (Digital Futures). ORCID iD: 0000-0002-8534-7622
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Information Science and Engineering (Digital Futures). ORCID iD: 0000-0002-7926-5081
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Information Science and Engineering (Digital Futures). ORCID iD: 0000-0003-2638-6047
2022 (English). In: Applied Soft Computing, ISSN 1568-4946, E-ISSN 1872-9681, Vol. 115, article id 108030. Article in journal (Refereed). Published.
Abstract [en]

We consider a decentralized learning problem in which training data samples are distributed over the agents (processing nodes) of an underlying communication network topology without any central (master) node. Due to information privacy and security concerns in a decentralized setup, nodes are not allowed to share their training data; only the parameters of the neural network may be shared. This article investigates decentralized learning of randomization-based neural networks that provides centralized-equivalent performance, as if the full training data were available at a single node. We consider five randomization-based neural networks that use convex optimization for learning. Two of the five neural networks are shallow, and the others are deep. The use of convex optimization is the key to applying the alternating direction method of multipliers (ADMM) with decentralized average consensus, which allows us to establish decentralized learning with centralized equivalence. For the underlying communication network topology, we use a doubly stochastic network policy matrix and synchronous communication. Experiments with nine benchmark datasets show that the five neural networks provide good performance while requiring low computational and communication complexity for decentralized learning. The performance rankings of the five neural networks according to the Friedman rank test are also included in the results: ELM < RVFL < dRVFL < edRVFL < SSFN.
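
To make the consensus mechanism mentioned in the abstract concrete, below is a minimal sketch of decentralized average consensus with a doubly stochastic mixing matrix and synchronous communication rounds. The 5-node ring topology and the Metropolis weighting rule are illustrative assumptions of ours, not details taken from the paper; the sketch only demonstrates why every node can recover the centralized average without a master node.

# Illustrative sketch only: decentralized average consensus with a doubly
# stochastic mixing matrix. The 5-node ring topology and Metropolis weights
# are our assumptions, not the paper's exact formulation.
import numpy as np

def metropolis_weights(adjacency):
    # The Metropolis rule yields a symmetric, doubly stochastic mixing
    # matrix W for any connected undirected graph.
    n = adjacency.shape[0]
    degree = adjacency.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                W[i, j] = 1.0 / (1.0 + max(degree[i], degree[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

n = 5
A = np.zeros((n, n), dtype=int)
for i in range(n):                      # ring: node i talks to i-1 and i+1
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

W = metropolis_weights(A)
x = np.random.randn(n, 3)               # row i: node i's local parameters
centralized_avg = x.mean(axis=0)        # what a central node would compute

for _ in range(200):                    # synchronous rounds: x <- W @ x
    x = W @ x                           # every row converges to the average

print(np.allclose(x, centralized_avg, atol=1e-6))  # True

Because W is doubly stochastic and the graph is connected, iterating x ← Wx drives every node's local copy to the network-wide average; this averaging primitive is what the ADMM-based training can build on to match the centralized solution.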

Place, publisher, year, edition, pages
Elsevier BV, 2022. Vol. 115, article id 108030.
Keywords [en]
Randomized neural network, Distributed learning, Multi-layer feedforward neural network, Alternating direction method of multipliers
National Category
Telecommunications
Identifiers
URN: urn:nbn:se:kth:diva-307316
DOI: 10.1016/j.asoc.2021.108030
ISI: 000736977500005
Scopus ID: 2-s2.0-85120883070
OAI: oai:DiVA.org:kth-307316
DiVA, id: diva2:1630460
Note

QC 20220120

Available from: 2022-01-20. Created: 2022-01-20. Last updated: 2022-06-25. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Liang, Xinyue; Javid, Alireza M.; Skoglund, Mikael; Chatterjee, Saikat
