High-dimensional neural feature using rectified linear unit and random matrix instance
Javid, Alireza M. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering. ORCID iD: 0000-0002-8534-7622
Venkitaraman, Arun. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering. ORCID iD: 0000-0003-1285-8947
Skoglund, Mikael. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering. ORCID iD: 0000-0002-7926-5081
Chatterjee, Saikat. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering. ORCID iD: 0000-0003-2638-6047
2020 (English). In: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Institute of Electrical and Electronics Engineers (IEEE), 2020, p. 4237-4241. Conference paper, published paper (refereed).
Abstract [en]

We design a ReLU-based multilayer neural network to generate a rich high-dimensional feature vector. The feature guarantees a monotonically decreasing training cost as the number of layers increases. We design the weight matrix in each layer to extend the feature vector to a higher-dimensional space while providing a richer representation in the sense of training cost. Linear projection to the target in the higher-dimensional space leads to a lower training cost whenever a convex cost is minimized. An ℓ2-norm convex constraint is used in the minimization to improve the generalization error and avoid overfitting. The regularization hyperparameters of the network are derived analytically to guarantee a monotonic decrease of the training cost, which eliminates the need for cross-validation to find the regularization hyperparameter in each layer.
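A rough NumPy sketch of the construction the abstract describes, under illustrative assumptions of my own: each layer weight stacks a fixed [I; -I] block (so the input survives the ReLU, since x = ReLU(x) - ReLU(-x)) on a random matrix instance that lifts the feature to a higher dimension, and the output is fit by ridge-regularized least squares. The helper names (layer_weight, ridge_projection), all sizes, and the fixed ridge weight lam are hypothetical; the paper derives the regularization hyperparameters analytically rather than fixing them.

    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(x, 0.0)

    def layer_weight(d_in, d_out, rng):
        # Assumed construction: a fixed [I; -I] block lets the input pass
        # through the ReLU losslessly (x = ReLU(x) - ReLU(-x)); a random
        # matrix instance extends the feature to dimension d_out.
        assert d_out >= 2 * d_in
        V = np.vstack([np.eye(d_in), -np.eye(d_in)])
        R = rng.standard_normal((d_out - 2 * d_in, d_in))
        return np.vstack([V, R])

    def ridge_projection(Z, T, lam):
        # Convex least squares with an l2 penalty (closed form):
        # min_O ||T - O Z||_F^2 + lam ||O||_F^2
        d = Z.shape[0]
        return T @ Z.T @ np.linalg.inv(Z @ Z.T + lam * np.eye(d))

    # Toy data: N samples of input dimension d, target dimension q.
    d, q, N, lam = 4, 3, 200, 1e-2
    X = rng.standard_normal((d, N))
    T = rng.standard_normal((q, N))

    Z = X
    for d_out in (12, 28, 60):  # growing feature dimensions (illustrative)
        Z = relu(layer_weight(Z.shape[0], d_out, rng) @ Z)
        O = ridge_projection(Z, T, lam)
        cost = np.linalg.norm(T - O @ Z) ** 2
        print(f"feature dim {d_out:3d}  training cost {cost:.4f}")

Because the ReLU output linearly contains the previous layer's feature, the optimal convex cost over the extended feature can never exceed the previous optimum; with the fixed lam used here the printed cost typically, though not provably, decreases, whereas the paper's analytically chosen hyperparameters guarantee it.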

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2020. p. 4237-4241
Series
International Conference on Acoustics, Speech and Signal Processing (ICASSP), ISSN 1520-6149
Keywords [en]
Rectified linear unit, random matrix, convex cost function
National Category
Control Engineering; Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-292709
DOI: 10.1109/ICASSP40776.2020.9054736
ISI: 000615970404097
Scopus ID: 2-s2.0-85091295366
OAI: oai:DiVA.org:kth-292709
DiVA id: diva2:1544039
Conference
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 4-8, 2020, Barcelona, Spain
Note

QC 20210710

Available from: 2021-04-14. Created: 2021-04-14. Last updated: 2022-06-25. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Javid, Alireza M.; Venkitaraman, Arun; Skoglund, Mikael; Chatterjee, Saikat
