Neural Greedy Pursuit for Feature Selection
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering. ORCID iD: 0000-0002-8534-7622
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
Weizmann Institute of Science, Mathematics & Computer Science, Rehovot, Israel.
2022 (English). In: 2022 International Joint Conference on Neural Networks (IJCNN), Institute of Electrical and Electronics Engineers (IEEE), 2022. Conference paper, published paper (refereed).
Abstract [en]

We propose a greedy algorithm to select N important features among P input features for a non-linear prediction problem. The features are selected one by one, sequentially, in an iterative loss-minimization procedure. We use neural networks as predictors in the algorithm to compute the loss, and hence we refer to our method as neural greedy pursuit (NGP). NGP is efficient in selecting N features when N << P, and it provides a notion of feature importance in descending order, following the sequential selection procedure. We experimentally show that NGP provides better performance than several feature selection methods, such as DeepLIFT and Drop-one-out loss. In addition, we experimentally show a phase-transition behavior in which perfect selection of all N features without false positives is possible when the training data size exceeds a threshold.
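
The abstract describes the selection loop only at a high level; the listing below is a minimal sketch of that kind of sequential, loss-driven greedy selection with a neural-network predictor. It is not the authors' implementation: the scikit-learn MLPRegressor, the network size, the validation split and the mean-squared-error loss are illustrative assumptions, not the NGP configuration reported in the paper.

# Minimal sketch (not the authors' code) of greedy feature selection with a
# neural predictor, loosely following the procedure described in the abstract.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor


def greedy_select(X, y, n_select, seed=0):
    """Pick n_select feature indices one at a time; at each step keep the
    feature whose inclusion yields the lowest validation loss."""
    X_tr, X_va, y_tr, y_va = train_test_split(
        X, y, test_size=0.25, random_state=seed
    )
    selected = []                          # chosen features, most important first
    remaining = list(range(X.shape[1]))

    for _ in range(n_select):
        best_feat, best_loss = None, np.inf
        for f in remaining:
            cols = selected + [f]
            # Retrain a small neural predictor on the candidate feature set.
            net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500,
                               random_state=seed)
            net.fit(X_tr[:, cols], y_tr)
            loss = mean_squared_error(y_va, net.predict(X_va[:, cols]))
            if loss < best_loss:
                best_feat, best_loss = f, loss
        selected.append(best_feat)
        remaining.remove(best_feat)
    return selected


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))
    # Non-linear target that depends on features 0, 3 and 7 only.
    y = np.sin(X[:, 0]) + X[:, 3] ** 2 + 0.5 * X[:, 7] + 0.1 * rng.normal(size=500)
    print(greedy_select(X, y, n_select=3))  # expected to recover {0, 3, 7}

As the abstract notes, the order in which features are selected doubles as a descending feature-importance ranking; the main cost of such a sketch is retraining the predictor for every remaining candidate feature at every step.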

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022.
Series
IEEE International Joint Conference on Neural Networks (IJCNN), ISSN 2161-4393
Keywords [en]
Feature selection, Deep learning
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-323022
DOI: 10.1109/IJCNN55064.2022.9892946
ISI: 000867070908056
Scopus ID: 2-s2.0-85140774694
OAI: oai:DiVA.org:kth-323022
DiVA id: diva2:1725934
Conference
IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) / IEEE World Congress on Computational Intelligence (IEEE WCCI) / International Joint Conference on Neural Networks (IJCNN) / IEEE Congress on Evolutionary Computation (IEEE CEC), July 18-23, 2022, Padua, Italy
Note

Part of proceedings: ISBN 978-1-7281-8671-9

QC 20230112

Available from: 2023-01-12. Created: 2023-01-12. Last updated: 2023-01-12. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Das, Sandipan; Javid, Alireza M.; Borpatra Gohain, Prakash; Chatterjee, Saikat
