2024 (English) In: International Conference on Machine Learning, ICML 2024, ML Research Press, 2024, p. 38237-38252. Conference paper, Published paper (Refereed)
Abstract [en]
Feature selection is a crucial task in settings where data is high-dimensional or acquiring the full set of features is costly. Recent developments in neural network-based embedded feature selection show promising results across a wide range of applications. Concrete Autoencoders (CAEs), considered state-of-the-art in embedded feature selection, may struggle to achieve stable joint optimization, hurting their training time and generalization. In this work, we identify that this instability is correlated with the CAE learning duplicate selections. To remedy this, we propose a simple and effective improvement: Indirectly Parameterized CAEs (IP-CAEs). IP-CAEs learn an embedding and a mapping from it to the Gumbel-Softmax distributions' parameters. Despite being simple to implement, IP-CAE exhibits significant and consistent improvements over CAE in both generalization and training time across several datasets for reconstruction and classification. Unlike CAE, IP-CAE effectively leverages non-linear relationships and does not require retraining the jointly optimized decoder. Furthermore, our approach is, in principle, generalizable to Gumbel-Softmax distributions beyond feature selection.
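To make the core idea concrete, the sketch below shows one way an indirectly parameterized Concrete selection layer could look in PyTorch. This is a minimal sketch under stated assumptions, not the authors' reference implementation: the class name IPConcreteSelector, the embedding dimension, and the choice of a single linear map from the embedding to the logits are all illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IPConcreteSelector(nn.Module):
    """Sketch of an indirectly parameterized Concrete feature selector.

    A standard CAE learns the (k x d) Gumbel-Softmax logit matrix directly.
    Here, as described in the abstract, we instead learn a (k x m) embedding
    and a mapping from it to the logits. The linear map below is one simple
    choice of mapping, assumed for illustration.
    """

    def __init__(self, num_features: int, num_selected: int, embed_dim: int = 64):
        super().__init__()
        # Learned embedding: one row per feature to be selected (illustrative size).
        self.embedding = nn.Parameter(torch.randn(num_selected, embed_dim))
        # Learned mapping from the embedding to per-feature selection logits.
        self.to_logits = nn.Linear(embed_dim, num_features)

    def forward(self, x: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
        # Indirect parameterization: logits are a function of the embedding,
        # rather than free parameters as in the original CAE.
        logits = self.to_logits(self.embedding)                 # (k, d)
        # Relaxed one-hot selection via Gumbel-Softmax (Concrete) sampling.
        weights = F.gumbel_softmax(logits, tau=temperature)     # (k, d)
        # Soft-select k features from each input: (batch, d) -> (batch, k).
        return x @ weights.t()
```

In a full IP-CAE, the output of such a selector would feed a jointly optimized decoder (for reconstruction) or classifier, with the temperature typically annealed toward hard selections over training; those details are omitted here.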
Place, publisher, year, edition, pages
ML Research Press, 2024
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-353956 (URN)
2-s2.0-85203808876 (Scopus ID)
Conference
41st International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024
Note
QC 20240926
Available from: 2024-09-25 Created: 2024-09-25 Last updated: 2024-09-26 Bibliographically approved