Neural Lattice Reduction: A Self-Supervised Geometric Deep Learning Approach
KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Algebra, Combinatorics and Topology. ORCID iD: 0009-0004-8248-229X
Qualcomm AI Research Amsterdam, The Netherlands.
Qualcomm AI Research Amsterdam, The Netherlands.
Qualcomm AI Research Amsterdam, The Netherlands.
2025 (English). In: Transactions on Machine Learning Research, E-ISSN 2835-8856, Vol. 2025-February, p. 1-22. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

Lattice reduction is a combinatorial optimization problem aimed at finding the most orthogonal basis in a given lattice. The Lenstra–Lenstra–Lovász (LLL) algorithm is the best algorithm in the literature for solving this problem. In light of recent research on algorithm discovery, in this work we aim to answer the following question: is it possible to parametrize the algorithm space for the lattice reduction problem with neural networks and find an algorithm without supervised data? Our strategy is to use equivariant and invariant parametrizations and to train in a self-supervised way. We design a deep neural model outputting factorized unimodular matrices and train it in a self-supervised manner by penalizing non-orthogonal lattice bases. We incorporate the symmetries of lattice reduction into the model by making it invariant to isometries and scaling of the ambient space and equivariant with respect to the hyperoctahedral group permuting and flipping the lattice basis elements. We show that this approach yields an algorithm with comparable complexity and performance to the LLL algorithm on a set of benchmarks. Additionally, motivated by certain applications in wireless communication, we extend our method to a convolutional architecture which performs joint reduction of spatially-correlated lattices arranged in a grid, thereby amortizing its cost over multiple lattices.
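The self-supervised objective the abstract describes penalizes non-orthogonal bases. One standard way to quantify this is the orthogonality defect: the product of the basis vectors' lengths divided by the lattice covolume, which equals 1 exactly when the basis is orthogonal. The sketch below illustrates this measure as a plausible candidate for such a penalty; the paper's exact loss function may differ.

```python
import numpy as np

def orthogonality_defect(B):
    """Orthogonality defect of a lattice basis B (rows are basis vectors).

    Equals 1 iff the basis is orthogonal; larger values mean a less
    orthogonal basis, so minimizing it is a natural self-supervised
    signal for lattice reduction (no reduced reference basis needed).
    """
    norms = np.linalg.norm(B, axis=1)          # lengths of the basis vectors
    covolume = np.sqrt(np.linalg.det(B @ B.T)) # covolume of the lattice
    return float(np.prod(norms) / covolume)

# An orthogonal basis has defect 1; a skewed basis has defect > 1.
identity_basis = np.eye(2)
skewed_basis = np.array([[1.0, 0.0],
                         [10.0, 1.0]])  # same lattice as Z^2, bad basis
assert abs(orthogonality_defect(identity_basis) - 1.0) < 1e-9
assert orthogonality_defect(skewed_basis) > 1.0
```

Because the defect is a smooth function of the basis entries, a differentiable variant of it (e.g. its logarithm) could serve directly as a training penalty for a neural model that outputs unimodular transformations.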

Place, publisher, year, edition, pages
Transactions on Machine Learning Research, 2025. Vol. 2025-February, p. 1-22
National Category
Signal Processing; Computational Mathematics
Identifiers
URN: urn:nbn:se:kth:diva-361196
Scopus ID: 2-s2.0-85219517527
OAI: oai:DiVA.org:kth-361196
DiVA, id: diva2:1944151
Note

QC 20250313

Available from: 2025-03-12. Created: 2025-03-12. Last updated: 2025-03-13. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Scopus
Fulltext (OpenReview)

Authority records

Marchetti, Giovanni Luca
