On Symmetries and Metrics in Geometric Inference
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning, RPL.
2024 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Spaces of data naturally carry intrinsic geometry. Statistics and machine learning can leverage this rich structure to achieve efficiency and semantic generalization. Extracting geometry from data is therefore a fundamental challenge which in itself defines a statistical, computational, and unsupervised learning problem. To this end, symmetries and metrics are two fundamental objects which are ubiquitous in continuous and discrete geometry. Both are suitable for data-driven approaches, since symmetries arise as interactions and are thus collectable in practice, while metrics can be induced locally from the ambient space. In this thesis, we address the question of extracting geometry from data by leveraging symmetries and metrics. Additionally, we explore methods for statistical inference that exploit the extracted geometric structure. On the metric side, we focus on Voronoi tessellations and Delaunay triangulations, which are classical tools in computational geometry. Based on them, we propose novel non-parametric methods for machine learning and statistics, focusing on theoretical and computational aspects. These methods include an active version of the nearest neighbor regressor as well as two high-dimensional density estimators. All of them possess convergence guarantees due to the adaptiveness of Voronoi cells. On the symmetry side, we focus on representation learning in the context of data acted upon by a group. Specifically, we propose a method for learning equivariant representations which are guaranteed to be isomorphic to the data space, even in the presence of symmetries stabilizing data. We additionally explore applications of such representations in a robotics context, where symmetries correspond to actions performed by an agent. Lastly, we provide a theoretical analysis of invariant neural networks and show how the group-theoretical Fourier transform emerges in their weights, which addresses the problem of symmetry discovery in a self-supervised manner.


Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2024. p. 61
Series
TRITA-EECS-AVL ; 2024:26
Keywords [en]
Machine Learning, Computational Geometry, Voronoi, Delaunay, Symmetry, Equivariance
HSV category
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-344129
ISBN: 978-91-8040-864-6 (print)
OAI: oai:DiVA.org:kth-344129
DiVA, id: diva2:1842047
Public defence
2024-04-09, https://kth-se.zoom.us/j/61437033234?pwd=dnBpMnYyaDVWWC95RHNTakNXWkNRQT09, F3 (Flodis) Lindstedtsvägen 26, Stockholm, 09:00 (English)
Opponent
Supervisors
Note

QC 20240304

Available from: 2024-03-04 Created: 2024-03-02 Last updated: 2025-05-23 Bibliographically approved
List of papers
1. Active Nearest Neighbor Regression Through Delaunay Refinement
2022 (English) In: Proceedings of the 39th International Conference on Machine Learning, ML Research Press, 2022, Vol. 162, pp. 11650-11664. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce an algorithm for active function approximation based on nearest neighbor regression. Our Active Nearest Neighbor Regressor (ANNR) relies on the Voronoi-Delaunay framework from computational geometry to subdivide the space into cells with constant estimated function value and select novel query points in a way that takes the geometry of the function graph into account. We consider the recent state-of-the-art active function approximator called DEFER, which is based on incremental rectangular partitioning of the space, as the main baseline. The ANNR addresses a number of limitations that arise from the space subdivision strategy used in DEFER. We provide a computationally efficient implementation of our method, as well as theoretical halting guarantees. Empirical results show that ANNR outperforms the baseline for both closed-form functions and real-world examples, such as gravitational wave parameter inference and exploration of the latent space of a generative model.
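The core mechanics behind the ANNR can be illustrated in a few lines. The sketch below is a toy under stated assumptions, not the authors' implementation: nearest-neighbor prediction is piecewise constant on Voronoi cells, and the Delaunay triangulation of the queried points supplies candidate locations for the next query. The `candidate_queries` heuristic (scoring simplices by the spread of observed values and returning a centroid) is a simplified, hypothetical stand-in for the paper's refinement criterion.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))           # points queried so far
y = np.sin(3 * X[:, 0]) * np.cos(3 * X[:, 1])  # observed function values

def nn_predict(X_train, y_train, X_query):
    """Predict by the value of the nearest training point (constant per Voronoi cell)."""
    d = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=-1)
    return y_train[np.argmin(d, axis=1)]

def candidate_queries(X_train, y_train):
    """Score Delaunay simplices by the spread of values at their vertices and
    return the centroid of the worst one as the next query (a cheap proxy)."""
    tri = Delaunay(X_train)
    spreads = np.ptp(y_train[tri.simplices], axis=1)  # max-min value per simplex
    worst = tri.simplices[np.argmax(spreads)]
    return X_train[worst].mean(axis=0)

x_new = candidate_queries(X, y)
print(x_new, nn_predict(X, y, x_new[None, :]))
```

Scoring by value spread concentrates queries where the function graph varies most, which is the geometric intuition the abstract describes.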

Place, publisher, year, edition, pages
ML Research Press, 2022
Series
Proceedings of Machine Learning Research, ISSN 2640-3498 ; 162
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-319194
000900064901033
Scopus ID: 2-s2.0-85163127180
Conference
39th International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 17-23 July, 2022
Note

QC 20230509

Available from: 2022-09-28 Created: 2022-09-28 Last updated: 2024-03-02 Bibliographically approved
2. Voronoi Density Estimator for High-Dimensional Data: Computation, Compactification and Convergence
2022 (English) In: Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, PMLR, 2022, Vol. 180, pp. 1644-1653. Conference paper, Published paper (Refereed)
Abstract [en]

The Voronoi Density Estimator (VDE) is an established density estimation technique that adapts to the local geometry of data. However, its applicability has been so far limited to problems in two and three dimensions. This is because Voronoi cells rapidly increase in complexity as dimensions grow, making the necessary explicit computations infeasible. We define a variant of the VDE deemed Compactified Voronoi Density Estimator (CVDE), suitable for higher dimensions. We propose computationally efficient algorithms for numerical approximation of the CVDE and formally prove convergence of the estimated density to the original one. We implement and empirically validate the CVDE through a comparison with the Kernel Density Estimator (KDE). Our results indicate that the CVDE outperforms the KDE on sound and image data.
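The Voronoi principle underlying the VDE is easy to state in one dimension, where cells are intervals whose volumes are computable in closed form. The sketch below is an illustrative toy, not the authors' estimator; the clipping to `[lo, hi]` is a crude stand-in for the paper's compactification of the unbounded boundary cells.

```python
import numpy as np

def voronoi_density_1d(samples, query, lo, hi):
    """Each sample gets uniform mass 1/n spread over its Voronoi cell, so the
    density is inversely proportional to the cell length. In 1D, cell
    boundaries are the midpoints between consecutive sorted samples."""
    s = np.sort(samples)
    mids = (s[:-1] + s[1:]) / 2
    edges = np.concatenate(([lo], mids, [hi]))      # Voronoi cell boundaries
    lengths = np.diff(edges)
    idx = np.clip(np.searchsorted(edges, query) - 1, 0, len(s) - 1)
    return 1.0 / (len(s) * lengths[idx])            # mass 1/n per cell

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=500)
print(voronoi_density_1d(x, np.array([0.0, 2.5]), lo=-5.0, hi=5.0))
```

The estimate is automatically normalized (each cell integrates to 1/n) and adapts to local sample density without a bandwidth parameter, which is the adaptiveness the abstract contrasts with the KDE.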

Place, publisher, year, edition, pages
PMLR, 2022
Series
Proceedings of Machine Learning Research, ISSN 2640-3498
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-319195
Scopus ID: 2-s2.0-85163412377
Conference
The 38th Conference on Uncertainty in Artificial Intelligence, Eindhoven, The Netherlands, Aug 1-5 2022
Note

QC 20221003

Available from: 2022-09-28 Created: 2022-09-28 Last updated: 2024-07-23 Bibliographically approved
3. An Efficient and Continuous Voronoi Density Estimator
2023 (English) In: Proceedings of the 26th International Conference on Artificial Intelligence and Statistics, AISTATS 2023, ML Research Press, 2023, pp. 4732-4744. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce a non-parametric density estimator deemed Radial Voronoi Density Estimator (RVDE). RVDE is grounded in the geometry of Voronoi tessellations and as such benefits from local geometric adaptiveness and broad convergence properties. Due to its radial definition, RVDE is continuous and computable in linear time with respect to the dataset size. This remedies the main shortcomings of previously studied VDEs, which are highly discontinuous and computationally expensive. We provide a theoretical study of the modes of RVDE as well as an empirical investigation of its performance on high-dimensional data. Results show that RVDE outperforms other non-parametric density estimators, including recently introduced VDEs.

Place, publisher, year, edition, pages
ML Research Press, 2023
Series
Proceedings of Machine Learning Research, ISSN 2640-3498 ; 206
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-334436
001222727704044
Scopus ID: 2-s2.0-85165187458
Conference
26th International Conference on Artificial Intelligence and Statistics, AISTATS 2023, Valencia, Spain, Apr 25 2023 - Apr 27 2023
Note

QC 20241204

Available from: 2023-08-21 Created: 2023-08-21 Last updated: 2025-02-07 Bibliographically approved
4. Equivariant Representation Learning via Class-Pose Decomposition
2023 (English) In: Proceedings of the 26th International Conference on Artificial Intelligence and Statistics, AISTATS 2023, ML Research Press, 2023, Vol. 206, pp. 4745-4756. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce a general method for learning representations that are equivariant to symmetries of data. Our central idea is to decompose the latent space into an invariant factor and the symmetry group itself. The components semantically correspond to intrinsic data classes and poses respectively. The learner is trained on a loss encouraging equivariance based on supervision from relative symmetry information. The approach is motivated by theoretical results from group theory and guarantees representations that are lossless, interpretable and disentangled. We provide an empirical investigation via experiments involving datasets with a variety of symmetries. Results show that our representations capture the geometry of data and outperform other equivariant representation learning frameworks.
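The class-pose decomposition and its equivariance loss can be sketched concretely for planar rotations. The linear encoder `W` below is hypothetical and purely illustrative (the paper trains a neural network), but the loss has the shape the abstract describes: the class factor should be invariant, while the pose factor should transform by the supervised relative symmetry.

```python
import numpy as np

def rot(theta):
    """2D rotation matrix, the action of the group SO(2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def encode(x, W):
    """Hypothetical linear encoder; the latent splits into (class, pose)."""
    z = W @ x
    return z[:2], z[2:]

def equivariance_loss(x, gx, theta, W):
    cls1, pose1 = encode(x, W)
    cls2, pose2 = encode(gx, W)
    # class part should be invariant; pose part should rotate by theta
    return np.sum((cls2 - cls1) ** 2) + np.sum((pose2 - rot(theta) @ pose1) ** 2)

# An encoder whose pose part is the identity achieves zero loss on rotated pairs.
W = np.vstack([np.zeros((2, 2)), np.eye(2)])
x = np.array([1.0, 0.0])
theta = 0.7
print(equivariance_loss(x, rot(theta) @ x, theta, W))  # → 0.0
```

Note that the supervision is only the relative rotation between the paired observations, matching the "relative symmetry information" in the abstract.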

Place, publisher, year, edition, pages
ML Research Press, 2023
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-334435
001222727704045
Scopus ID: 2-s2.0-85165155542
Conference
26th International Conference on Artificial Intelligence and Statistics, AISTATS 2023, Valencia, Spain, Apr 25 2023 - Apr 27 2023
Note

QC 20241204

Available from: 2023-08-21 Created: 2023-08-21 Last updated: 2025-02-09 Bibliographically approved
5. Equivariant Representation Learning in the Presence of Stabilizers
2023 (English) In: Machine Learning and Knowledge Discovery in Databases: Research Track - European Conference, ECML PKDD 2023, Proceedings, Springer Nature, 2023, pp. 693-708. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce Equivariant Isomorphic Networks (EquIN), a method for learning representations that are equivariant with respect to general group actions over data. Unlike existing equivariant representation learners, EquIN is suitable for group actions that are not free, i.e., that stabilize data via nontrivial symmetries. EquIN is theoretically grounded in the orbit-stabilizer theorem from group theory. This guarantees that an ideal learner infers isomorphic representations while trained on equivariance alone and thus fully extracts the geometric structure of data. We provide an empirical investigation on image datasets with rotational symmetries and show that taking stabilizers into account improves the quality of the representations.
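The orbit-stabilizer theorem that grounds EquIN can be checked on a toy non-free action, which is all this snippet does (it is not the paper's network): the cyclic group C4 acts on 2x2 arrays by 90-degree rotation, and an array with 180-degree symmetry has a nontrivial stabilizer.

```python
import numpy as np

# An array invariant under 180-degree rotation: its stabilizer in C4 is {0, 180}.
x = np.array([[1, 0], [0, 1]])

# Orbit: distinct arrays reachable by the four rotations.
orbit = {np.rot90(x, k).tobytes() for k in range(4)}
# Stabilizer: rotations that fix the array.
stabilizer = [k for k in range(4) if np.array_equal(np.rot90(x, k), x)]

# Orbit-stabilizer theorem: |orbit| * |stabilizer| = |C4|.
print(len(orbit), len(stabilizer))  # → 2 2
```

A free action would give an orbit of size 4 and a trivial stabilizer; accounting for the stabilizer is exactly what distinguishes EquIN from learners that assume freeness.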

Place, publisher, year, edition, pages
Springer Nature, 2023
Keywords
Equivariance, Lie Groups, Representation Learning
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-339298
DOI: 10.1007/978-3-031-43421-1_41
001156141200041
Scopus ID: 2-s2.0-85174442272
Conference
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2023, Turin, Italy, Sep 18 2023 - Sep 22 2023
Note

Part of ISBN 9783031434204

QC 20231106

Available from: 2023-11-06 Created: 2023-11-06 Last updated: 2024-03-04 Bibliographically approved
6. Back to the Manifold: Recovering from Out-of-Distribution States
2022 (English) In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2022, pp. 8660-8666. Conference paper, Published paper (Refereed)
Abstract [en]

Learning from previously collected datasets of expert data offers the promise of acquiring robotic policies without unsafe and costly online exploration. However, a major challenge is the distributional shift between the states in the training dataset and the ones visited by the learned policy at test time. While prior works have mainly studied the distribution shift caused by the policy during offline training, the problem of recovering from out-of-distribution states at deployment time remains largely unexplored. We alleviate the distributional shift at deployment time by introducing a recovery policy that brings the agent back to the training manifold whenever it steps out of the in-distribution states, e.g., due to an external perturbation. The recovery policy relies on an approximation of the training data density and a learned equivariant mapping that maps visual observations into a latent space in which translations correspond to the robot actions. We demonstrate the effectiveness of the proposed method through several manipulation experiments on a real robotic platform. Our results show that the recovery policy enables the agent to complete tasks while behavioral cloning alone fails because of the distributional shift problem.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-324860
DOI: 10.1109/IROS47612.2022.9981315
000909405301050
Scopus ID: 2-s2.0-85146319849
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), OCT 23-27, 2022, Kyoto, JAPAN
Note

QC 20230322

Available from: 2023-03-22 Created: 2023-03-22 Last updated: 2024-03-04 Bibliographically approved
7. Harmonics of Learning: Universal Fourier Features Emerge in Invariant Networks
(English) Manuscript (preprint) (Other academic)
Abstract [en]

In this work, we formally prove that, under certain conditions, if a neural network is invariant to a finite group then its weights recover the Fourier transform on that group. This provides a mathematical explanation for the emergence of Fourier features -- a ubiquitous phenomenon in both biological and artificial learning systems. The results hold even for non-commutative groups, in which case the Fourier transform encodes all the irreducible unitary group representations. Our findings have consequences for the problem of symmetry discovery. Specifically, we demonstrate that the algebraic structure of an unknown group can be recovered from the weights of a network that is at least approximately invariant within certain bounds. Overall, this work contributes to a foundation for an algebraic learning theory of invariant neural network representations.
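A familiar special case of this Fourier-invariance link is the cyclic group Z_n, where the magnitudes of the discrete Fourier transform are invariant under the group's shift action. The paper's result runs in the opposite direction and covers non-commutative groups, recovering the group Fourier transform from the weights of an invariant network; the snippet below only illustrates the commutative special case.

```python
import numpy as np

# A cyclic shift multiplies each DFT coefficient by a unit-modulus phase,
# so the coefficient magnitudes are invariant to the action of Z_8 on signals.
rng = np.random.default_rng(2)
x = rng.normal(size=8)
shifted = np.roll(x, 3)  # action of the element 3 in Z_8

print(np.allclose(np.abs(np.fft.fft(x)), np.abs(np.fft.fft(shifted))))  # → True
```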

HSV category
Identifiers
URN: urn:nbn:se:kth:diva-344128
Note

QC 20240304

Available from: 2024-03-02 Created: 2024-03-02 Last updated: 2024-03-04 Bibliographically approved

Open Access in DiVA

fulltext (11206 kB), 962 downloads

Person

Marchetti, Giovanni Luca

