Rodríguez Gálvez, Borja (ORCID iD: orcid.org/0000-0002-0862-1333)
Publications (10 of 25)
Gouverneur, A., Rodríguez Gálvez, B., Oechtering, T. J. & Skoglund, M. (2025). An Information-Theoretic Analysis of Thompson Sampling with Infinite Action Spaces. In: IEEE (Ed.), ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Paper presented at the International Conference on Acoustics, Speech and Signal Processing (ICASSP), Hyderabad, India, April 6-11, 2025. Institute of Electrical and Electronics Engineers (IEEE)
An Information-Theoretic Analysis of Thompson Sampling with Infinite Action Spaces
2025 (English). In: ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) / [ed] IEEE, Institute of Electrical and Electronics Engineers (IEEE), 2025. Conference paper, Published paper (Refereed)
Abstract [en]

This paper studies the Bayesian regret of the Thompson Sampling algorithm for bandit problems, building on the information-theoretic framework introduced by Russo and Van Roy [1]. Specifically, it extends the rate-distortion analysis of Dong and Van Roy [2], which provides near-optimal bounds for linear bandits. A key limitation of these results is the assumption of a finite action space. We address this by extending the analysis to settings with infinite and continuous action spaces. Additionally, we specialize our results to bandit problems with expected rewards that are Lipschitz continuous with respect to the action space, deriving a regret bound that explicitly accounts for the complexity of the action space.
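
For readers who want a concrete picture of the algorithm under analysis, here is a minimal Thompson Sampling sketch for a Gaussian linear bandit whose action set is a dense discretization of a continuous arc (a stand-in for the infinite action spaces treated in the paper). The prior, noise scale, and action set are illustrative assumptions, not the paper's setting.

    import numpy as np

    rng = np.random.default_rng(0)
    d, T, sigma = 2, 1000, 0.1
    theta_star = rng.normal(size=d)                      # unknown reward parameter
    angles = np.linspace(0.0, np.pi, 500)
    actions = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # dense grid on a continuous arc

    Lambda, b = np.eye(d), np.zeros(d)                   # Gaussian prior N(0, I) in precision form
    for t in range(T):
        cov = np.linalg.inv(Lambda)
        theta_tilde = rng.multivariate_normal(cov @ b, cov)  # sample from the current posterior
        a = actions[np.argmax(actions @ theta_tilde)]        # act greedily w.r.t. the sample
        r = a @ theta_star + rng.normal(scale=sigma)         # observe a noisy linear reward
        Lambda += np.outer(a, a) / sigma**2                  # conjugate posterior update
        b += a * r / sigma**2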

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-373210 (URN)
10.1109/ICASSP49660.2025.10888239 (DOI)
2-s2.0-105003867651 (Scopus ID)
Conference
International Conference on Acoustics, Speech and Signal Processing (ICASSP), Hyderabad, India, April 6-11, 2025
Note

QC 20251124

Available from: 2025-11-21 Created: 2025-11-21 Last updated: 2025-11-24. Bibliographically approved
Gouverneur, A., Rodríguez Gálvez, B., Oechtering, T. J. & Skoglund, M. (2025). Chained Information-Theoretic Bounds and Tight Regret Rate for Linear Bandit Problems.
Chained Information-Theoretic Bounds and Tight Regret Rate for Linear Bandit Problems
2025 (English). Article in journal (Refereed), Submitted
Abstract [en]

This paper studies the Bayesian regret of a variant of the Thompson Sampling algorithm for bandit problems. It builds upon the information-theoretic framework of [1] and, more specifically, on the rate-distortion analysis from [2], which proved a bound with a regret rate of O(d√(T log(T))) for the d-dimensional linear bandit setting. We focus on bandit problems with a metric action space and, using a chaining argument, we establish new bounds that depend on the metric entropy of the action space for a variant of Thompson Sampling. Under a suitable continuity assumption on the rewards, our bound offers a tight rate of O(d√T) for d-dimensional linear bandit problems.
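
For reference, the Bayesian regret bounded in this line of work is the standard one: the expected gap, over the prior on the unknown parameter θ and the algorithm's randomness, between the best action under θ and the actions A_1, ..., A_T actually played. In common notation (a reconstruction, not a quote from the paper),

    \mathrm{BR}(T) = \mathbb{E}\left[ \sum_{t=1}^{T} \left( \max_{a \in \mathcal{A}} \mathbb{E}[R_t(a) \mid \theta] - \mathbb{E}[R_t(A_t) \mid \theta] \right) \right].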

National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-373211 (URN)
Note

QC 20251124

Available from: 2025-11-21 Created: 2025-11-21 Last updated: 2025-11-24. Bibliographically approved
Lindström, M., Rodríguez Gálvez, B., Thobaben, R. & Skoglund, M. (2024). A Coding-Theoretic Analysis of Hyperspherical Prototypical Learning Geometry. In: Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM). Paper presented at the ICML 2024 Workshop on Geometry-grounded Representation Learning and Generative Modeling (GRaM), Vienna, Austria, July 29, 2024 (pp. 78-91). PMLR, 251
A Coding-Theoretic Analysis of Hyperspherical Prototypical Learning Geometry
2024 (English). In: Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM), PMLR, 2024, Vol. 251, p. 78-91. Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

Hyperspherical Prototypical Learning (HPL) is a supervised approach to representation learning that designs class prototypes on the unit hypersphere. The prototypes bias the representations towards class separation in a scale-invariant and known geometry. Previous approaches to HPL suffer from one of the following shortcomings: (i) they follow an unprincipled optimisation procedure; or (ii) they are theoretically sound, but constrained to only one possible latent dimension. In this paper, we address both shortcomings. To address (i), we present a principled optimisation procedure whose solution we show is optimal. To address (ii), we construct well-separated prototypes in a wide range of dimensions using linear block codes. Additionally, we give a full characterisation of the optimal prototype placement in terms of achievable and converse bounds, showing that our proposed methods are near-optimal.
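
As a sketch of the code-based idea: mapping the codewords of a binary linear block code to ±1 vectors and normalizing places them on the unit hypersphere, where the inner product between two prototypes equals 1 - 2d_H/n for Hamming distance d_H, so a large minimum distance yields well-separated prototypes. The [7,4] Hamming code below is an illustrative choice, not necessarily one of the codes used in the paper.

    import itertools
    import numpy as np

    # Generator matrix of a [7,4] Hamming code (minimum distance 3).
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])
    n = G.shape[1]
    messages = np.array(list(itertools.product([0, 1], repeat=G.shape[0])))
    codewords = messages @ G % 2                   # all 2^4 = 16 codewords
    prototypes = (1 - 2 * codewords) / np.sqrt(n)  # unit-norm prototypes in R^7

    gram = prototypes @ prototypes.T
    off_diag = gram[~np.eye(len(gram), dtype=bool)]
    print(round(off_diag.max(), 4))                # 0.1429, i.e. 1 - 2*3/7 = 1/7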

Place, publisher, year, edition, pages
PMLR, 2024
Series
Proceedings of Machine Learning Research, ISSN 2640-3498 ; 251
National Category
Engineering and Technology; Electrical Engineering, Electronic Engineering, Information Engineering; Signal Processing
Research subject
Electrical Engineering
Identifiers
urn:nbn:se:kth:diva-358325 (URN)
Conference
ICML 2024 Workshop on Geometry-grounded Representation Learning and Generative Modeling (GRaM), Vienna, Austria, July 29, 2024
Funder
Swedish Research Council, 2021-05266; Swedish Research Council, 2019-03606; Swedish Research Council, 2022-06725
Note

QC 20250114

Available from: 2025-01-13 Created: 2025-01-13 Last updated: 2025-01-14. Bibliographically approved
Lindström, M., Rodríguez Gálvez, B., Thobaben, R. & Skoglund, M. (2024). A Coding-Theoretic Analysis of Hyperspherical Prototypical Learning Geometry. In: Proceedings of the Geometry-Grounded Representation Learning and Generative Modeling Workshop, GRaM 2024 at ICML 2024. Paper presented at the 1st Geometry-Grounded Representation Learning and Generative Modeling Workshop, GRaM 2024 at the 41st International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 29, 2024 (pp. 78-91). ML Research Press
A Coding-Theoretic Analysis of Hyperspherical Prototypical Learning Geometry
2024 (English). In: Proceedings of the Geometry-Grounded Representation Learning and Generative Modeling Workshop, GRaM 2024 at ICML 2024, ML Research Press, 2024, p. 78-91. Conference paper, Published paper (Refereed)
Abstract [en]

Hyperspherical Prototypical Learning (HPL) is a supervised approach to representation learning that designs class prototypes on the unit hypersphere. The prototypes bias the representations towards class separation in a scale-invariant and known geometry. Previous approaches to HPL suffer from one of the following shortcomings: (i) they follow an unprincipled optimisation procedure; or (ii) they are theoretically sound, but constrained to only one possible latent dimension. In this paper, we address both shortcomings. To address (i), we present a principled optimisation procedure whose solution we show is optimal. To address (ii), we construct well-separated prototypes in a wide range of dimensions using linear block codes. Additionally, we give a full characterisation of the optimal prototype placement in terms of achievable and converse bounds, showing that our proposed methods are near-optimal.

Place, publisher, year, edition, pages
ML Research Press, 2024
National Category
Probability Theory and Statistics; Computational Mathematics
Identifiers
urn:nbn:se:kth:diva-359860 (URN)
2-s2.0-85216611518 (Scopus ID)
Conference
1st Geometry-Grounded Representation Learning and Generative Modeling Workshop, GRaM 2024 at the 41st International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 29, 2024
Note

QC 20250213

Available from: 2025-02-12 Created: 2025-02-12 Last updated: 2025-02-13. Bibliographically approved
Lindström, M., Rodríguez Gálvez, B., Thobaben, R. & Skoglund, M. (2024). A Coding-Theoretic Analysis of Hyperspherical Prototypical Learning Geometry. In: Vadgama, S.; Bekkers, E.; Pouplin, A.; Kaba, S.O.; Walters, R.; Lawrence, H.; Emerson, T.; Kvinge, H.; Tomczak, J. & Jegelka, S. (Eds.), Geometry-Grounded Representation Learning and Generative Modeling Workshop, GRaM at ICML 2024. Paper presented at the 2024 Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM), July 29, 2024, Vienna, Austria. JMLR-Journal Machine Learning Research, 251
A Coding-Theoretic Analysis of Hyperspherical Prototypical Learning Geometry
2024 (English). In: Geometry-Grounded Representation Learning and Generative Modeling Workshop, GRaM at ICML 2024 / [ed] Vadgama, S.; Bekkers, E.; Pouplin, A.; Kaba, S.O.; Walters, R.; Lawrence, H.; Emerson, T.; Kvinge, H.; Tomczak, J.; Jegelka, S., JMLR-Journal Machine Learning Research, 2024, Vol. 251. Conference paper, Published paper (Refereed)
Abstract [en]

Hyperspherical Prototypical Learning (HPL) is a supervised approach to representation learning that designs class prototypes on the unit hypersphere. The prototypes bias the representations towards class separation in a scale-invariant and known geometry. Previous approaches to HPL suffer from one of the following shortcomings: (i) they follow an unprincipled optimisation procedure; or (ii) they are theoretically sound, but constrained to only one possible latent dimension. In this paper, we address both shortcomings. To address (i), we present a principled optimisation procedure whose solution we show is optimal. To address (ii), we construct well-separated prototypes in a wide range of dimensions using linear block codes. Additionally, we give a full characterisation of the optimal prototype placement in terms of achievable and converse bounds, showing that our proposed methods are near-optimal. GitHub: martinlindstrom/coding_theoretic_hpl

Place, publisher, year, edition, pages
JMLR-Journal Machine Learning Research, 2024
Series
Proceedings of Machine Learning Research, ISSN 2640-3498
National Category
Probability Theory and Statistics
Identifiers
urn:nbn:se:kth:diva-371852 (URN)
001479783300006 (ISI)
Conference
2024 Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM), July 29, 2024, Vienna, Austria
Note

QC 20251104

Available from: 2025-11-04 Created: 2025-11-04 Last updated: 2025-11-04. Bibliographically approved
Rodríguez Gálvez, B., Rivasplata, O., Thobaben, R. & Skoglund, M. (2024). A Note on Generalization Bounds for Losses with Finite Moments. In: 2024 IEEE International Symposium on Information Theory, ISIT 2024 - Proceedings. Paper presented at the 2024 IEEE International Symposium on Information Theory, ISIT 2024, Athens, Greece, July 7-12, 2024 (pp. 2676-2681). Institute of Electrical and Electronics Engineers (IEEE)
A Note on Generalization Bounds for Losses with Finite Moments
2024 (English). In: 2024 IEEE International Symposium on Information Theory, ISIT 2024 - Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 2676-2681. Conference paper, Published paper (Refereed)
Abstract [en]

This paper studies the truncation method from Alquier [1] to derive high-probability PAC-Bayes bounds for unbounded losses with heavy tails. Assuming that the p-th moment is bounded, the resulting bounds interpolate between a slow rate 1/√n when p = 2, and a fast rate 1/n when p → ∞ and the loss is essentially bounded. Moreover, the paper derives a high-probability PAC-Bayes bound for losses with a bounded variance. This bound has an exponentially better dependence on the confidence parameter and the dependency measure than previous bounds in the literature. Finally, the paper extends all results to guarantees in expectation and single-draw PAC-Bayes. In order to do so, it obtains analogues of the PAC-Bayes fast-rate bound for bounded losses from [2] in these settings. The full version of the paper can be found at https://arxiv.org/abs/2403.16681.
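
For orientation, the classical Seeger-Langford bound for losses in [0,1] (the bounded-loss regime that the fast rate above recovers as p → ∞) states that, with probability at least 1 - δ over an n-sample, simultaneously for all posteriors ρ,

    \mathrm{kl}\big( \hat{\mathcal{L}}(\rho) \,\big\|\, \mathcal{L}(\rho) \big) \le \frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln(2\sqrt{n}/\delta)}{n},

where kl is the binary relative entropy, \hat{\mathcal{L}} and \mathcal{L} are the empirical and population risks, and π is a data-free prior.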

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
National Category
Probability Theory and Statistics; Mathematical Analysis
Identifiers
urn:nbn:se:kth:diva-353510 (URN)
10.1109/ISIT57864.2024.10619194 (DOI)
001304426902133 (ISI)
2-s2.0-85202842028 (Scopus ID)
Conference
2024 IEEE International Symposium on Information Theory, ISIT 2024, Athens, Greece, July 7-12, 2024
Note

Part of ISBN 9798350382846

QC 20240919

Available from: 2024-09-19 Created: 2024-09-19 Last updated: 2025-12-05. Bibliographically approved
Rodríguez Gálvez, B. (2024). An Information-Theoretic Approach to Generalization Theory. (Doctoral dissertation). KTH Royal Institute of Technology
An Information-Theoretic Approach to Generalization Theory
2024 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

In this thesis, we investigate the in-distribution generalization of machine learning algorithms, focusing on establishing rigorous upper bounds on the generalization error. We depart from traditional complexity-based approaches by introducing and analyzing information-theoretic bounds that quantify the dependence between a learning algorithm and the training data.

We consider two categories of generalization guarantees:

  • Guarantees in expectation. These bounds measure performance in the average case. Here, the dependence between the algorithm and the data is often captured by the mutual information or other information measures based on f-divergences. While these measures offer an intuitive interpretation, they might overlook the geometry of the algorithm's hypothesis class. To address this limitation, we introduce bounds using the Wasserstein distance, which incorporates geometric considerations at the cost of being mathematically more involved. Furthermore, we propose a structured, systematic method to derive bounds capturing the dependence between the algorithm and an individual datum, and between the algorithm and subsets of the training data, conditioned on knowing the rest of the data. These types of bounds provide deeper insights, as we demonstrate by applying them to derive generalization error bounds for the stochastic gradient Langevin dynamics algorithm.
  • PAC-Bayesian guarantees. These bounds measure the performance level with high probability. Here, the dependence between the algorithm and the data is often measured by the relative entropy. We establish connections between the Seeger--Langford and Catoni bounds, revealing that the former is optimized by the Gibbs posterior (written out below, after this abstract). Additionally, we introduce novel, tighter bounds for various types of loss functions, including those with a bounded range, cumulant generating function, moment, or variance. To achieve this, we introduce a new technique to optimize parameters in probabilistic statements.

We also study the limitations of these approaches. We present a counter-example where most of the existing (relative entropy-based) information-theoretic bounds fail, and where traditional approaches do not. Finally, we explore the relationship between privacy and generalization. We show that algorithms with a bounded maximal leakage generalize. Moreover, for discrete data, we derive new bounds for differentially private algorithms that vanish as the number of samples increases, thus guaranteeing their generalization even with a constant privacy parameter. This is in contrast with previous bounds in the literature, which require the privacy parameter to decrease with the number of samples to ensure generalization.
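
As a worked detail for the PAC-Bayesian item above: in standard notation, the Gibbs posterior that optimizes the Seeger--Langford bound is the prior exponentially tilted by the empirical loss,

    \rho_{\gamma}(h) \propto \pi(h)\, e^{-\gamma n \hat{\mathcal{L}}(h)},

with prior π, empirical loss \hat{\mathcal{L}}, and an inverse temperature γ > 0 whose exact parameterization depends on the statement of the bound.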

Abstract [sv] (English translation)

In this thesis, we investigate the generalization ability of machine learning algorithms on identically distributed data, focusing on establishing rigorous upper bounds on the generalization error. We depart from traditional complexity-based methods by introducing and analyzing information-theoretic bounds that quantify the dependence between a learning algorithm and the training data.

We study two categories of generalization guarantees:

  • Guarantees in expectation. These bounds measure performance on average. In this case, the dependence between the algorithm and the data is often captured by mutual information or other information measures based on f-divergences. Although these measures offer an intuitive interpretation, they may overlook the geometry of the algorithm's hypothesis class. To handle this limitation, we introduce bounds that use the Wasserstein distance, which incorporates geometric considerations at the cost of being mathematically more involved. Moreover, we propose a structured, systematic method for deriving bounds that capture the dependence between the algorithm and an individual data point, as well as between the algorithm and subsets of the training data, given that the rest of the data is known. These types of bounds give deeper insights, which we demonstrate by applying them to derive generalization guarantees for the stochastic gradient Langevin dynamics algorithm.
  • PAC-Bayesian guarantees. These bounds measure the level of performance with high probability. In this case, the dependence between the algorithm and the data is often captured by the relative entropy. We derive connections between the Seeger--Langford and Catoni bounds, revealing that the former is optimized by the Gibbs distribution. Moreover, we introduce new, stronger bounds for different types of loss functions, including those with a bounded range, moment-generating function, moment, or variance. To achieve this, we introduce a new technique for optimizing parameters in probabilistic statements.

We also study the limitations of these methods. We present a counterexample where most of the existing (relative entropy-based) information-theoretic bounds fail, and where traditional methods work. Finally, we explore the relationship between data privacy and generalization ability. We show that algorithms with bounded maximal leakage generalize. Moreover, for discrete data, we derive new bounds for differentially private algorithms that vanish as the number of samples increases, thereby guaranteeing their generalization even with a constant privacy parameter. This stands in contrast to earlier bounds in the literature, which require the privacy parameter to decrease with the number of samples to ensure generalization.

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2024. p. 259
Series
TRITA-EECS-AVL ; 2024:31
Keywords
Generalization, Information-Theoretic Bounds
National Category
Computer Sciences
Research subject
Electrical Engineering
Identifiers
urn:nbn:se:kth:diva-344833 (URN)
978-91-8040-878-3 (ISBN)
Public defence
2024-04-23, F3, Lindstedtsvägen 26, Stockholm, 13:00 (English)
Funder
Swedish Research Council, 2019-03606
Note

QC 20240402

Available from: 2024-04-02 Created: 2024-04-02 Last updated: 2024-04-15. Bibliographically approved
Rodríguez Gálvez, B., Thobaben, R. & Skoglund, M. (2024). More PAC-Bayes bounds: From bounded losses, to losses with general tail behaviors, to anytime validity. Journal of Machine Learning Research, 25, 1-43
More PAC-Bayes bounds: From bounded losses, to losses with general tail behaviors, to anytime validity
2024 (English). In: Journal of Machine Learning Research, ISSN 1532-4435, E-ISSN 1533-7928, Vol. 25, p. 1-43. Article in journal (Refereed), Published
Abstract [en]

In this paper, we present new high-probability PAC-Bayes bounds for different types of losses. Firstly, for losses with a bounded range, we recover a strengthened version of Catoni's bound that holds uniformly for all parameter values. This leads to new fast-rate and mixed-rate bounds that are interpretable and tighter than previous bounds in the literature. In particular, the fast-rate bound is equivalent to the Seeger-Langford bound. Secondly, for losses with more general tail behaviors, we introduce two new parameter-free bounds: a PAC-Bayes Chernoff analogue when the loss's cumulant generating function is bounded, and a bound when the loss's second moment is bounded. These two bounds are obtained using a new technique based on a discretization of the space of possible events for the "in probability" parameter optimization problem. This technique is both simpler and more general than previous approaches optimizing over a grid on the parameters' space. Finally, using a simple technique that is applicable to any existing bound, we extend all previous results to anytime-valid bounds.
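
For context, Catoni's bound, whose strengthened uniform-in-parameter version the paper recovers, is commonly stated as follows for losses in [0,1] (in one common parameterization): for a fixed c > 0, with probability at least 1 - δ, simultaneously for all posteriors ρ,

    \mathcal{L}(\rho) \le \frac{1 - \exp\!\left( -c\, \hat{\mathcal{L}}(\rho) - \frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln(1/\delta)}{n} \right)}{1 - e^{-c}}.

The strengthening referred to in the abstract makes such a bound hold uniformly over the parameter c as well.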

Place, publisher, year, edition, pages
Microtome Publishing, 2024
Keywords
Generalization bounds, PAC-Bayes bounds, concentration inequalities, rate of convergence (fast, slow, mixed), tail behavior, parameter optimization
National Category
Mathematical Analysis; Computer Sciences
Identifiers
urn:nbn:se:kth:diva-345988 (URN)
001203119000001 (ISI)
2-s2.0-105018668397 (Scopus ID)
Note

Not duplicate with DiVA 1848241

QC 20240430

Available from: 2024-04-30 Created: 2024-04-30 Last updated: 2025-11-07. Bibliographically approved
Zamani, A., Rodríguez Gálvez, B. & Skoglund, M. (2024). On Information Theoretic Fairness: Compressed Representations with Perfect Demographic Parity. In: 2024 IEEE Information Theory Workshop, ITW 2024. Paper presented at the 2024 IEEE Information Theory Workshop, ITW 2024, Shenzhen, China, November 24-28, 2024 (pp. 25-30). Institute of Electrical and Electronics Engineers (IEEE)
On Information Theoretic Fairness: Compressed Representations with Perfect Demographic Parity
2024 (English). In: 2024 IEEE Information Theory Workshop, ITW 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 25-30. Conference paper, Published paper (Refereed)
Abstract [en]

In this article, we study the fundamental limits in the design of fair and/or private representations achieving perfect demographic parity and/or perfect privacy through the lens of information theory. More precisely, given some useful data X that we wish to employ to solve a task T, we consider the design of a representation Y that has no information about some sensitive attribute or secret S, that is, such that I(Y;S) = 0. We consider two scenarios. First, we consider a design desideratum where we want to maximize the information I(Y;T) that the representation contains about the task, while constraining the level of compression (or encoding rate), that is, ensuring that I(Y;X) ≤ r. Second, inspired by the Conditional Fairness Bottleneck problem, we consider a design desideratum where we want to maximize the information I(Y;T|S) that the representation contains about the task and that is not shared by the sensitive attribute or secret, while constraining the amount of irrelevant information, that is, ensuring that I(Y;X|T,S) ≤ r. In both cases, we employ extended versions of the Functional Representation Lemma and the Strong Functional Representation Lemma and study the tightness of the obtained bounds. Every result here can also be interpreted as a coding-with-perfect-privacy problem by considering the sensitive attribute as a secret.
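
In symbols, the two design problems described in the abstract are

    \max_{P_{Y|X}} I(Y;T) \quad \text{s.t.} \quad I(Y;X) \le r \ \text{ and } \ I(Y;S) = 0,

and

    \max_{P_{Y|X}} I(Y;T \mid S) \quad \text{s.t.} \quad I(Y;X \mid T,S) \le r \ \text{ and } \ I(Y;S) = 0.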

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-359866 (URN)
10.1109/ITW61385.2024.10807019 (DOI)
001433908800005 (ISI)
2-s2.0-85216512755 (Scopus ID)
Conference
2024 IEEE Information Theory Workshop, ITW 2024, Shenzhen, China, November 24-28, 2024
Note

Part of ISBN 9798350348934

QC 20250213

Available from: 2025-02-12 Created: 2025-02-12 Last updated: 2025-04-30. Bibliographically approved
Haghifam, M., Rodríguez Gálvez, B., Thobaben, R., Skoglund, M., Roy, D. M. & Dziugaite, G. K. (2023). Limitations of information-theoretic generalization bounds for gradient descent methods in stochastic convex optimization. In: Shipra Agrawal, Francesco Orabona (Eds.), Proceedings of ALT 2023. Paper presented at the 34th International Conference on Algorithmic Learning Theory, ALT 2023, Singapore, February 20-23, 2023 (pp. 663-706). ML Research Press
Limitations of information-theoretic generalization bounds for gradient descent methods in stochastic convex optimization
2023 (English). In: Proceedings of ALT 2023 / [ed] Shipra Agrawal, Francesco Orabona, ML Research Press, 2023, p. 663-706. Conference paper, Published paper (Refereed)
Abstract [en]

To date, no “information-theoretic” frameworks for reasoning about generalization error have been shown to establish minimax rates for gradient descent in the setting of stochastic convex optimization. In this work, we consider the prospect of establishing such rates via several existing information-theoretic frameworks: input-output mutual information bounds, conditional mutual information bounds and variants, PAC-Bayes bounds, and recent conditional variants thereof. We prove that none of these bounds are able to establish minimax rates. We then consider a common tactic employed in studying gradient methods, whereby the final iterate is corrupted by Gaussian noise, producing a noisy “surrogate” algorithm. We prove that minimax rates cannot be established via the analysis of such surrogates. Our results suggest that new ideas are required to analyze gradient descent using information-theoretic techniques. 
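
A minimal sketch of the "noisy surrogate" tactic examined above: run gradient descent on a convex objective and corrupt only the final iterate with Gaussian noise, then analyze the noisy output. The least-squares objective, step size, and noise scale are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = rng.normal(size=100)                        # toy convex problem: least squares

    w = np.zeros(5)
    for _ in range(200):
        grad = X.T @ (X @ w - y) / len(y)           # full-batch gradient of 0.5*mean((Xw - y)^2)
        w -= 0.01 * grad                            # gradient descent step

    sigma = 0.05                                    # illustrative noise scale
    w_noisy = w + sigma * rng.normal(size=w.shape)  # the noisy "surrogate" algorithm's output
    # Information-theoretic analyses then bound the generalization of w_noisy rather than w.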

Place, publisher, year, edition, pages
ML Research Press, 2023
Series
Proceedings of Machine Learning Research, ISSN 2640-3498 ; 201
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-328375 (URN)
001227262400022 (ISI)
2-s2.0-85161238002 (Scopus ID)
Conference
34th International Conference on Algorithmic Learning Theory, ALT 2023, Singapore, February 20-23, 2023
Note

QC 20231204

Available from: 2023-06-08 Created: 2023-06-08 Last updated: 2024-07-16. Bibliographically approved