KTH Publications (kth.se)
Asymmetric Learning in Convex Games
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Decision and Control Systems (Automatic Control); KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Digital Futures. ORCID iD: 0000-0001-6464-492X
Tongji University, Shanghai Institute of Intelligent Science and Technology, Shanghai, China, 201804; Massachusetts Institute of Technology, Lab for Information & Decision Systems, Cambridge, MA, USA, 02139.
Duke University, Department of Mechanical Engineering and Materials Science, Durham, NC, USA.
Duke University, Department of Mechanical Engineering and Materials Science, Durham, NC, USA.
2025 (English). In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

This paper considers convex games involving multiple agents that aim to minimize their own cost functions using locally available information. A common assumption in the study of such games is that the agents are symmetric, meaning that they have access to the same type of information. Here we lift this assumption, which is often violated in practice, and instead consider asymmetric agents; specifically, we assume some agents have access to first-order gradient information while others have access only to zeroth-order oracles (cost function evaluations). We propose an asymmetric learning algorithm that combines the two information mechanisms. We analyze the regret and Nash equilibrium convergence of this algorithm for convex and strongly monotone games, respectively. Specifically, we show that the performance of our algorithm always lies between that of pure first-order and pure zeroth-order methods, and that it can match the performance of these two extremes by adjusting the number of agents with access to zeroth-order oracles. Therefore, our algorithm incorporates the pure first- and zeroth-order methods as special cases. We provide numerical experiments on a market problem for both deterministic and risk-averse games to demonstrate the performance of the proposed algorithm.
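The mechanism sketched in the abstract, where some agents update with exact gradients while others estimate gradients from cost evaluations, can be illustrated with a minimal toy example. This is not the paper's algorithm: the quadratic game, the two-point zeroth-order estimator, the step sizes, and the split of agents into oracle types are all illustrative assumptions.

```python
# Illustrative sketch only: a strongly convex quadratic game in which
# agents 0-1 use exact gradients (first-order oracles) and agents 2-3
# estimate gradients from cost evaluations (zeroth-order oracles).
# All names and parameters here are assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
n = 4                    # number of agents
first_order = {0, 1}     # agents with gradient access (hypothetical split)
delta, eta = 0.05, 0.05  # perturbation radius and step size

def cost(i, x):
    # Toy strongly convex cost: own quadratic term plus a coupling
    # through the average action of all agents.
    return (x[i] - 1.0) ** 2 + 0.5 * x[i] * np.mean(x)

def grad(i, x):
    # Exact partial derivative of cost(i, x) with respect to x[i].
    return 2.0 * (x[i] - 1.0) + 0.5 * np.mean(x) + 0.5 * x[i] / n

x = np.zeros(n)
for t in range(2000):
    g = np.empty(n)
    for i in range(n):
        if i in first_order:
            g[i] = grad(i, x)            # first-order update
        else:
            u = rng.choice([-1.0, 1.0])  # random perturbation direction
            xp, xm = x.copy(), x.copy()
            xp[i] += delta * u
            xm[i] -= delta * u
            # Two-point zeroth-order gradient estimate built from two
            # cost evaluations (one common estimator; an assumption here).
            g[i] = (cost(i, xp) - cost(i, xm)) * u / (2.0 * delta)
    x = x - eta * g                      # simultaneous update of all agents

print(np.round(x, 3))
```

In this quadratic toy game the two-point estimator happens to be exact, so both agent types converge to the same symmetric Nash equilibrium; the paper's regret and convergence analysis concerns the general convex and strongly monotone settings, where the oracle asymmetry actually matters.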

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025.
Keywords [en]
Asymmetric learning, convex games, Nash equilibrium, regret analysis
National Category
Computer Sciences; Probability Theory and Statistics; Control Engineering
Identifiers
URN: urn:nbn:se:kth:diva-371981
DOI: 10.1109/TAC.2025.3613891
Scopus ID: 2-s2.0-105017263458
OAI: oai:DiVA.org:kth-371981
DiVA, id: diva2:2009469
Note

QC 20251028

Available from: 2025-10-28. Created: 2025-10-28. Last updated: 2025-10-28. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text; Scopus

Authority records

Wang, Zifan; Johansson, Karl H.
