Mitigating Information Asymmetry in Two-Stage Contracts with Non-Myopic Agents
Laboratory for Information and Decision Systems, Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge MA 02139 USA.
Laboratory for Information and Decision Systems, Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge MA 02139 USA.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control). KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Digital futures. Laboratory for Information and Decision Systems, Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge MA 02139 USA. ORCID iD: 0000-0001-7932-3109
2024 (English) Conference paper, Published paper (Refereed)
Abstract [en]

We consider a Stackelberg game in which a principal (she) establishes a two-stage contract with a non-myopic agent (he) whose type is unknown. The contract takes the form of an incentive function mapping the agent's first-stage action to his second-stage incentive. While the first-stage action reveals the agent's type under truthful play, a non-myopic agent could benefit from portraying a false type in the first stage to obtain a larger incentive in the second stage. The challenge is thus for the principal to design the incentive function so as to induce truthful play. We show that this is only possible with a constant, non-reactive incentive function when the type space is continuous, whereas it can be achieved with reactive functions for discrete types. Additionally, we show that introducing an adjustment mechanism that penalizes inconsistent behavior across both stages allows the principal to design more flexible incentive functions.
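The intuition behind the abstract's discrete-type result can be illustrated with a toy numerical sketch (not taken from the paper; all payoff functions and numbers here are hypothetical). An agent of a given type picks a first-stage action, effectively reporting a type; the principal pays a second-stage incentive based on that report. A reactive incentive tempts the low type to mimic the high type, and an adjustment penalty on cross-stage inconsistency restores truthful play:

```python
# Hypothetical two-type, two-stage contract sketch. The incentive,
# effort-cost, and penalty forms below are illustrative assumptions,
# not the paper's model.

TYPES = [1.0, 2.0]  # discrete type space: low and high

def incentive(reported_type):
    # Reactive incentive: pays more for a high-type first-stage action.
    return 4.0 * reported_type

def agent_payoff(true_type, reported_type, penalty_rate=0.0):
    # Mimicking a higher type is costlier for a lower true type.
    effort_cost = reported_type ** 2 / true_type
    # Adjustment mechanism: penalize inconsistency across the two stages.
    adjustment = penalty_rate * abs(reported_type - true_type)
    return incentive(reported_type) - effort_cost - adjustment

def best_report(true_type, penalty_rate=0.0):
    # Non-myopic agent: maximizes total (both-stage) payoff.
    return max(TYPES, key=lambda r: agent_payoff(true_type, r, penalty_rate))

# Without the adjustment mechanism, the low type mimics the high type:
print(best_report(1.0, penalty_rate=0.0))  # → 2.0 (misreport)
# With a sufficiently large inconsistency penalty, truthful play returns:
print(best_report(1.0, penalty_rate=5.0))  # → 1.0 (truthful)
```

In this toy setting the high type always reports truthfully, so the penalty only needs to deter the low type's deviation; the paper's actual conditions on the incentive function are, of course, more general.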

Place, publisher, year, edition, pages
Elsevier BV , 2024. p. 19-24
Keywords [en]
contract theory, Principal-agent problems, Stackelberg games, strategic learning
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-360561
DOI: 10.1016/j.ifacol.2025.01.150
ISI: 001403404200004
Scopus ID: 2-s2.0-85218046766
OAI: oai:DiVA.org:kth-360561
DiVA, id: diva2:1940627
Conference
5th IFAC Workshop on Cyber-Physical Human Systems, CPHS 2024, Antalya, Türkiye, Dec 12 2024 - Dec 13 2024
Note

QC 20250228

Available from: 2025-02-26 Created: 2025-02-26 Last updated: 2025-09-22 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text (DOI), Scopus

Authority records

Niazi, Muhammad Umar B.
