How to Sense the World: Leveraging Hierarchy in Multimodal Perception for Robust Reinforcement Learning Agents
INESC-ID, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-3599-440X
INESC-ID, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal.
INESC-ID, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal.
2022 (English). In: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS, International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), 2022, p. 1301-1309. Conference paper, Published paper (Refereed)
Abstract [en]

This work addresses the problem of sensing the world: how to learn a multimodal representation of a reinforcement learning agent's environment that allows the execution of tasks under incomplete perceptual conditions. To address this problem, we argue for hierarchy in the design of representation models and contribute a novel multimodal representation model, MUSE. The proposed model learns a hierarchy of representations: low-level modality-specific representations, encoded from raw observation data, and a high-level multimodal representation, encoding joint-modality information to allow robust state estimation. We employ MUSE as the perceptual model of deep reinforcement learning agents provided with multimodal observations in Atari games. We perform a comparative study across different designs of reinforcement learning agents, showing that MUSE allows agents to perform tasks under incomplete perceptual experience with minimal performance loss. Finally, we also evaluate the generative performance of MUSE in literature-standard multimodal scenarios with a larger number of more complex modalities, showing that it outperforms state-of-the-art multimodal variational autoencoders in single- and cross-modality generation.
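
The abstract describes the hierarchy only at the architectural level. As a concrete picture, the following is a minimal sketch of that idea: per-modality low-level encoders feed a high-level joint encoder that can still produce a state estimate when some modalities are missing. This is not the authors' MUSE implementation; all class names, layer sizes, and the mean-pooling fusion are assumptions made purely for illustration.

# Minimal sketch of the hierarchical multimodal encoding idea described in the
# abstract. NOT the authors' MUSE implementation: names, dimensions, and the
# mean-pooling fusion below are illustrative assumptions.
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Low-level encoder: raw observation of one modality -> modality latent."""

    def __init__(self, input_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class HierarchicalMultimodalEncoder(nn.Module):
    """High-level encoder: fuses available modality latents into a joint state."""

    def __init__(self, input_dims: dict, modality_latent_dim: int = 32,
                 joint_latent_dim: int = 64):
        super().__init__()
        self.encoders = nn.ModuleDict({
            name: ModalityEncoder(dim, modality_latent_dim)
            for name, dim in input_dims.items()
        })
        self.joint = nn.Sequential(
            nn.Linear(modality_latent_dim, 128), nn.ReLU(),
            nn.Linear(128, joint_latent_dim),
        )

    def forward(self, observations: dict) -> torch.Tensor:
        # Encode only the modalities that are actually observed; averaging the
        # available modality latents is a simple stand-in for a principled
        # fusion (e.g. the probabilistic fusion used in multimodal VAEs).
        latents = [self.encoders[name](obs) for name, obs in observations.items()]
        fused = torch.stack(latents, dim=0).mean(dim=0)
        return self.joint(fused)


if __name__ == "__main__":
    model = HierarchicalMultimodalEncoder({"image": 784, "sound": 64})
    full = {"image": torch.randn(1, 784), "sound": torch.randn(1, 64)}
    partial = {"image": torch.randn(1, 784)}          # sound modality missing
    print(model(full).shape, model(partial).shape)    # same joint-state shape

In the paper the high-level representation is learned with a multimodal variational autoencoder objective; the deterministic averaging above only stands in for that fusion step to show how a missing modality can be tolerated.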

Place, publisher, year, edition, pages
International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), 2022. p. 1301-1309
Keywords [en]
Multimodal Representation Learning, Reinforcement Learning, Unsupervised Learning, Autonomous agents, Deep learning, Learning systems, Multi agent systems, Condition, Encodings, Learn+, Multi-modal, Multimodal perception, Observation data, Reinforcement learning agent, Reinforcement learnings, Representation model
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-326092
Scopus ID: 2-s2.0-85134325159
OAI: oai:DiVA.org:kth-326092
DiVA, id: diva2:1752587
Conference
21st International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2022, 9-13 May 2022
Note

QC 20230424

Available from: 2023-04-24 Created: 2023-04-24 Last updated: 2023-04-24. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Scopus

Authority records

Yin, Hang
