
Grounding Subgoals of Temporal Logic Tasks in Online Reinforcement Learning

EasyChair Preprint 12418, version 1

Versions: 12
12 pages
Date: March 7, 2024

Abstract

Recently, there has been a surge of research investigating reinforcement learning (RL) algorithms for solving temporal logic (TL) tasks. These algorithms, however, are built on the assumption of a labeling function that maps raw observations to symbols of the subgoals of the TL task, and in many practical applications such a labeling function is not readily available. In this work, we propose an online RL algorithm, referred to as GSTLO, that takes non-symbolic raw observations from collected trajectories and learns to ground the subgoal symbols of TL tasks. In other words, it learns to label the important states associated with the subgoals of the TL task. Specifically, to associate an important state with one of the subgoals in the TL formula, the RL agent actively explores the environment by collecting trajectories and gradually reconstructs a finite state machine (FSM) of the TL task composed of the discovered important states. Then, by comparing the reconstructed FSM with the ground-truth FSM extracted from the task formula, the mapping from important states to subgoal symbols, i.e., the labeling function, is obtained. To discover these important states, GSTLO formulates a contrastive learning objective based on the first-occupancy representations (FR) of the collected trajectories. To facilitate exploration, the first-occupancy feature (FF) of the important states is also learned, driving the agent to visit any selected subgoal and to complete unseen tasks without further training. The proposed GSTLO algorithm is evaluated on three environments, showing significant improvement over baseline methods.
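Two of the abstract's constructs can be made concrete. First, the abstract does not define the first-occupancy representation (FR); a reference definition from the literature (Moskovitz et al.'s first-occupancy representation) is the expected discounted indicator of the first visit to a state s' under policy π, though the preprint's exact formulation may differ:

\[
F^{\pi}(s, s') \;=\; \mathbb{E}_{\pi}\!\left[\, \sum_{t=0}^{\infty} \gamma^{t}\, \mathbb{1}\big[\, s_{t} = s' \,\wedge\, s' \notin \{s_{0}, \dots, s_{t-1}\} \,\big] \;\middle|\; s_{0} = s \,\right]
\]

Second, the FSM-comparison step admits a minimal sketch. Assuming the reconstructed FSM (edges labeled by discovered important states) and the ground-truth FSM (edges labeled by subgoal symbols) are both small and deterministic, grounding reduces to finding a bijection between the two sets of edge labels under which the machines coincide; that bijection is the labeling function in lookup-table form. All names below (fsm_matches, ground_symbols, the example FSMs) are hypothetical illustrations, not the authors' implementation:

from itertools import permutations

def fsm_matches(gt, rec, mapping, start_gt, start_rec):
    # Joint BFS from the two start states: after relabeling the
    # reconstructed FSM's edges via `mapping` (important state ->
    # subgoal symbol), every reachable pair of states must expose
    # the same set of labeled transitions.
    frontier = [(start_gt, start_rec)]
    seen = set(frontier)
    while frontier:
        u, v = frontier.pop()
        relabeled = {mapping[s]: nxt for s, nxt in rec[v].items()}
        if set(gt[u]) != set(relabeled):
            return False
        for sym, u_next in gt[u].items():
            pair = (u_next, relabeled[sym])
            if pair not in seen:
                seen.add(pair)
                frontier.append(pair)
    return True

def ground_symbols(gt, rec, start_gt, start_rec):
    # Brute-force search over bijections between discovered important
    # states and subgoal symbols; feasible for the small FSMs that
    # typical TL task formulas induce.
    symbols = sorted({sym for edges in gt.values() for sym in edges})
    states = sorted({s for edges in rec.values() for s in edges})
    if len(symbols) != len(states):
        return None
    for perm in permutations(symbols):
        mapping = dict(zip(states, perm))
        if fsm_matches(gt, rec, mapping, start_gt, start_rec):
            return mapping  # the labeling function, as a lookup table
    return None

# Hypothetical example: task "reach a, then b". The ground-truth FSM
# uses subgoal symbols 'a' and 'b'; the reconstructed FSM uses the
# discovered important states 's1' and 's2'.
gt  = {0: {'a': 1}, 1: {'b': 2}, 2: {}}
rec = {0: {'s1': 1}, 1: {'s2': 2}, 2: {}}
print(ground_symbols(gt, rec, 0, 0))  # {'s1': 'a', 's2': 'b'}

The search over bijections is factorial in the number of subgoals, but TL task formulas typically involve only a handful of subgoal symbols, so this is tractable in the settings the abstract describes.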

Keyphrases: reinforcement learning, generalization, symbol grounding, temporal logic

BibTeX entry
BibTeX does not have the right entry type for preprints; the following is a workaround for producing the correct reference:
@booklet{EasyChair:12418,
  author       = {Duo Xu and Faramarz Fekri},
  title        = {Grounding Subgoals of Temporal Logic Tasks in Online Reinforcement Learning},
  howpublished = {EasyChair Preprint 12418},
  year         = {EasyChair, 2024}}