Interpretable Model-based Hierarchical Reinforcement Learning Using Inductive Logic Programming

EasyChair Preprint 5668, version 1
12 pages • Date: June 3, 2021

Abstract

Recently, deep reinforcement learning (RL) has achieved many successes in a wide range of applications, but it notoriously lacks data-efficiency and interpretability. Data-efficiency matters because interacting with the environment is expensive. Interpretability increases the transparency of black-box deep RL models and helps gain the trust of users of RL systems. In this work, we propose a new hierarchical framework for symbolic RL that leverages a symbolic transition model to improve data-efficiency and introduce interpretability into the learned policy. The framework consists of a high-level agent, a subtask solver, and a symbolic transition model. Without assuming any prior knowledge of the state transitions, we adopt inductive logic programming (ILP) to learn the rules of symbolic state transitions, introducing interpretability and making the learned behavior understandable to users. In empirical experiments, we confirmed that the proposed framework improves data-efficiency over previous methods by 30%–40%.

Keyphrases: Inductive Logic Programming, Reinforcement Learning, hierarchical learning, planning
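The kind of symbolic transition rules the abstract refers to can be sketched roughly as follows. This is a minimal illustration, not the paper's actual ILP procedure: the predicates, rule format, and exhaustive candidate scoring are assumptions made for the example; a real ILP system would search the hypothesis space rather than enumerate candidates by hand.

```python
# A minimal sketch of scoring candidate Horn-clause transition rules against
# observed symbolic transitions, in the spirit of ILP: keep a rule only if,
# whenever its body holds in state s and its action is taken, its head
# predicate appears in the next state s'. (Illustrative, not the paper's code.)

# Each symbolic state is a set of ground atoms.
transitions = [
    ({"at(key)", "holding(none)"}, "pickup", {"at(key)", "holding(key)"}),
    ({"at(door)", "holding(key)"}, "open",   {"at(door)", "open(door)"}),
    ({"at(door)", "holding(none)"}, "open",  {"at(door)", "holding(none)"}),
]

# A candidate rule: (action, body atoms required in s, head atom added in s').
candidates = [
    ("pickup", frozenset({"at(key)", "holding(none)"}), "holding(key)"),
    ("open",   frozenset({"at(door)", "holding(key)"}), "open(door)"),
    ("open",   frozenset({"at(door)"}),                 "open(door)"),  # too general
]

def score(rule, transitions):
    """Fraction of applicable transitions where the rule's prediction holds."""
    action, body, head = rule
    applicable = [(s, s2) for s, a, s2 in transitions if a == action and body <= s]
    if not applicable:
        return 0.0
    return sum(head in s2 for _, s2 in applicable) / len(applicable)

# Keep only rules consistent with every applicable observed transition.
learned = [r for r in candidates if score(r, transitions) == 1.0]
for action, body, head in learned:
    print(f"{head} :- {action}, " + ", ".join(sorted(body)))
```

The over-general rule (`open(door)` without requiring the key) is rejected because it mispredicts the transition in which the agent holds nothing, which is exactly the kind of interpretable, human-readable rule filtering that makes the learned transition model inspectable.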