Academic Paper

Goal-oriented Trajectories for Efficient Exploration
Document Type
Working Paper
Subject
Computer Science - Machine Learning
Statistics - Machine Learning
Language
English
Abstract
Exploration is a difficult challenge in reinforcement learning, and even recent state-of-the-art curiosity-based methods rely on the simple epsilon-greedy strategy to generate novelty. We argue that pure random walks fail to properly expand the exploration area in most environments, and we propose to replace single random action choices with the selection of a random goal followed by several steps in its direction. This approach is compatible with any curiosity-based exploration method and any off-policy reinforcement learning agent, and it generates longer and safer trajectories than individual random actions. To illustrate this, we present a task-independent agent that learns to reach coordinates in screen frames, and demonstrate its ability to explore in the game Super Mario Bros., significantly improving the score of a baseline DQN agent.
Comment: ICML 2018 Exploration in RL Workshop, videos: https://sites.google.com/view/got-exploration
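The abstract's central claim, that multi-step trajectories toward randomly sampled goals expand the explored area faster than single random actions, can be illustrated with a minimal self-contained sketch. Everything below is an illustrative assumption rather than the paper's code: a toy 2-D grid stands in for the environment, and a perfect greedy goal-reaching policy stands in for the agent that the paper trains to reach screen coordinates.

```python
import random

GRID = 50  # side length of a toy 2-D grid standing in for the environment (assumption)

def random_walk(steps):
    """Baseline exploration: one uniformly random action per step,
    as in the epsilon-greedy strategy the abstract criticises."""
    x = y = GRID // 2
    visited = {(x, y)}
    for _ in range(steps):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 0), GRID - 1)
        y = min(max(y + dy, 0), GRID - 1)
        visited.add((x, y))
    return visited

def goal_oriented_walk(steps, goal_horizon=10):
    """Goal-oriented exploration: sample a random goal coordinate,
    then step toward it for up to `goal_horizon` steps before
    sampling a new goal."""
    x = y = GRID // 2
    visited = {(x, y)}
    t = 0
    while t < steps:
        gx, gy = random.randrange(GRID), random.randrange(GRID)  # random goal
        for _ in range(goal_horizon):
            if t >= steps or (x, y) == (gx, gy):
                break
            # Perfect greedy goal-reaching policy (assumption); the paper
            # instead learns a policy that reaches screen coordinates.
            if x != gx:
                x += 1 if gx > x else -1
            else:
                y += 1 if gy > y else -1
            visited.add((x, y))
            t += 1
    return visited

if __name__ == "__main__":
    random.seed(0)
    budget = 5000
    print("random-walk coverage:   ", len(random_walk(budget)))
    print("goal-oriented coverage: ", len(goal_oriented_walk(budget)))
```

Running the sketch for the same step budget typically shows the goal-oriented walk visiting several times as many distinct cells as the pure random walk, which conveys the intuition behind replacing single random actions with goal-directed trajectories.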