Academic Paper

HIGhER: Improving instruction following with Hindsight Generation for Experience Replay
Document Type
Conference
Source
2020 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 225-232, Dec. 2020
Subject
Computing and Processing
General Topics for Engineers
Robotics and Control Systems
Signal Processing and Analysis
Trajectory
Training
Task analysis
Visualization
Navigation
Linguistics
Generators
Reinforcement Learning
Representation Learning
Natural Language Processing
Language
Abstract
Language creates a compact representation of the world and allows the description of unlimited situations and objectives through compositionality. While these characteristics may foster instructing, conditioning, or structuring interactive agent behavior, it remains an open problem to correctly relate language understanding and reinforcement learning in even simple instruction following scenarios. This joint learning problem is alleviated through expert demonstrations, auxiliary losses, or neural inductive biases. In this paper, we propose an orthogonal approach called Hindsight Generation for Experience Replay (HIGhER) that extends the Hindsight Experience Replay approach to the language-conditioned policy setting. Whenever the agent does not fulfill its instruction, HIGhER learns to output a new directive that matches the agent's trajectory, and it relabels the episode with a positive reward. To do so, HIGhER learns to map a state into an instruction by using past successful trajectories, which removes the need for external expert interventions to relabel episodes as in vanilla HER. We show the efficiency of our approach in the BabyAI environment, and demonstrate how it complements other instruction following methods.
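The sketch below illustrates the relabeling loop the abstract describes: successful episodes supervise a state-to-instruction generator, and failed episodes are stored a second time with a generated hindsight instruction and a positive terminal reward. It is a minimal, hypothetical Python sketch, not the paper's implementation; the environment interface (`env.reset`, `env.step`, `env.sample_instruction`), the `policy` callable, and the lookup-table generator (the paper trains a neural instruction generator) are all assumptions made for illustration.

```python
# Minimal sketch of a HIGhER-style relabeling loop (hypothetical interfaces;
# the real method trains a neural state-to-instruction mapper, approximated
# here by a lookup table built from successful episodes).
from collections import deque


class InstructionGenerator:
    """Stand-in for the learned mapping from an achieved state to an instruction."""

    def __init__(self):
        self.examples = {}  # state signature -> instruction observed on success

    def fit(self, final_state, instruction):
        # In the paper this is a supervised update of a language generator;
        # here we simply memorize the (state, instruction) pair.
        self.examples[final_state] = instruction

    def generate(self, final_state):
        # Return an instruction matching the achieved state, if one is known.
        return self.examples.get(final_state)


def run_episode(env, policy, instruction):
    """Roll out one episode; return the trajectory, success flag, and final state."""
    trajectory, state, done = [], env.reset(), False
    while not done:
        action = policy(state, instruction)
        next_state, reward, done = env.step(action)
        trajectory.append((state, instruction, action, reward, next_state))
        state = next_state
    success = trajectory[-1][3] > 0  # sparse reward: positive only on success
    return trajectory, success, state


def higher_training_loop(env, policy, episodes=1000):
    buffer = deque(maxlen=100_000)
    generator = InstructionGenerator()
    for _ in range(episodes):
        instruction = env.sample_instruction()
        trajectory, success, final_state = run_episode(env, policy, instruction)
        buffer.extend(trajectory)
        if success:
            # Successful episodes provide supervision for the generator.
            generator.fit(final_state, instruction)
        else:
            # Hindsight relabeling: describe what was actually achieved and
            # store the episode again with a positive terminal reward.
            hindsight = generator.generate(final_state)
            if hindsight is not None:
                relabeled = [(s, hindsight, a, r, ns) for s, _, a, r, ns in trajectory]
                s, _, a, _, ns = relabeled[-1]
                relabeled[-1] = (s, hindsight, a, 1.0, ns)
                buffer.extend(relabeled)
        # ... an off-policy RL update of `policy` from `buffer` would go here ...
    return buffer, generator
```

The key design point, as in the abstract, is that the generator is trained only from the agent's own successful trajectories, so no external expert is needed to relabel failed episodes as in vanilla HER.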