Academic Article

Safe Reinforcement Learning in Uncertain Contexts
Document Type
Periodical
Source
IEEE Transactions on Robotics, vol. 40, pp. 1828–1841, 2024
Subject
Robotics and Control Systems
Computing and Processing
Components, Circuits, Devices and Systems
Heuristic algorithms
Robots
Safety
Uncertainty
Current measurement
Cameras
Dynamical systems
Frequentist bounds
multiclass classification
safe reinforcement learning
Language
English
ISSN
1552-3098
1941-0468
Abstract
When deploying machine learning algorithms in the real world, guaranteeing safety is an essential requirement. Existing safe learning approaches typically consider continuous variables, i.e., regression tasks. However, in practice, robotic systems are also subject to discrete, external environmental changes, e.g., having to carry objects of certain weights or operating on frozen, wet, or dry surfaces. Such influences can be modeled as discrete context variables. In the existing literature, such contexts, if considered at all, are mostly assumed to be known. In this work, we drop this assumption and show how to perform safe learning when the context variables cannot be measured directly. To this end, we derive frequentist guarantees for multiclass classification, which allow us to estimate the current context from measurements. Furthermore, we propose an approach for identifying contexts through experiments. We discuss under which conditions the theoretical guarantees are retained and demonstrate the applicability of our algorithm on a Furuta pendulum, where weights observed through camera measurements serve as contexts.
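The abstract's core idea — estimating a discrete context from measurements with a frequentist guarantee on the classifier's error — can be illustrated with a minimal sketch. This is not the paper's algorithm: the toy data, the nearest-centroid classifier, and the use of a Hoeffding bound on held-out misclassification are all assumptions chosen for illustration.

```python
import numpy as np

def hoeffding_error_bound(n_errors, n, delta=0.05):
    """Frequentist upper bound on the true misclassification rate:
    holds with probability at least 1 - delta (Hoeffding's inequality)."""
    return n_errors / n + np.sqrt(np.log(1.0 / delta) / (2.0 * n))

rng = np.random.default_rng(0)

def sample(n):
    # Toy stand-in for two contexts (e.g., light vs. heavy weight) observed
    # through a noisy scalar measurement; means and noise level are made up.
    labels = rng.integers(0, 2, size=n)
    x = np.where(labels == 0, 1.0, 2.0) + 0.3 * rng.standard_normal(n)
    return x, labels

# Fit a nearest-centroid context classifier on training data.
x_tr, y_tr = sample(200)
c0, c1 = x_tr[y_tr == 0].mean(), x_tr[y_tr == 1].mean()
predict = lambda x: (np.abs(x - c1) < np.abs(x - c0)).astype(int)

# Evaluate on a held-out set and bound the true error rate.
x_ho, y_ho = sample(200)
n_errors = int((predict(x_ho) != y_ho).sum())
bound = hoeffding_error_bound(n_errors, len(y_ho), delta=0.05)
print(f"empirical error {n_errors / len(y_ho):.3f}, 95% upper bound {bound:.3f}")
```

A safe-learning layer could then treat the context estimate as reliable only when this upper bound is small enough, which mirrors the abstract's point that guarantees carry over only under certain conditions.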