Academic Paper

Robot Failure Mode Prediction with Explainable Machine Learning
Document Type
Conference
Source
2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), pp. 61-66, Aug. 2020
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
Engineering Profession
General Topics for Engineers
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Robots
Predictive models
Data models
Measurement
Additives
Analytical models
Logistics
Language
English
ISSN
2161-8089
Abstract
The ability to determine whether a robot's grasp has a high chance of failing, before it actually does, can save significant time and avoid failures by planning for re-grasping or changing the strategy for that particular case. Machine Learning (ML) offers one way to learn to predict grasp failure from historical data consisting of a robot's attempted grasps alongside labels indicating success or failure. Unfortunately, the most powerful ML models are black-box models that do not explain the reasons behind their predictions. In this paper, we investigate how ML can be used to predict robot grasp failure and study the tradeoff between accuracy and interpretability by comparing interpretable (white-box) ML models that are inherently explainable with more accurate black-box ML models that are inherently opaque. Our results show that one does not necessarily have to compromise accuracy for interpretability if an explanation generation method, such as SHapley Additive exPlanations (SHAP), is used to add explainability to the accurate predictions made by black-box models. An explanation of a predicted fault can guide an efficient choice of corrective action in the robot's design to avoid future failures.
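To make the abstract's approach concrete, the sketch below shows the general pattern it describes: train a black-box classifier on grasp-attempt records, then use SHAP to attribute an individual failure prediction to input features. This is a minimal illustration, not the paper's code; the dataset is synthetic, the feature names (grip_force, approach_angle, object_width, finger_gap) and the labeling rule are invented for illustration, and a gradient-boosted tree ensemble stands in for whatever black-box model the authors used.

```python
# Hypothetical sketch: black-box grasp-failure prediction explained with SHAP.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Invented feature names for a grasp attempt; the paper's actual features differ.
feature_names = ["grip_force", "approach_angle", "object_width", "finger_gap"]

# Synthetic data standing in for historical grasp attempts.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, len(feature_names)))
# Placeholder labeling rule: 1 = grasp failure, 0 = success.
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.3, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box model: accurate, but not inherently interpretable.
model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# TreeExplainer computes SHAP values for tree ensembles; for a binary
# gradient-boosted model, each value is a per-feature contribution to the
# log-odds of the "failure" class.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)

# Explain one predicted grasp attempt: positive values push the prediction
# toward "failure", negative values toward "success".
i = 0
print(f"Predicted failure probability: {model.predict_proba(X_test[i:i+1])[0, 1]:.2f}")
for name, value in zip(feature_names, shap_values[i]):
    print(f"  {name}: {value:+.3f}")
```

The per-feature attributions printed at the end are the kind of prediction-level explanation the abstract refers to: an engineer can see which measured quantities drove a predicted failure and choose a corrective action (e.g., re-grasping or a design change) accordingly, without giving up the accuracy of the black-box model.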