Abstract: Artificial neural networks (ANNs) have made their way into marine robotics in recent years, where they are used in control and perception systems, to name a few examples. At the same time, the black-box nature of ANNs is responsible for key challenges related to interpretability and trustworthiness, which need to be addressed if ANNs are to be deployed safely in real-life operations. In this paper, we implement three XAI methods to provide explanations of the decisions made by a deep reinforcement learning agent: Kernel SHAP, LIME and Linear Model Trees (LMTs). The agent was trained via Proximal Policy Optimization (PPO) to perform automatic docking of a fully-actuated vessel. We discuss the properties and suitability of the three methods, and juxtapose them with important attributes of the docking agent to provide context for the explanations.
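To make the setting concrete, the following is a minimal sketch (not the authors' code) of how Kernel SHAP can attribute one of a docking policy's control outputs to its state inputs. The feature names and the stand-in policy function are hypothetical placeholders for the trained PPO actor.

```python
import numpy as np
import shap

# Hypothetical state features for a fully-actuated vessel near the dock.
FEATURES = ["x_err", "y_err", "heading_err", "surge_vel", "sway_vel", "yaw_rate"]

def policy_action(states: np.ndarray) -> np.ndarray:
    """Stand-in for the trained PPO actor: maps states to one control output.

    In practice this would call the trained network (e.g. returning the surge
    thrust command); a simple linear rule keeps the sketch self-contained.
    """
    weights = np.array([-0.8, -0.1, -0.3, -0.5, -0.05, -0.2])
    return states @ weights

# Background samples approximate the state distribution seen during docking.
background = np.random.default_rng(0).normal(size=(100, len(FEATURES)))

# Kernel SHAP fits a locally weighted linear model over feature coalitions.
explainer = shap.KernelExplainer(policy_action, background)

# Explain the action taken in one particular state.
state = np.array([[5.0, 1.0, 0.2, 0.5, 0.0, 0.01]])
shap_values = explainer.shap_values(state)
print(dict(zip(FEATURES, np.ravel(shap_values))))
```

LIME and Linear Model Trees follow the same pattern of approximating the policy locally (or globally, for LMTs) with an interpretable surrogate; only the surrogate family and sampling strategy differ.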
Keywords: Marine control systems, Explainable Artificial Intelligence, Deep Reinforcement Learning, Autonomous ships, Docking