The behavioral policy learned by the AI agent is therefore shaped by the training data, the state representation of the environment, and the reward given by the environment. This makes it possible to optimize the behavior towards multiple desired objectives, such as safety and effectiveness.
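One common way to steer a policy towards several objectives at once is a weighted multi-term reward. The sketch below is purely illustrative, not the center's actual reward design; all signal names (progress, collision flag, time-to-collision) and weights are assumptions:

```python
# Illustrative sketch of a multi-objective reward: an effectiveness term
# (progress made) is traded off against safety penalties (collision,
# low time-to-collision). Weights are hypothetical tuning parameters.

def shaped_reward(progress: float, collision: bool, ttc: float,
                  w_eff: float = 1.0, w_safe: float = 5.0) -> float:
    """Combine effectiveness and safety into a single scalar reward."""
    effectiveness = progress              # e.g. metres travelled this step
    safety_penalty = 0.0
    if collision:
        safety_penalty += 10.0            # hard penalty on any collision
    if ttc < 2.0:                         # assumed 2 s time-to-collision margin
        safety_penalty += 2.0 - ttc       # grows as the margin shrinks
    return w_eff * effectiveness - w_safe * safety_penalty
```

Raising `w_safe` relative to `w_eff` shifts the learned behavior towards more conservative driving, which is the lever this paragraph alludes to.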
Dependable = Explainable and Reliable
The flexibility of designing the behavior offers the chance to tailor it to application needs. To make the agent trustworthy, additional measures for explainability and reliability are built in. Transparency is introduced through explainable actions and understandable decisions of the AI: hierarchical rules are extracted from the learned strategies. Reliability is ensured through explicit detection and treatment of risks in the individual application scenario. Where needed, safety can be guaranteed by formally verifying the extracted strategies against a formal representation of the scenario and its safety constraints. Through this unified design method, which makes the algorithms both explainable and reliable, a truly dependable AI is created.
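The rule-extraction idea can be sketched in miniature: query a trained policy on sampled states and distill its decisions into a human-readable threshold rule. The toy policy, the `gap_to_lead` feature, and the rule shape below are all assumptions for illustration; a real system might distill into decision trees instead:

```python
# Hedged sketch of rule extraction from a learned strategy. The policy
# here is a stub standing in for a trained RL agent; real extraction
# would operate on the actual learned policy.

def toy_policy(gap_to_lead: float) -> str:
    """Stand-in for a learned policy: brake when the gap is small."""
    return "brake" if gap_to_lead < 15.0 else "keep_speed"

def extract_threshold_rule(policy, states):
    """Distill the policy's decisions on sampled states into a rule of
    the form 'IF gap <= t THEN brake' by finding the largest braking state."""
    braking = [s for s in states if policy(s) == "brake"]
    return max(braking) if braking else None

states = [float(s) for s in range(0, 50, 5)]
t = extract_threshold_rule(toy_policy, states)
print(f"IF gap_to_lead <= {t} THEN brake")   # human-readable extracted rule
```

A rule in this explicit form is exactly what makes the formal-logic verification mentioned above possible: the extracted strategy, unlike the raw network, can be checked against a formal scenario model.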
Use case: Autonomous driving
Since dependable reinforcement learning is best applied within a design framework geared towards safety, we are developing a dependable driving assistant to demonstrate our technology within the ADA Lovelace Center for Analytics, Data and Applications.
The typical processing pipeline of autonomous vehicles consists of perception, behavioral planning, motion planning, and actuation. We focus on implementing the behavioral planning stage with reinforcement learning.
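In such a pipeline, the behavioral planner typically selects a discrete high-level maneuver that the motion planner then turns into a trajectory. The sketch below, with assumed maneuver names and an assumed safety-masking step, shows how an RL value function could slot into that stage:

```python
# Illustrative sketch of a behavioral-planning stage: the RL agent scores
# discrete maneuvers (Q-values), and a safety layer masks out inadmissible
# ones before the best remaining maneuver is passed to motion planning.
# Maneuver set and masking interface are assumptions, not the actual system.
from enum import Enum

class Maneuver(Enum):
    KEEP_LANE = 0
    CHANGE_LEFT = 1
    CHANGE_RIGHT = 2
    BRAKE = 3

def behavioral_planner(q_values: dict, admissible: set) -> Maneuver:
    """Pick the highest-valued maneuver among those the safety layer allows."""
    masked = {m: q for m, q in q_values.items() if m in admissible}
    return max(masked, key=masked.get)

# Example: the agent prefers a left lane change, but the safety layer
# forbids it, so the planner falls back to the best admissible maneuver.
q = {Maneuver.KEEP_LANE: 0.4, Maneuver.CHANGE_LEFT: 0.9, Maneuver.BRAKE: 0.1}
choice = behavioral_planner(q, {Maneuver.KEEP_LANE, Maneuver.BRAKE})
print(choice)   # the selected maneuver after safety masking
```

The masking step is one concrete way the reliability measures described earlier can constrain the learned behavior at run time.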