Explainable AI Demonstrator

Our research group Comprehensible Artificial Intelligence (CAI) has developed this demonstrator to show how innovative Explainable AI (XAI) methods can improve the comprehensibility and trustworthiness of AI systems.

Based on a use case from image-based quality control, we demonstrate how machine learning in industry can be usefully complemented by XAI methods, helping companies achieve greater efficiency and higher product quality.

We invite you to explore our video podcast and actively participate in shaping the future of AI in industry.

What can you expect from our video podcast?

  • Explanation Methods for Image-Based Analytics: Our demonstrator presents innovative approaches to explaining AI decisions in an industrial context. We focus on making the defect classification of components by a Convolutional Neural Network (CNN) comprehensible.
  • Local explanations with near hits and near misses: We show how to create local explanations by presenting, for a given image of a component, similar examples of the same class (near hits) as well as of the opposite class (near misses). This makes the decision boundary of the learned model traceable and provides a basis for targeted adaptation, for example by adding specific training examples (see the first sketch after this list).
  • Global explanations with prototypes: We introduce global explanations by presenting a typical proxy instance (prototype) of the same class. This gives a general insight into the decision criteria of the classification model.
  • Relevance-based visual and verbalized explanations: Our demonstrator combines relevance-based visual explanations, which use heatmaps to highlight the image regions relevant to a defect, with textual explanations. The latter are based on logical theories and provide deeper insights into the context of defect patterns, going beyond classical relevance-based methods (see the second sketch below).
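
The following minimal sketch illustrates how such example-based explanations (near hits, near misses, and a class prototype) could be computed. It assumes embeddings taken from the penultimate layer of a trained CNN and a labeled reference set; all names (example_based_explanations, ref_embs, etc.) are illustrative and not taken from the demonstrator.

```python
import numpy as np

def example_based_explanations(query_emb, ref_embs, ref_labels, predicted_label, k=3):
    """Return indices of near hits, near misses, and a class prototype.

    query_emb:  (d,) embedding of the component image to explain
    ref_embs:   (n, d) embeddings of labeled reference images
    ref_labels: (n,) class labels, e.g. 'defect' / 'ok'
    """
    dists = np.linalg.norm(ref_embs - query_emb, axis=1)   # distances in embedding space
    same = ref_labels == predicted_label                   # mask: same class as the prediction
    same_idx = np.where(same)[0]
    other_idx = np.where(~same)[0]

    # Near hits: closest reference images of the predicted class.
    near_hits = same_idx[np.argsort(dists[same_idx])[:k]]
    # Near misses: closest reference images of the opposite class(es),
    # i.e. examples just across the decision boundary.
    near_misses = other_idx[np.argsort(dists[other_idx])[:k]]

    # Prototype: the reference image of the predicted class that lies
    # closest to the class mean in embedding space.
    class_mean = ref_embs[same_idx].mean(axis=0)
    prototype = same_idx[np.argmin(np.linalg.norm(ref_embs[same_idx] - class_mean, axis=1))]

    return near_hits, near_misses, prototype
```

In practice, the returned indices would be rendered as the corresponding component images next to the query image, so that users can compare the instance to be explained with its near hits, near misses, and the class prototype.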

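The demonstrator's exact relevance method is not specified here; as one possible illustration, the sketch below computes an occlusion-based relevance heatmap for a single image. predict_proba is assumed to be a callable wrapping the trained CNN and returning class probabilities; the parameter names are illustrative.

```python
import numpy as np

def occlusion_heatmap(predict_proba, image, target_class, patch=16, stride=8, fill=0.0):
    """Relevance heatmap by occlusion: hide one image region at a time and
    record how much the predicted defect probability drops."""
    h, w = image.shape[:2]
    base = predict_proba(image)[target_class]              # score on the unmodified image
    heat = np.zeros((h, w))
    hits = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill      # grey out one patch
            drop = base - predict_proba(occluded)[target_class]
            heat[y:y + patch, x:x + patch] += drop         # relevance = score drop
            hits[y:y + patch, x:x + patch] += 1
    return heat / np.maximum(hits, 1)                      # average over overlapping patches
```

Regions with a large score drop are highlighted in the resulting heatmap; in a second step, such regions could be described verbally, for example by matching them against logical rules over interpretable defect attributes.
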
Why is this so important?

These explanation methods enable users and domain experts to gain a deeper understanding of the underlying classification models. They can better assess the limitations of the learned black-box model, identify weaknesses, and contribute to model improvement through feedback. In this way, we contribute to greater transparency and traceability and, as a consequence, to increased confidence in the application of AI systems in industry.