AI Series: Explainable AI

What is explainable AI? Why do we need it? And most importantly, why is it growing in significance?

In our interview, Professor Ute Schmid, head of the Comprehensible Artificial Intelligence project group and professor at the University of Bamberg, and Dominik Seuss, head of the Intelligent Systems group and of the Image Analysis and Pattern Recognition business unit, describe how AI is constantly expanding into more and more applications and areas of implementation, and explain why humans require transparency and explainability in AI systems.

 

In the context of machine learning, experts differentiate between “black-box” and “white-box” methods. This designation is used to indicate whether the learned model produces results in a format that can be understood by a human being. Why do black-box methods exist in the first place – why aren’t white-box methods standard?

 

Prof. Schmid: Different machine learning approaches are each suited to solving different types of problems. White-box methods are very successful in applications where the attributes used to describe the data to be classified are clearly defined. Generally speaking, that applies when the data relates to unique, nameable properties such as size, weight, color, or demographic characteristics – that is, when the data is available in tabular format or as structured objects.

With image data, by contrast – where you’re dealing with contrasts, color gradients, and textures – it’s not always easy to decide which information is relevant for making decisions, and this kind of information is also difficult to put into words. This is where black-box methods such as convolutional neural networks (CNNs) come into play: they allow end-to-end learning, that is, learning performed directly on the raw data without the need for feature extraction.


Dominik Seuss: There are advantages to both methods. Black-box approaches tend to produce more precise predictions because they don’t have to extract any attributes that can be understood by humans. They are able to analyze highly complex, high-dimensional dependencies and that frequently makes them a lot more powerful than white-box methods. It is precisely this quality, however, that makes it difficult to determine whether the network is employing relevant or purely random attributes when it makes decisions. For example, it might be interpreting the background color as a key parameter in its decisions.
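The background-color pitfall can be made concrete with a deliberately tiny sketch (purely illustrative; the feature names and data are hypothetical, not from the speakers’ experiments). Two features carry the same signal during training, and a least-squares fit, standing in for a learned classifier, assigns weight to both, with no way to tell the relevant attribute from the spurious one:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
label = rng.integers(0, 2, n).astype(float)

# Two noisy copies of the label: one stands in for a genuinely relevant
# attribute (object shape), the other for a spurious one (background color)
# that merely happens to correlate with the class in the training data.
shape_feature = label + rng.normal(0.0, 0.1, n)
background_color = label + rng.normal(0.0, 0.1, n)
X = np.column_stack([shape_feature, background_color])

# Least-squares linear fit as a minimal stand-in for a trained model.
weights, *_ = np.linalg.lstsq(X, label, rcond=None)

# Both features receive substantial weight: from the data alone, the model
# cannot distinguish the relevant attribute from the spurious one.
```

If the background color no longer correlates with the class at deployment time, such a model silently loses accuracy, which is exactly why one wants to inspect what it has actually learned.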

Explainable AI is often mentioned in the context of the “third wave of AI.” How are the two related?

 

Prof. Schmid: The term “third wave” is based on historical phases in artificial intelligence research, each of which is defined by the dominant methods of the time.

The first wave of AI is best summed up in the term “describe.” This refers to knowledge-based methods, knowledge representation, and automatic reasoning techniques. These approaches require that knowledge in a particular area be modelled manually, in a process referred to as “knowledge engineering.”

The second wave, which can be summed up as “learn,” focused on purely data-driven, statistical approaches to machine learning, such as support-vector machines and artificial neural networks. Explainability to humans plays no role when the quality of these methods is evaluated. The only relevant criterion is predictive performance, that is, the estimated number of errors the learned model will make when confronted with unseen data.

As AI moves out of the research laboratories and into an increasing number of applications, we are starting to recognize that if we want to use machine learning in practice, it is critically important to ensure transparency and explainability, as well as adaptivity, that is, the ability of the model to adapt to different contexts in complex socio-technical systems. That insight has triggered the third wave of AI, which is summed up in the term “explain.”

The Comprehensible Artificial Intelligence project group is a collaboration between the University of Bamberg and Fraunhofer IIS. The group researches the potential of explainable machine learning. Why was the group founded in the first place? What are the practical applications of your findings?  

 

Prof. Schmid: Explainable AI refers to methods that make the decisions of AI systems transparent and explainable. This applies in particular to systems based on machine learning. Complex neural networks can be opaque, and their decisions difficult to explain, even for the developers themselves.

Currently, extensive research is being conducted on different methods to highlight and visualize the key aspects of the input data that influenced the neural network’s decision. Take, for example, a diagnosis based on a medical image. The developer can use visualizations to determine whether the learned model is accurate or overfitted – for example, because it is correlating a relevant attribute with an irrelevant one, such as the background color. Users – in this case, medical professionals – are more likely to need explanations of the diagnosis itself, and these types of explanations are often best expressed verbally. In the medical example, the diagnostic decision could be explained in a way that relates to the input data, for example, that a particular type of tumor was identified due to the location and type of tissue affected.


Dominik Seuss: Right now, we’re working on pain recognition, for example. There isn’t much data available for this use case, so neural networks tend to identify false correlations in the data. In the worst cases, they simply try to “memorize” the training data. So, we use visualization techniques, or techniques such as layer-wise relevance propagation and others, that show us which parts of the input image had the greatest impact on the neural network’s decision. In the next stage, we can integrate existing knowledge from psychology experts into the network modelling process. We can do that by building targeted restrictions into the learning process. This means that we prevent the network from learning potential correlations that are physiologically impossible. This allows us to produce robust models, even when we only have limited data available.
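One of the simplest ways to see which image regions drive a model’s decision, in the spirit of the visualization techniques mentioned here (though much cruder than layer-wise relevance propagation), is occlusion analysis: slide a neutral patch over the input and record how far the score drops. The sketch below is self-contained and uses a hypothetical stand-in classifier, not an actual network:

```python
import numpy as np

# Hypothetical stand-in for a trained network: scores an 8x8 "image" by the
# mean brightness of its centre 4x4 region. Any callable image -> score works.
def classify(img):
    return img[2:6, 2:6].mean()

def occlusion_map(img, score_fn, patch=2):
    """Slide a neutral patch over the image; a region's relevance is how much
    the model's score drops when that region is occluded."""
    base = score_fn(img)
    rel = np.zeros_like(img, dtype=float)
    for i in range(0, img.shape[0], patch):
        for j in range(0, img.shape[1], patch):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # blank out one patch
            rel[i:i + patch, j:j + patch] = base - score_fn(occluded)
    return rel

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0              # the genuinely relevant region is bright
rel = occlusion_map(img, classify)
# Relevance concentrates on the centre patches; background patches score 0.
```

If the resulting map highlighted the background instead of the face region, that would be exactly the kind of false correlation the pain-recognition work aims to rule out.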


Prof. Schmid: Of course, explainability is not only relevant to medical diagnostics; it is also critical for all kinds of fields in which decisions can have serious consequences, for example when it comes to controlling production processes or mobility and autonomous vehicles.

To what extent are the interdisciplinary methods developed by your group – particularly those related to explainable AI for medical and automotive applications – intended to benefit the ADA Lovelace Center?

 

Prof. Schmid: I’m currently supervising several PhD students, both in the Image Analysis and Pattern Recognition group and at the ADA Lovelace Center. I think it’s very exciting and inspiring to work together with colleagues who draw on wide-ranging technical perspectives and areas of expertise. The approaches we develop as part of our research offer real added value, not least because they’ve been proven in practice.

In the Comprehensible Artificial Intelligence project group, we focus on generating multimodal explanations, a research area inspired by work in the field of cognitive psychology. Another key aspect in assessing a system’s decisions is to know how the system came to a particular decision.

 

Dominik Seuss: Knowing that helps us solve a wide range of problems. One example is when we fuse complementary data sources.

Since every decision made by a classifier is a source of uncertainty, our fusion algorithms provide opportunities to respond dynamically to changing environmental conditions.

Let’s say a vehicle sensor is damaged, and as a result, it’s supplying noisy data. The classifier recognizes that the data doesn’t reflect the properties it was trained to process and assigns a high level of uncertainty to any decisions based on this data. As a result, when the data is fused, the system may respond either by relying more heavily on other modalities or by triggering an emergency, for example, pulling off to the side of the road.
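The idea of weighting modalities by their uncertainty can be sketched in a few lines of Python. This is a minimal illustration under assumed conventions (inverse-uncertainty weighting, a fixed fallback threshold); it is not the group’s actual fusion algorithm:

```python
import numpy as np

def fuse(predictions, uncertainties, max_uncertainty=0.9):
    """Fuse per-modality class probabilities, weighting each modality by the
    inverse of its estimated uncertainty. If every modality is too uncertain,
    report a fallback so the system can trigger an emergency response."""
    preds = np.asarray(predictions, dtype=float)
    unc = np.asarray(uncertainties, dtype=float)
    if np.all(unc >= max_uncertainty):
        return None, True                    # no trustworthy modality left
    weights = 1.0 / (unc + 1e-6)             # low uncertainty -> high weight
    weights /= weights.sum()
    fused = (weights[:, None] * preds).sum(axis=0)
    return fused / fused.sum(), False

# Camera supplies noisy data (high uncertainty); lidar is confident:
camera = [0.5, 0.5]                          # uninformative prediction
lidar = [0.9, 0.1]                           # clear decision for class 0
fused, fallback = fuse([camera, lidar], [0.8, 0.1])
# The fused estimate leans heavily toward the confident lidar modality.
```

When both modalities report uncertainty above the threshold, the function signals the fallback case, corresponding to the emergency response described above.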

Definitions and explanations

  • Explainable AI refers to methods that help to make the decisions of AI systems transparent and interpretable. This is necessary, for example, when an AI system searches for its own solution to a problem – for instance, when a deep learning approach bases its decisions on attributes that have not been specified by the developer.

    The term “third wave” is based on historical phases in artificial intelligence research, each of which is defined by the dominant methods of the time – the first wave can be labelled as “describe”, the second as “learn”, and the third as “explain”.

  • The Comprehensible Artificial Intelligence project group is a collaboration between the University of Bamberg and Fraunhofer IIS. The group is supervised by Professor Ute Schmid.

  • Decision rules are represented as tree diagrams with attributes (such as color, temperature, etc.) as nodes, values (e.g., red/yellow/green, high/low, or numerical ranges) as branches, and classes as leaves.

    Decision tree algorithms are among the earliest learning algorithms.

  • Linear regression is a common statistical method used to model the relationship between one or more independent variables (referred to as “attributes” in machine learning) and a dependent variable (referred to as the “target prediction value” in machine learning) in the form of a linear equation.

  • Artificial neural networks are roughly designed to mimic the human brain and its neurons.

  • A mathematical method for classifying objects.

  • In the context of machine learning, one can differentiate between “black-box” and “white-box” methods. This designation is used to indicate whether the machine-learned model produces results in a format that can be understood by a human being. In this sense, artificial neural networks are black boxes, while decision trees or linear regression models are white boxes.

  • Generally speaking, all machine learning is data driven. Observed regularities in data are used to induce a generalized model. Some machine learning techniques incorporate both data and prior knowledge. The different machine learning strategies can be categorized by their level of data intensity. For example, (deep) neural networks require large quantities of data, while decision tree methods and regression models can also be applied to smaller datasets.

  • Traditional approaches to machine learning require data to be input as attribute vectors (similar to tables, but with one instance per line). This requirement often necessitates complex preprocessing, known as “feature extraction.” For example, when processing image data, color, texture, and other even more complex shape attributes must be extracted using image processing techniques before the data can be input into a machine learning operation. Specialized architectures for artificial neural networks, for example convolutional neural networks (CNNs), enable networks to learn directly from raw data. Known as “end-to-end learning,” this is the process through which raw data is transformed into model predictions.

  • When people explain a specific fact, the explanation is usually given verbally, sometimes also by a typical example, or by a contrastive example in which the requirements have explicitly not been met (a “near miss”). Depending on the application, the relevant data may be displayed as images, sound, text, or structured data such as attribute vectors, trees, or graphs.

    If a person wishes to explain why they think a particular image depicts a cat (or a specific type of tumor), they may explain this verbally (because the animal has fur, claws, and whiskers) or visually, by describing the shape of the ears and the whiskers. They may present a typical image of a cat and note the similarities between the images or show a picture of a lynx as a counterexample. The “modalities” of explanations are the different ways in which the information is communicated. If you want to explain learned models generated by machine learning in ways that are most useful in the relevant context, you need algorithmic procedures capable of automatically generating these types of explanations.

  • Layer-wise relevance propagation (LRP) is a technique in explainable machine learning used primarily in the classification of images. LRP can be used to identify the specific information from the input data that the (deep) neural network has employed to make a classification decision. This technique can reveal “Clever Hans” phenomena in which the network is making decisions based on irrelevant information; for example, it recognizes a horse in a picture simply because the image contains grass. LRP was developed at the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute.
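The core of LRP, the so-called epsilon rule, can be sketched in a few lines of numpy. The two-layer ReLU network below uses made-up random weights purely for illustration and is not a reference implementation; the key property shown is that relevance is redistributed layer by layer and, with zero biases, is conserved:

```python
import numpy as np

rng = np.random.default_rng(0)

# A deliberately tiny two-layer ReLU network with fixed random weights;
# it stands in for a trained model, so all sizes and values are illustrative.
W1 = rng.normal(size=(4, 6))   # input (4 features) -> hidden (6 units)
W2 = rng.normal(size=(6, 1))   # hidden (6 units)  -> output score (1)

def lrp_layer(activations, W, z, relevance, eps=1e-6):
    """LRP epsilon rule: redistribute relevance from a linear layer's output
    back to its inputs, in proportion to each input's contribution to z."""
    denom = z + eps * np.where(z >= 0.0, 1.0, -1.0)  # stabilized outputs
    s = relevance / denom
    return activations * (W @ s)

def explain(x):
    z1 = x @ W1                        # pre-activations of the hidden layer
    a1 = np.maximum(z1, 0.0)           # ReLU
    z2 = a1 @ W2                       # output score (no biases anywhere)
    R2 = z2                            # start: the score is the relevance
    R1 = lrp_layer(a1, W2, z2, R2)     # output layer -> hidden layer
    R0 = lrp_layer(x, W1, z1, R1)      # hidden layer -> input features
    return z2.item(), R0

score, relevance = explain(np.array([1.0, 0.0, -0.5, 2.0]))
# With zero biases the rule conserves relevance: the per-input relevances
# sum (up to epsilon) to the output score being explained.
```

An input feature that is zero receives zero relevance, and features that pushed the score up or down receive positive or negative relevance respectively; in image classification the same per-pixel relevances are rendered as a heatmap.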
