Project group: Explainable Artificial Intelligence

How can the results arrived at by deep neural networks (DNNs) and the hidden logic they employ be rendered more intelligible and explicable?

Transparency and Intelligibility of AI Systems

For machine learning to be used in practice, it is vital that such applications be intelligible and explainable.

Explainable AI is a key topic of current AI research – the “Third wave of AI,” following on from “Describing” (First wave: knowledge-based systems) and “Statistical learning” (Second wave). It is becoming increasingly clear that purely data-driven machine learning is unsuitable for many areas of application unless it is combined with further methods.

In collaboration with the University of Bamberg, Fraunhofer IIS has now set up an “Explainable Artificial Intelligence” project group. Its purpose is to develop explainable machine learning methods:

  • We are working on hybrid approaches to machine learning that combine black-box methods, such as (deep) neural networks, with methods applied in interpretable machine learning (white-box methods). Such methods enable a combination of logic and learning – and, in particular, a type of learning that integrates human knowledge.
  • We are developing methods of interactive and incremental learning for areas of application in which there is very limited data available and the labeling of that data is problematic.
  • We are developing algorithms to generate multimodal explanations, particularly for a combination of visual and verbal explanations. For this purpose, we draw on research from cognitive science.

Areas of application currently being studied include image-based medical diagnostics, facial expression analysis, quality control in Manufacturing 4.0, crop phenotyping, and the automotive sector.

eki – Erklärbare KI (Explainable AI), © Jessica Deuschel

Our Fields of Research

Interpretable Machine Learning

 

  • Inductive logic programming (ILP)
  • Statistical relational learning
  • Probabilistic logic programming
  • Learning of fuzzy rules
  • Hybrid combination of white-box and black-box learning
  • Methods of rule extraction (see the sketch after this list)
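
One way to picture the hybrid combination of white-box and black-box learning, and rule extraction in particular, is a surrogate model: an opaque learner is trained first, and an interpretable model is then fitted to its predictions so that readable rules approximate its behavior. The following minimal sketch is an illustration of this idea only, not the project group's actual pipeline; it assumes scikit-learn and uses the Iris data set.

    # Minimal sketch: distill a black-box neural network into a white-box
    # decision tree whose rules can be read as a global explanation.
    # Illustrative only; not the project group's implementation.
    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    X, y = data.data, data.target

    # Black-box component: a small neural network.
    black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    black_box.fit(X, y)

    # White-box surrogate: a shallow decision tree fitted to the black box's
    # predictions, so the tree approximates the network's decision behavior.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # The extracted rules serve as a global, human-readable explanation.
    print(export_text(surrogate, feature_names=data.feature_names))

How faithfully such a surrogate reflects the underlying network is itself part of the research question; the sketch only shows the basic mechanism.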

 

Generation of Multimodal Explanations

 

  • Context-sensitive explanations rather than a one-size-fits-all approach
  • Combination of visual and verbal explanations
  • Contrastive examples, especially near misses (see the sketch after this list)
  • Explanation by means of prototypes
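
As a rough illustration of contrastive near misses: for a given query instance, a near miss is a maximally similar instance that nevertheless belongs to a different class, so the contrast shows which feature values tip the decision. The helper below is a minimal sketch under simple assumptions (NumPy arrays and a fitted scikit-learn-style classifier clf); the name nearest_near_miss is purely illustrative.

    import numpy as np

    def nearest_near_miss(clf, X_train, x_query):
        """Return the training instance closest to x_query that the classifier
        assigns to a different class, i.e. a contrastive "near miss"."""
        query_label = clf.predict(x_query.reshape(1, -1))[0]
        labels = clf.predict(X_train)
        candidates = X_train[labels != query_label]   # instances of other classes
        distances = np.linalg.norm(candidates - x_query, axis=1)
        return candidates[np.argmin(distances)]       # most similar contrasting case

Shown next to the original instance, such a near miss makes the relevant class boundary concrete; the same idea underlies the targeted selection of near misses in the cognitive tutoring systems described below.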

Interactive Learning by Means of Reciprocal Explanations

  • Human-AI partnerships based on human-in-the-loop systems
  • Explainable AI as an opportunity to control machine learning through human knowledge, rather than as a one-way process
  • Interactivity for areas in which ground-truth labeling is difficult (see the sketch after this list)
  • Intelligibility and participation, rather than autonomy, as the basis for trust in AI systems
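
One simple form this interaction can take is uncertainty-based querying: the learner repeatedly asks a human expert to label exactly those instances it is least sure about, which helps where ground-truth labels are scarce or costly. The sketch below covers only this querying side and omits the exchange of explanations; ask_expert is a placeholder for the actual human interaction, and the setup (logistic regression, NumPy arrays) is assumed for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def interactive_loop(X_labeled, y_labeled, X_pool, ask_expert, rounds=10):
        """Iteratively query a human expert for labels on the most uncertain instances."""
        model = LogisticRegression(max_iter=1000)
        for _ in range(rounds):
            model.fit(X_labeled, y_labeled)
            # Pick the pool instance the current model is least certain about.
            proba = model.predict_proba(X_pool)
            idx = int(np.argmin(proba.max(axis=1)))
            # Ask the human expert for the missing ground-truth label.
            y_new = ask_expert(X_pool[idx])
            X_labeled = np.vstack([X_labeled, X_pool[idx]])
            y_labeled = np.append(y_labeled, y_new)
            X_pool = np.delete(X_pool, idx, axis=0)
        return model

In the reciprocal setting described above, such queries would additionally be accompanied by the model's explanations, giving the expert a handle to steer the learning process rather than making it a one-way exchange.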

The Combination of Logic and Learning for Complex Areas of Application

 

  • Reducing data requirements by using existing knowledge to constrain the search space
  • Taking into account expert and commonsense knowledge
  • Transparency and robustness from interpretable models

Cognitive Tutoring Systems

 

  • Use of explanations in training systems for basic and advanced vocational education
  • Contrastive explanations to distinguish between different characteristics
  • Understanding of class boundaries through targeted selection of near misses

Partners and Projects

 

ADA Lovelace Center for Analytics, Data and Applications

New competence center for data analytics and AI in industry
The ADA Lovelace Center uniquely combines AI research with AI applications in industry. Here, the partners can network with one another, benefit from each other's know-how, and work on joint projects.

 

Project partner University of Bamberg

Prof. Dr. Ute Schmid heads the “Cognitive Systems” group at the University of Bamberg.

 

A networking platform for industry and research

Machine Learning Forum – Focusing on artificial intelligence

 

TraMeExCo

TraMeExCo (Transparent Medical Expert Companion) is a project funded by Germany’s Federal Ministry of Education and Research (BMBF). Its purpose is to investigate and develop suitable new methods to enable robust and explainable machine learning in complementary applications (digital pathology, pain analysis, cardiology) in the field of medical engineering.

News

White paper

Certification of AI systems

We would like to create a white paper on the certification of AI systems as part of the "Learning Systems" platform. It will build on the impulse paper already published on the subject.

Interview fortiss

Learning Data Minimization

At fortiss, Prof. Ute Schmid will be involved in the lead project “Robust AI” and will bring her expertise in the field of inductive programming to the institute. In the following interview, she explains why this area is so important.