Comprehensible Artificial Intelligence

Transparency and Intelligibility of AI Systems

For machine learning to be used in practice, it is vital that such applications are intelligible and explainable

Explainable AI is a key topic of current AI research – the “third wave of AI,” following the first wave (knowledge-based systems, “describing”) and the second wave (statistical learning). It is becoming increasingly obvious that purely data-driven machine learning is unsuitable in many areas of application unless it is combined with further methods.

In collaboration with the University of Bamberg, Fraunhofer IIS has set up a “Comprehensible Artificial Intelligence” project group. Its purpose is to develop explainable machine learning methods:

  • We are working on hybrid approaches to machine learning that combine black-box methods, such as (deep) neural networks, with methods applied in interpretable machine learning (white-box methods). Such methods enable a combination of logic and learning – and, in particular, a type of learning that integrates human knowledge.
  • We are developing methods of interactive and incremental learning for areas of application in which there is very limited data available and the labeling of that data is problematic.
  • We are developing algorithms to generate multimodal explanations, particularly for a combination of visual and verbal explanations. For this purpose, we draw on research from cognitive science.
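One established white-box technique in this space is surrogate modeling: a simple, interpretable model is fitted to the input–output behavior of a black box, yielding a human-readable approximation of its decision rule. The sketch below is a minimal, hypothetical illustration (all names are ours, not project code): a one-threshold decision stump is fitted to the predictions of an opaque scoring function that stands in for a trained neural network.

```python
# Hypothetical sketch of surrogate modeling: a white-box decision stump is
# fitted to the predictions of a black box. The "black box" here is just an
# opaque scoring function standing in for a trained neural network.

def black_box(x):
    # Stand-in for an opaque model: the hidden rule is unknown to the surrogate.
    return 1 if 0.7 * x[0] + 0.3 * x[1] > 0.5 else 0

def fit_stump(samples, labels):
    """Fit an interpretable rule 'feature f > threshold t' minimizing errors."""
    best = None
    for f in range(len(samples[0])):
        for t in sorted({s[f] for s in samples}):
            errors = sum(int((s[f] > t) != bool(y))
                         for s, y in zip(samples, labels))
            if best is None or errors < best[0]:
                best = (errors, f, t)
    return best  # (disagreements with the black box, feature index, threshold)

# Probe the black box on a grid of inputs and fit the surrogate to its answers.
samples = [(i / 10, j / 10) for i in range(11) for j in range(11)]
labels = [black_box(s) for s in samples]
errors, feature, threshold = fit_stump(samples, labels)
print(f"surrogate rule: feature {feature} > {threshold} "
      f"({errors} disagreements on {len(samples)} probes)")
```

The surrogate recovers a readable rule (here, a threshold on the most influential input) while quantifying how faithfully it mimics the black box; in practice, richer white-box models such as decision trees or rule sets play this role.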


Explainable AI Video Podcast

Explore our video podcast and actively participate in shaping the future of AI in industry

Publications on Artificial Intelligence

Here you can find our publications, structured by year.

Our Fields of Research


Partners and Projects


Project HIX

The goal of the HIX funding project is to develop and implement human-AI interaction in hybrid intelligence systems for bias and noise reduction and knowledge aggregation.

Duration: October 2021 - September 2023


Project hKI-Chemie

The goal of the hKI-Chemie project is AI-supported data processing in the chemical industry: employees are to be supported in identifying process problems at an early stage and in selecting suitable solutions.

Duration: June 2021 - June 2024


Project TraMeExCo

TraMeExCo (Transparent Medical Expert Companion) is a project funded by Germany’s Federal Ministry of Education and Research (BMBF). Its purpose is to investigate and develop suitable new methods to enable robust and explainable machine learning in complementary applications in the field of medical engineering.

Duration: September 2018 - August 2021


ADA Lovelace Center for Analytics, Data and Applications

New competence center for data analytics and AI in industry
The ADA Lovelace Center uniquely combines AI research with AI applications in industry. Here the partners can network with each other, benefit from each other’s know-how and work on joint projects.


Project partner University of Bamberg

Prof. Dr. Ute Schmid heads the “Cognitive Systems” group at the University of Bamberg


June 21, 2022 from 2:30 pm to 3:30 pm at i_space in Hall B4 @automatica

Ethics Round Table on June 21

The panel discussion “Teaching AI – Opportunities and Challenges of Educational Institutions Regarding Responsible Research and Development Today for the Technological Innovations of Tomorrow,” led by Prof. Dr. med. Alena Buyx, addresses the ethical issues associated with the development and use of AI-based technological innovations in the educational environment.

Interview fortiss

Learning Data Minimization

At fortiss, Prof. Ute Schmid will be involved in the lead project “Robust AI” and will bring her expertise in the field of inductive programming to the institute. In the following interview, she explains why this area is so important.

White paper

Certification of AI systems

As part of the “Learning Systems” platform, we would like to create a white paper on the certification of AI systems, building on the impulse paper already published on the subject.