Comprehensible Artificial Intelligence

Transparency and Intelligibility of AI Systems

For machine learning to be used in practice, it is vital that such applications be intelligible and explainable.

Explainable AI is a key topic of current AI research – the “Third wave of AI,” following on from “Describing” (first wave: knowledge-based systems) and “Statistical learning” (second wave). It is becoming increasingly clear that purely data-driven machine learning is unsuitable in many areas of application unless it is combined with further methods.

In collaboration with the University of Bamberg, Fraunhofer IIS has set up a “Comprehensible Artificial Intelligence” project group. Its purpose is to develop explainable machine learning methods:

  • We are working on hybrid approaches to machine learning that combine black-box methods, such as (deep) neural networks, with interpretable (white-box) methods. Such approaches enable a combination of logic and learning – and, in particular, a type of learning that integrates human knowledge (a minimal sketch follows this list).
  • We are developing methods of interactive and incremental learning for areas of application in which there is very limited data available and the labeling of that data is problematic.
  • We are developing algorithms to generate multimodal explanations, particularly for a combination of visual and verbal explanations. For this purpose, we draw on research from cognitive science.
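
To make the hybrid idea concrete, the following minimal sketch (an illustration under our own assumptions, not code from the project group) trains a neural network as a black box and then fits a shallow decision tree to the network’s predictions as an interpretable surrogate; data set, model sizes, and parameters are arbitrary toy choices.

```python
# Illustrative sketch: distill a black-box neural network into an
# interpretable decision tree ("global surrogate"). All choices are toy values.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

# Black-box model
black_box = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                          random_state=0).fit(X, y)

# White-box surrogate: fit a shallow tree to the *network's* predictions,
# not to the original labels, so that the tree explains the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("Fidelity to the black box:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=["x1", "x2"]))
```

Because the surrogate is fitted to the network’s predictions rather than the original labels, its agreement with the network (its fidelity) indicates how faithfully the extracted rules describe the black box.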

Our Fields of Research

Hybrid AI for Complex Application Domains

  • Combining logic and learning, knowledge-based and data-driven AI
  • Neuro-symbolic AI
  • Reducing data requirements by using existing knowledge to limit the search space (see the sketch after this list)
  • Taking into account expert and common sense knowledge
  • Transparency and robustness from interpretable models
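
As an illustration of the neuro-symbolic idea, the following sketch (our own toy construction, not one of the group’s systems) adds an invented expert rule as a soft penalty to the training loss of a small network, so that less has to be learned from data alone; rule, data, and weighting are assumptions made for the example.

```python
# Illustrative sketch: inject expert knowledge as a soft logical
# constraint on a neural network's training objective.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(64, 2)                       # deliberately small training set
y = ((X[:, 0] + 0.3 * X[:, 1]) > 0).float()  # toy ground truth

model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
bce = nn.BCEWithLogitsLoss()

rule_applies = X[:, 0] > 0.5   # invented rule: "if x1 > 0.5, the class is positive"

for _ in range(300):
    logits = model(X).squeeze(1)
    loss = bce(logits, y)
    if rule_applies.any():
        # Penalize low positive probability where the expert rule fires.
        p_pos = torch.sigmoid(logits[rule_applies])
        loss = loss + 0.5 * (1.0 - p_pos).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```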

Interactive Learning by Means of Reciprocal Explanations

  • Human-AI partnerships based on human-in-the-loop systems
  • Explainable AI as an opportunity to control machine learning through human knowledge, rather than as a one-way process
  • Interactivity for areas in which ground-truth labeling is difficult (a minimal sketch follows this list)
  • Intelligibility and participation rather than autonomous systems as a basis for trust in AI systems
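
A minimal human-in-the-loop sketch with invented data and parameters: the model repeatedly queries the label of the instance it is least certain about (uncertainty sampling); the human oracle is simulated here by simply looking up the true label.

```python
# Illustrative sketch of interactive learning by uncertainty sampling.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y_true = make_classification(n_samples=200, n_features=5, random_state=0)

# Start with one labeled example per class; everything else is unlabeled.
labeled = [int(np.flatnonzero(y_true == c)[0]) for c in (0, 1)]
unlabeled = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):                          # twenty questions to the "human"
    model.fit(X[labeled], y_true[labeled])
    # Query the unlabeled instance the model is least sure about.
    probs = model.predict_proba(X[unlabeled])
    query = unlabeled[int(np.argmax(1.0 - probs.max(axis=1)))]
    labeled.append(query)                    # the oracle answers with y_true[query]
    unlabeled.remove(query)

print("Accuracy after 20 queries:", model.score(X, y_true))
```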

Interpretable Machine Learning

  • Inductive logic programming (ILP); see the simplified rule-learning sketch after this list
  • Statistical relational learning
  • Probabilistic logic programming
  • Learning of fuzzy rules
  • Hybrid combination of white-box and black-box learning
  • Methods of rule extraction
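
ILP systems learn rules in first-order logic; as a heavily simplified illustration of the underlying rule-learning idea, the following propositional sketch (a toy construction of ours, not one of the listed methods) greedily grows readable IF-THEN rules by sequential covering over binary features.

```python
# Illustrative sketch: sequential covering over binary features.
import numpy as np

def learn_rules(X, y, feature_names):
    """Greedily learn conjunctive IF-THEN rules that cover the positives."""
    X, y = X.astype(bool), y.astype(bool)
    rules, pos_left = [], y.copy()
    while pos_left.any():
        covered = np.ones(len(y), dtype=bool)     # current rule covers everything
        conds = []
        while (covered & ~y).any():               # rule still covers negatives
            best, best_prec = None, -1.0
            for j in range(X.shape[1]):           # pick the literal with the
                for val in (True, False):         # highest precision on positives
                    cov = covered & (X[:, j] == val)
                    n_pos = int((cov & pos_left).sum())
                    if n_pos and n_pos / cov.sum() > best_prec:
                        best, best_prec = (j, val), n_pos / cov.sum()
            if best is None:
                break
            new_cov = covered & (X[:, best[0]] == best[1])
            if new_cov.sum() == covered.sum():    # no progress: accept impure rule
                break
            covered = new_cov
            conds.append(f"{feature_names[best[0]]}={str(best[1]).lower()}")
        rules.append(" AND ".join(conds) or "TRUE")
        pos_left &= ~covered                      # remove the covered positives
    return rules

# Toy data: positive iff (rainy AND NOT umbrella)
X = np.array([[1, 0], [1, 1], [0, 0], [0, 1], [1, 0]])
y = np.array([1, 0, 0, 0, 1])
for rule in learn_rules(X, y, ["rainy", "umbrella"]):
    print("IF", rule, "THEN positive")
```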

Generation of Multimodal Explanations

  • Context-sensitive approaches rather than one-size-fits-all explanations
  • Combination of visual and verbal explanations
  • Contrastive examples, especially near misses
  • Explanation by means of prototypes (prototypes and near misses are sketched after this list)
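
The following sketch illustrates example-based explanation with invented simplifications: a “prototype” is taken here to be the nearest training example of the predicted class, a “near miss” the nearest example of a different class, using plain Euclidean distance.

```python
# Illustrative sketch: prototype and near-miss retrieval for a query instance.
import numpy as np
from sklearn.datasets import load_iris

data = load_iris()
X, y = data.data, data.target

def explain_by_examples(query, predicted_class):
    dists = np.linalg.norm(X - query, axis=1)   # Euclidean distance, for simplicity
    same = np.flatnonzero(y == predicted_class)
    other = np.flatnonzero(y != predicted_class)
    prototype = same[np.argmin(dists[same])]    # closest example of the same class
    near_miss = other[np.argmin(dists[other])]  # closest example of another class
    return prototype, near_miss

query = X[0] + 0.1                              # a slightly perturbed iris flower
proto, miss = explain_by_examples(query, predicted_class=y[0])
print("Prototype:", X[proto], data.target_names[y[proto]])
print("Near miss:", X[miss], data.target_names[y[miss]])
```

A near miss that differs from the query in only a few attributes makes the class boundary tangible – the same principle the tutor systems below exploit.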

Intelligent Tutor Systems (ITS)

  • Use of explanations in tutoring systems for basic and advanced vocational training
  • Contrastive explanations to distinguish between different characteristics
  • Understanding of class boundaries through targeted selection of near misses

Partners and Projects

Project HIX

The goal of the HIX funding project is to develop and implement human-AI interaction in hybrid intelligence systems for bias and noise reduction and knowledge aggregation.

Duration: October 2021 - September 2023

Project hKI-Chemie

The goal of the hKI-Chemie project is AI-supported data processing in the chemical industry: employees are to be supported in identifying process problems at an early stage and in selecting suitable solutions.

Duration: June 2021 - June 2024

TraMeExCo

TraMeExCo (Transparent Medical Expert Companion) is a project funded by Germany’s Federal Ministry of Education and Research (BMBF). Its purpose is to investigate and develop suitable new methods to enable robust and explainable machine learning in complementary applications in the field of medical engineering.

Duration: September 2018 - August 2021

ADA Lovelace Center for Analytics, Data and Applications

New competence center for data analytics and AI in industry
The ADA Lovelace Center uniquely combines AI research with AI applications in industry. Here, the partners can network with each other, benefit from each other's know-how, and work on joint projects.

Project partner: University of Bamberg

Prof. Dr. Ute Schmid heads the “Cognitive Systems” group at the University of Bamberg.

News

June 21, 2022, from 2:30 pm to 3:30 pm at i_space in Hall B4 @automatica

Ethics Round Table on June 21

The discussion “Teaching AI – Opportunities and Challenges of educational institutions regarding responsible research and development today for the technological innovations of tomorrow,” led by Prof. Dr. med. Alena Buyx, addresses the ethical issues associated with the development and use of AI-based technological innovations within the educational environment.

Interview fortiss

Learning Data Minimization

At fortiss, Prof. Ute Schmid will be involved in the lead project “Robust AI” and will bring her expertise in the field of inductive programming to the institute. In the following interview, she explains why this area is so important.

White paper

Certification of AI systems

Within the "Learning Systems" platform, we would like to create a white paper on the certification of AI systems. It will build on the previously published impulse paper on the subject.