Development and Implementation of Human-AI Interaction in Hybrid Intelligence Systems to Reduce Biases

Project Goal

More and more companies use data as the basis for important decisions. By now, however, data volumes have become so large and complex that a human can no longer manage them alone. AI systems offer a way to process this accumulating data more efficiently.

To develop these AI systems, the necessary knowledge of domain experts must first be transferred to the AI through labeled data. This process, however, is prone to quality loss due to biases: systematically erroneous perceptions, judgments, and actions. These can be transferred unconsciously while labeling the data and thus lead to biased decision-making in the resulting AI systems.
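A first step toward catching such labeling bias is to compare label statistics across annotators and item groups before training. The sketch below is purely illustrative; the annotator names, groups, and labels are invented to show the idea, not data from the project:

```python
from collections import Counter

# Hypothetical annotations as (annotator, item_group, label) triples.
# In a real labeling campaign these would come from the annotation tool.
annotations = [
    ("ann_A", "group_1", 1), ("ann_A", "group_1", 1), ("ann_A", "group_2", 0),
    ("ann_A", "group_2", 0), ("ann_B", "group_1", 1), ("ann_B", "group_1", 0),
    ("ann_B", "group_2", 1), ("ann_B", "group_2", 1),
]

def positive_rate_by(annotations, key_index):
    """Share of positive labels per annotator (key_index=0) or item group (key_index=1)."""
    totals, positives = Counter(), Counter()
    for row in annotations:
        key = row[key_index]
        totals[key] += 1
        positives[key] += row[2]
    return {k: positives[k] / totals[k] for k in totals}

# Large gaps between annotators labeling comparable data hint at individual bias;
# large gaps between item groups may hint at data or sampling bias.
print(positive_rate_by(annotations, 0))  # per annotator
print(positive_rate_by(annotations, 1))  # per item group
```

Such simple rate comparisons do not prove bias, but they flag where a closer look (or a second, independent annotation pass) is warranted.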

The goal of this project is therefore to design and implement human-AI interaction in hybrid intelligence systems in order to reduce bias and noise and to aggregate knowledge.

Our Contribution

The Comprehensible AI project group develops methods for Explainable Artificial Intelligence (XAI). We focus on revealing the decision mechanisms of models and thereby identifying biases. In doing so, we do not restrict ourselves to machine learning algorithms.

Instead, we go one step further:
We are aware that XAI algorithms themselves can induce biases in explanations. To address this problem, we extend existing algorithms and enrich them with formalized background knowledge. We are also looking for new solutions that take human knowledge and knowledge extracted from data equally into account when reducing biases.
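One basic XAI tool for revealing which features a model's decisions lean on, and hence where a learned bias might hide, is permutation importance. The following is a minimal, self-contained sketch; the toy model, its weights, and the data are illustrative assumptions, not the project's actual methods:

```python
import random

def model_predict(x):
    # Toy "trained" classifier that deliberately over-weights feature 0;
    # a real model would be learned from (possibly biased) labeled data.
    return 1 if 2.0 * x[0] + 0.5 * x[1] > 1.0 else 0

# Illustrative evaluation set: ([feature_0, feature_1], true_label).
data = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 1.0], 1),
        ([0.0, 0.0], 0), ([0.9, 0.2], 1), ([0.1, 0.8], 0)]

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

def permutation_importance(predict, data, feature, seed=0):
    """Accuracy drop when one feature column is randomly shuffled."""
    rng = random.Random(seed)
    xs = [list(x) for x, _ in data]
    ys = [y for _, y in data]
    column = [x[feature] for x in xs]
    rng.shuffle(column)
    for x, v in zip(xs, column):
        x[feature] = v
    return accuracy(predict, data) - accuracy(predict, list(zip(xs, ys)))

for f in (0, 1):
    print(f"feature {f}: importance {permutation_importance(model_predict, data, f):.2f}")
```

If a heavily weighted feature turns out to be a sensitive or spurious attribute, that is a concrete, inspectable signal of bias; formalized background knowledge can then be used to decide whether the dependence is legitimate.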


Project Partners

  • Fraunhofer IIS (Project group Comprehensible Artificial Intelligence)
  • vencortex UG
  • Greple GmbH