Development and Implementation of Human-AI Interaction in Hybrid Intelligence Systems to Reduce Biases

Project Goal

More and more companies are using data as the basis for important decisions. Today, however, data volumes are so large and complex that a human can no longer manage them alone. AI systems offer a way to process this accumulating data more efficiently.

To develop these AI systems, however, the knowledge of domain experts must first be transferred to the AI through labeled data. This process is prone to quality loss due to biases: systematically distorted perceptions, judgments, and actions. Biases can be transferred unconsciously during labeling and thus lead to biased decision-making by the AI systems.

The goal of the project was therefore to design and implement the interaction between humans and AI in hybrid intelligence systems in a way that reduces bias and noise and aggregates knowledge.
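
As an illustration of what such knowledge aggregation can look like, the following sketch combines the labels of several annotators by majority vote and flags high-disagreement items for expert review. The data, labels, and the `min_agreement` threshold are hypothetical; the project's actual aggregation methods are not specified here.

```python
from collections import Counter

# Hypothetical labels from three annotators for five items.
# Disagreement between annotators is one visible symptom of noise and bias.
annotations = {
    "item_1": ["spam", "spam", "spam"],
    "item_2": ["spam", "ham", "spam"],
    "item_3": ["ham", "spam", "spam"],
    "item_4": ["ham", "ham", "spam"],
    "item_5": ["ham", "ham", "ham"],
}

def aggregate(labels, min_agreement=2/3):
    """Majority vote; items below the agreement threshold are
    returned as None and should be re-examined by a domain expert."""
    label, count = Counter(labels).most_common(1)[0]
    agreement = count / len(labels)
    return (label if agreement >= min_agreement else None), agreement

for item, labels in annotations.items():
    label, agreement = aggregate(labels)
    print(f"{item}: label={label}, agreement={agreement:.2f}")
```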

Our Contribution

The Comprehensible AI project group developed methods for Explainable Artificial Intelligence (XAI). We focused on revealing the decision mechanisms of models and thereby identifying biases.
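
A minimal sketch of this idea, using permutation feature importance from scikit-learn: if the explanation shows that a model leans heavily on a sensitive attribute, that is a signal of a potentially biased decision mechanism. The data and feature names here are hypothetical and stand in for the project's actual use cases.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical data: the label leaks through a sensitive attribute,
# mimicking a bias introduced during labeling.
n = 1000
X = rng.normal(size=(n, 3))      # columns: skill, experience, sensitive
y = (X[:, 2] > 0).astype(int)    # label driven by the sensitive column

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance reveals which inputs the model's decisions rely on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["skill", "experience", "sensitive"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
# A large score for "sensitive" flags a biased decision mechanism.
```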

In identifying bias, we also went a step beyond the machine learning algorithms themselves: XAI algorithms can induce biases in the explanations they produce. To address this problem, we extended existing algorithms and enriched them with formalized background knowledge, and we looked for new solutions that take both human and data knowledge into account to reduce bias.
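
One simple way to combine human and data knowledge along these lines is to compare the attributions an XAI method produces against formalized expert expectations, so that implausible explanations are flagged rather than trusted blindly. The following sketch is illustrative only; the expected-sign table, the attribution values, and all names are assumptions, not the project's actual formalism.

```python
# Formalized background knowledge: the expected direction of influence
# of each feature on the prediction, as stated by domain experts.
# (Hypothetical example; a real formalism would be richer.)
expected_sign = {"income": +1, "debt": -1, "age": 0}  # 0 = no expectation

# Attributions produced by some XAI method (e.g., SHAP values averaged
# over a dataset); hypothetical numbers.
attributions = {"income": 0.42, "debt": 0.31, "age": -0.05}

def check_explanation(attributions, expected_sign, tol=0.05):
    """Flag attributions that contradict expert background knowledge."""
    conflicts = []
    for feature, value in attributions.items():
        sign = expected_sign.get(feature, 0)
        if sign != 0 and abs(value) > tol and (value > 0) != (sign > 0):
            conflicts.append(feature)
    return conflicts

print(check_explanation(attributions, expected_sign))
# ['debt'] -> the explanation claims debt raises the score, which
# contradicts the experts' expectation and warrants closer inspection.
```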

Project Partners

  • Fraunhofer IIS (Project group Comprehensible Artificial Intelligence)
  • vencortex UG
  • Greple GmbH