When we interviewed you for the magazine in 2020, you explained the setup of the Project Group for Comprehensible AI. At that time, the group consisted of you and a postdoc. What has it gone on to achieve since then?
At the start, we received funding from the German Federal Ministry of Education and Research (BMBF) and the Stifterverband for an AI Campus project. The objective was to finance the development and implementation of online learning opportunities in the field of AI. At CAI, we launched an online course entitled “Explainable AI for Engineering,” for which there is plenty of demand. Overall, we’ve broadened our scope beyond our original focus on explainable AI. For example, we investigate topics related to the “third wave” of AI, which, in addition to explainability, includes methods of interactivity and especially hybrid approaches that combine machine learning with knowledge-based methods. We’ve developed several demonstrators on these topics. The use cases come from the business world, too – we have good contacts with companies in the automotive sector and in digital medicine. In both areas, our focus is on hybrid, explainable and interactive AI for image-based diagnostics. For example, we combine visual explanations with more complex linguistic explanations. Our submissions on these approaches were accepted at various conferences this year – for example, at Innovative Applications of AI (IAAI-22), which is part of AAAI-22.
We were also fortunate to be involved in a large BMBF project on human-centered AI (hKI-Chemie) and were able to hire another employee for it, Emanuel Slany. He had previously worked at Fraunhofer as a student assistant, and he is now doing his doctorate in the field of explainable AI.