AI series: Making machine learning explainable and transparent

2.12.2020 | Professor Ute Schmid has headed the Project Group Explainable AI (EAI) since January 2020. Schmid and her team explore ways to help users understand machine learning.

The goal of artificial intelligence research is to emulate general principles of intelligent human behavior by means of algorithms. Yet the decisions such algorithms reach are widely seen as lacking transparency. The EAI group researches methods to make the decisions reached by AI systems transparent and understandable. We spoke to Professor Schmid about her day-to-day work, her main research interests, the future, and the cooperations her Project Group Explainable AI is involved in.

Interview with Professor Ute Schmid



First of all, thanks for taking the time for this interview. Let’s get started. Prof. Schmid, you’re a psychologist by training and hold the Chair of Applied Computer Science for Cognitive Systems at the University of Bamberg. How do these two disciplines influence your research? Do they overlap at all? What benefits does that bring?
Prof. Ute Schmid: I’m so glad that I decided to study psychology and then computer science as well at the Technical University of Berlin. I wrote a thesis in both disciplines and was able to incorporate both perspectives in my doctoral work, which also had a substantial experimental psychology component. After finishing my doctorate, I immersed myself fully in AI, particularly in machine learning methods. After all, the aim of AI research is to replicate the general principles of human intelligence and behavior using algorithms. In my day-to-day research and teaching, I benefit time and again from this second perspective, as well as from my knowledge of cognitive science theories and experimental psychology research.

Prof. Schmid, you head the Explainable AI project group, which is the product of a collaboration between the University of Bamberg and Fraunhofer IIS. Can you tell us more about this project group? Who is involved, where is it based, what’s a normal working day like?
Prof. Ute Schmid: Besides my full-time teaching position at the University of Bamberg, I spend a few hours running the project group. I’ve been working with Fraunhofer IIS for many years, supervising master’s theses as well as doctoral students, so I was thrilled that we were able to expand our collaboration further and make it official through this Explainable AI project group. Currently, it’s me and a postdoc, Dr. Stefan Scheele, who divides his time between the Bamberg site, where we now have a nice office space, and the Erlangen-Tennenlohe site. Essentially, we take care of acquiring and carrying out projects: projects that the usual third-party funding bodies put out to tender and – something I’m particularly pleased about – collaborations within Fraunhofer, which many colleagues approach us about. We’re already starting to collaborate, especially on machine learning projects, making sure that explainability is really factored in. After all, it’s becoming increasingly clear that explainability is essential if you want to apply AI research in practice.

And what about industry? Have you already seen some interest there?
Prof. Ute Schmid: Yes, we’ve actually had strong interest from industry. And although we do have master’s projects and some doctorates that would fit the bill, at the moment the coronavirus situation means that we still don’t have a project with third-party funding. But I’m optimistic that will change soon, especially since the companies are from such diverse industries.

What objectives is the project group pursuing? What key areas does your research focus on?
Prof. Ute Schmid: One focus – and at the moment, the most important one – is combining data-intensive deep learning methods with symbolic, interpretable machine learning methods to make black-box decisions transparent and traceable. We’re developing methods to extract interpretable information from deep neural networks in the first place. At the same time, we’re working on methods to generate explanations that adapt to the context. For example, who is the explanation for: the developer, a subject-matter expert, or an end user? A quick visualization, a detailed verbal explanation, a prototypical example, a near-miss example that shows “if X and Y were different, this would no longer be an acceptable part,” and so on – drawing on a wide range of modalities can be especially helpful in explaining something effectively in a given context.
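To give a feel for what combining a data-intensive black-box learner with a symbolic, interpretable one can look like in code, here is a minimal sketch of one common variant of the idea: a shallow decision tree trained as a global surrogate that mimics a neural network’s predictions and yields human-readable if-then rules. The synthetic data, model sizes, and scikit-learn setup are illustrative assumptions on our part, not the project group’s actual method.

# Minimal sketch: extract interpretable rules from a black-box model by
# training a shallow decision tree as a global surrogate. Illustrative only.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for image- or sensor-derived features.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)

# Black-box model: a small multilayer perceptron.
black_box = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                          random_state=0).fit(X, y)

# Symbolic surrogate: a shallow tree trained to mimic the network's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The tree yields human-readable if-then rules approximating the network.
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(6)]))
print("fidelity:", surrogate.score(X, black_box.predict(X)))

The printed tree approximates the network with explicit rules, and the fidelity score indicates how faithfully the surrogate mimics the black box.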

Do you have a specific example to better illustrate that?
Prof. Ute Schmid: Image-based diagnostics illustrate this nicely. We already have a BMBF project, TraMeExCo, in collaboration with Fraunhofer, which is about explainability for medical diagnoses. But you can illustrate it just as well with quality control in industry, which is often image-based too. Let me use industrial quality assurance as an example: say you have the final quality control check in production, and you need to inspect a wheel rim to decide whether it’s acceptable or should be scrapped. As a developer, you first want to know whether the system recognizes something as a reject based on the right information. This is where visualization methods such as layer-wise relevance propagation, LRP, developed by colleagues at the Heinrich Hertz Institute, HHI, in Berlin can help: if a part is classified as scrap, you can check that the method actually highlights the damaged area – a scratch, a bubble or similar – and not the background, which would indicate sampling bias. But you could also explain to a quality engineer: although there is a scratch here, I did not classify this part as scrap because the scratch is less than 2.5 millimeters long and sits in a hidden, non-load-bearing area. That’s just a made-up example, not necessarily from practice. That would be a verbal explanation. Alternatively, you might say that if the scratch had been a bit further in, then the part would have been scrapped. That’s a near-miss example. I hope this shows how varied human explanations can be and what we would like to implement in machines.
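To make the near-miss idea concrete, here is a toy sketch in Python. The 2.5-millimeter tolerance and the load-bearing-area condition come straight from the made-up example above; the function names and explanation texts are hypothetical additions of ours, not an actual inspection system.

# Toy illustration of verbal and near-miss explanations for the
# (made-up) rim inspection rule from the interview.

SCRATCH_LIMIT_MM = 2.5  # hypothetical tolerance from the interview's example

def is_scrap(scratch_len_mm: float, in_load_bearing_area: bool) -> bool:
    """A scratch only makes the part scrap if it is too long
    or sits in a load-bearing area."""
    return scratch_len_mm >= SCRATCH_LIMIT_MM or in_load_bearing_area

def explain(scratch_len_mm: float, in_load_bearing_area: bool) -> str:
    """Return a verbal explanation; for accepted parts, add a near-miss
    contrast: the minimal change that would flip the verdict."""
    if is_scrap(scratch_len_mm, in_load_bearing_area):
        return (f"Scrapped: scratch of {scratch_len_mm} mm reaches "
                f"{SCRATCH_LIMIT_MM} mm or lies in a load-bearing area.")
    return (f"Accepted: scratch is {scratch_len_mm} mm, below the "
            f"{SCRATCH_LIMIT_MM} mm limit, and outside load-bearing areas. "
            f"Near miss: at {SCRATCH_LIMIT_MM} mm or more, or inside a "
            f"load-bearing area, this part would have been scrapped.")

print(explain(1.8, in_load_bearing_area=False))

The near-miss sentence is produced by asking which minimal change to the inputs would flip the rule’s verdict – exactly the contrastive style of explanation described above.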

Thanks for that example. Back to your project group: Can you tell us about collaborations with other organizations, companies, or other projects that you have established or are pursuing?
Prof. Ute Schmid: I can’t name any companies specifically, but some of them are automotive suppliers. Companies from completely different sectors, such as the medical and pharmaceutical industries, are also showing an interest. Within Fraunhofer, we already have an excellent network of contacts, for example in image sensor technology and SSE, especially within our own institute. We’re also collaborating on a project that looks at cognitive smart sensors, and we’re working with Fraunhofer IWS in Dresden on explainable AI for synthetic data. Here, too, we’d like to move in the direction of explainability for time series classification, which makes sense for quality control in industrial production.

Explainability for synthetic data? Can you tell us more about that?
Prof. Ute Schmid: I can’t say any more because the proposal is just being submitted today.

Understandable. We’ll keep our fingers crossed for you. The project group is only just starting out with its research activities. What are the next steps? What’s still in store for 2020? Do you already have plans for 2021?
Prof. Ute Schmid: Currently, we’re working very hard and enthusiastically on a project funded by the BMBF and Stifterverband called the AI Campus. It aims to turn various AI topics into online learning units for different target groups, and our topic fits in very well here. We were accepted with a project called “Explainable AI for engineering,” in which we present different approaches through small learning units, hands-on examples and so on. We still have some work to do there. At the moment, we’re also pursuing several third-party funding opportunities: not just direct cooperations within Fraunhofer, but also BMBF proposals, several applications for Bavarian funding with colleagues from Nuremberg Tech, and a DFG application. We make every effort to support our research with third-party funding wherever possible. One reason we’re in a strong position, of course, is that not only do I come from the university side, but so does Stefan Scheele. He has a lot of industry experience, but above all a background in academic research – and that’s where our strength lies. We’re of course delighted to be working on more and more industry projects with our Fraunhofer colleagues. I should also mention that I’m currently supervising numerous doctoral students at Fraunhofer, almost all of them at the Tennenlohe site. One of them, Teena Hassan, did an excellent job at her thesis defense this summer, and the others will hopefully follow next year and the year after.

We’ll certainly keep our fingers crossed for you. For the last question, I’d like to ask you to complete the following sentence: In ten years, our project group will be...
Prof. Ute Schmid: In an excellent position. We’ll be successfully driving methods development and transfer into practice for the third wave of AI – in other words, approaches to explainability and interactive, collaborative machine learning.

Thank you for speaking with us. We wish you and your project group every success and are very excited to see what you will achieve.
Prof. Ute Schmid: Thank you, and thanks for the thought-provoking questions.
