Explainable Machine Learning in Medicine

Transparent AI Decisions in Medical Technology

Objective

The aim of the BMBF-funded TraMeExCo (Transparent Medical Expert Companion) project is to research and develop suitable new methods for robust and explainable AI (XAI) in complementary medical technology applications (digital pathology, pain analysis, cardiology). The proposed system is intended to help doctors and clinical staff make diagnoses and treatment decisions.

 

Partners

University of Bamberg | Professorship for Cognitive Systems

  • The University of Bamberg focuses its activities in this area on the conception, implementation and testing of methods to explain diagnostic system decisions using local interpretable model-agnostic explanations (LIME), layer-wise relevance propagation (LRP) and inductive logic programming (ILP).
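
The core idea behind LIME can be sketched in a few lines: perturb an instance, query the black-box model on the perturbations, and fit a locally weighted linear surrogate whose coefficients serve as the explanation. The Gaussian perturbation scheme, kernel width and the toy black-box function below are illustrative assumptions, not project code.

    # Minimal sketch of the LIME idea: a local surrogate fitted by weighted linear regression.
    # The perturbation scale, kernel width and black-box function are placeholders.
    import numpy as np
    from sklearn.linear_model import Ridge

    def lime_explain(predict_proba, x, num_samples=1000, kernel_width=0.75, seed=0):
        """Return per-feature weights of a local linear surrogate around instance x."""
        rng = np.random.default_rng(seed)
        # 1. Perturb the instance by adding Gaussian noise to each feature.
        Z = x + rng.normal(scale=0.1, size=(num_samples, x.shape[0]))
        # 2. Query the black-box model on the perturbed samples.
        y = predict_proba(Z)                      # probability of the class of interest
        # 3. Weight samples by proximity to x (exponential kernel on Euclidean distance).
        d = np.linalg.norm(Z - x, axis=1)
        w = np.exp(-(d ** 2) / (kernel_width ** 2))
        # 4. Fit an interpretable (linear) surrogate on the weighted neighbourhood.
        surrogate = Ridge(alpha=1.0)
        surrogate.fit(Z, y, sample_weight=w)
        # The coefficients approximate the local importance of each feature.
        return surrogate.coef_

    # Toy usage: a hypothetical black box that relies mostly on the first feature.
    x0 = np.zeros(4)
    toy_black_box = lambda Z: 1.0 / (1.0 + np.exp(-(3.0 * Z[:, 0] + 0.5 * Z[:, 2])))
    print(lime_explain(toy_black_box, x0))

In an image setting such as digital pathology, the same scheme is typically applied to superpixels rather than to raw feature values.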

Fraunhofer HHI | Department of Video Communication and Applications

  • Fraunhofer HHI is further refining layer-wise relevance propagation (LRP) approaches.
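
As a rough illustration of how LRP redistributes a network's output score back to its inputs, the following NumPy sketch applies the epsilon rule to a small fully connected ReLU network. The random weights and the simplified stabiliser (a plain +eps instead of eps·sign(z)) are assumptions for illustration only, not the refined LRP variants developed in the project.

    # Minimal sketch of layer-wise relevance propagation (LRP, epsilon rule) in NumPy.
    import numpy as np

    def lrp_epsilon(weights, biases, x, eps=1e-6):
        """Propagate the top-layer score back to the input features."""
        # Forward pass, storing the activations of every layer.
        activations = [x]
        a = x
        for W, b in zip(weights, biases):
            a = np.maximum(0.0, a @ W + b)   # ReLU layers
            activations.append(a)
        # Start with the relevance equal to the network output.
        R = activations[-1].copy()
        # Backward pass: redistribute relevance proportionally to each neuron's contribution.
        for W, b, a in zip(reversed(weights), reversed(biases), reversed(activations[:-1])):
            z = a @ W + b + eps              # stabilised pre-activations
            s = R / z                        # relevance per unit of pre-activation
            R = a * (s @ W.T)                # share of relevance flowing to the lower layer
        return R                             # per-input-feature relevance ("heat map")

    # Toy usage with random parameters (illustration only).
    rng = np.random.default_rng(0)
    weights = [rng.normal(size=(8, 6)), rng.normal(size=(6, 1))]
    biases = [np.zeros(6), np.zeros(1)]
    relevance = lrp_epsilon(weights, biases, rng.normal(size=8))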

Fraunhofer IIS | Smart Sensing and Electronics

  • The Digital Health Systems business unit investigates and implements few-shot learning and heat map approaches for digital pathology. Long short-term memory (LSTM) networks help to determine heart rate variability from noisy EKG and PPG data.
  • The Facial Analysis Solutions business unit is collaborating with the University of Bamberg to research Bayesian deep learning methods using pain videos. Researchers make use of the Facial Action Coding System (FACS) developed by Ekman and Friesen, in which each movement of individual facial muscles (an action unit), such as the contraction of the eyebrows, is described so that it can be interpreted and detected. The automatically detected presence of certain action units indicates pain.
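
A common, lightweight approximation to Bayesian deep learning is Monte-Carlo dropout: dropout stays active at inference time, several stochastic forward passes are averaged, and their spread serves as an estimate of model (epistemic) uncertainty. The sketch below assumes a toy regression head on automatically detected action-unit intensities; the architecture, feature dimension and data are placeholders, not the project's model.

    # Minimal sketch of Monte-Carlo dropout as an approximation to Bayesian deep learning.
    # The network, input size (one feature per detected FACS action unit) and the random
    # input below are illustrative placeholders.
    import torch
    import torch.nn as nn

    NUM_ACTION_UNITS = 12            # assumed number of AU intensity features per frame

    model = nn.Sequential(
        nn.Linear(NUM_ACTION_UNITS, 32),
        nn.ReLU(),
        nn.Dropout(p=0.2),
        nn.Linear(32, 1),            # scalar pain-intensity estimate
    )

    def mc_dropout_predict(model, x, n_samples=50):
        """Return the mean prediction and its spread over stochastic forward passes."""
        model.train()                # keep dropout layers active at inference time
        with torch.no_grad():
            samples = torch.stack([model(x) for _ in range(n_samples)])
        return samples.mean(dim=0), samples.std(dim=0)

    # Toy usage: one frame's automatically detected AU intensities (random placeholder).
    au_features = torch.rand(1, NUM_ACTION_UNITS)
    mean_pain, uncertainty = mc_dropout_predict(model, au_features)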

Background

The major challenges facing medical technology are

  • the reliable and understandable analysis, evaluation and interpretation of raw data (videos, EKG data, microscopy images), and
  • the transparent communication and explanation of system decisions to clinical staff.

These efforts are underpinned by suitable AI methods, such as deep learning, Bayesian deep learning and few-shot learning.

Since the input data often exhibits distortion artifacts, for instance caused by shifting light conditions or noise, integrating uncertainty modeling plays a major role in determining the margin of error of predictions. Two scenarios of routine clinical care (pain analysis, pathology) are addressed in case studies.
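
One standard way to quantify such a margin of error (following the usual Bayesian deep learning formulation, not a formula specific to this project) is to decompose the predictive variance over T stochastic forward passes into an aleatoric and an epistemic term:

    \[
    \operatorname{Var}[y \mid x] \;\approx\;
    \underbrace{\frac{1}{T}\sum_{t=1}^{T}\sigma_t^2(x)}_{\text{aleatoric (data noise)}}
    \;+\;
    \underbrace{\frac{1}{T}\sum_{t=1}^{T}\mu_t(x)^2 - \Big(\frac{1}{T}\sum_{t=1}^{T}\mu_t(x)\Big)^{2}}_{\text{epistemic (model uncertainty)}}
    \]

Here \(\mu_t(x)\) and \(\sigma_t^2(x)\) are the mean and variance predicted in the t-th pass; distortion artifacts mainly inflate the aleatoric term, while scarce or unfamiliar data inflates the epistemic term.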
 

Advancing Explainable AI in Medicine

Figure: Tumor budding (digital pathology).
Figure: TraMeExCo action unit (AU) classification to estimate pain. Image © Lucey, Patrick, et al., "Painful Data: The UNBC-McMaster Shoulder Pain Expression Archive Database," Face and Gesture 2011, IEEE, 2011.

Satisfying the clinical requirements of medical assistance systems hinges on making system decisions transparent and clearly explainable:

  • To this end, we draw on our experience in applying black-box methods such as deep neural networks (DNNs) to train classifiers that achieve a high degree of sensitivity (hit rate) and specificity (few false alarms) in detection tasks.
  • Integrating learning approaches such as Bayesian deep learning helps model uncertainties in the system and data (epistemic and aleatoric uncertainties).
  • Few-shot approaches can train machine learning models for classification, identification and segmentation from only a small number of labeled examples (see the prototype sketch after this list).
  • To enhance the performance of neural networks on sequential data, we augment them with long short-term memory (LSTM) units.
  • To make the decisions ultimately explainable, we investigate and apply methods such as heat maps, layer-wise relevance propagation (LRP) and inductive logic programming (ILP).
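
As a concrete illustration of the few-shot idea mentioned above, the sketch below builds class prototypes from a handful of labelled support examples and assigns queries to the nearest prototype. The embedding dimension, the random toy data and the absence of a learned feature extractor are simplifications for illustration, not the project's implementation.

    # Minimal sketch of a few-shot (prototypical) classifier: class prototypes are the mean
    # embeddings of a few labelled support examples; new samples go to the nearest prototype.
    # In practice the embeddings would come from a pre-trained feature extractor
    # (e.g. a CNN applied to pathology image patches); here they are random placeholders.
    import numpy as np

    def build_prototypes(support_embeddings, support_labels):
        """Average the support embeddings per class to obtain one prototype per class."""
        classes = np.unique(support_labels)
        return classes, np.stack([support_embeddings[support_labels == c].mean(axis=0)
                                  for c in classes])

    def classify(query_embeddings, classes, prototypes):
        """Assign each query to the class of its nearest prototype (Euclidean distance)."""
        d = np.linalg.norm(query_embeddings[:, None, :] - prototypes[None, :, :], axis=-1)
        return classes[d.argmin(axis=1)]

    # Toy usage: 5 labelled examples per class ("5-shot") in a 64-dimensional embedding space.
    rng = np.random.default_rng(0)
    support = np.concatenate([rng.normal(0, 1, (5, 64)), rng.normal(3, 1, (5, 64))])
    labels = np.array([0] * 5 + [1] * 5)
    classes, prototypes = build_prototypes(support, labels)
    predictions = classify(rng.normal(3, 1, (3, 64)), classes, prototypes)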
     

Publications

Press (German)