Making Emotions Measurable: Intelligent Analysis of Multimodal Data

Unconscious emotional reactions triggered by external impressions have a great influence on how we perceive our environment and make decisions. These processes are complex and occur subconsciously, which makes them difficult to detect.
To describe and understand these conscious and subconscious reactions, psychology characterizes emotions in three categories:

  • Physical reactions: e.g., an accelerated heartbeat or a fine film of sweat on the skin.
  • Subjective experience: how a person perceives a situation based on individual experience.
  • Behavior: e.g., a smile or stressed typing on the keyboard.

Our goal is to analyze emotional reactions as holistically as possible and to interpret them objectively by intelligently evaluating these three perspectives. On this basis, we can develop human-technology interfaces that recognize the user's emotional state and respond appropriately.
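As an illustration, the three perspectives can be captured together in a single observation record per subject and stimulus. The following Python sketch shows one possible structure; the field names and units are illustrative assumptions, not a fixed schema:

    from dataclasses import dataclass

    @dataclass
    class EmotionObservation:
        """Bundles the three perspectives on an emotional reaction.
        Field names and units are illustrative assumptions."""
        # Physical reaction: physiological signals
        heart_rate_bpm: float
        skin_conductance_us: float    # electrodermal activity in microsiemens
        # Subjective experience: self-report, e.g. valence on a 1-9 scale
        self_report_valence: int
        # Behavior: an observable expression, e.g. a detected facial action
        facial_expression: str

    obs = EmotionObservation(
        heart_rate_bpm=88.0,
        skin_conductance_us=4.2,
        self_report_valence=7,
        facial_expression="smile",
    )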

To this end, we conduct studies on the multimodal analysis of psychophysiological user responses.

We support our clients and partners in better understanding unconscious user reactions by comprehensively analyzing and visualizing emotional states:

  • Customized study design
  • High accuracy through precise data acquisition
  • Multimodal data fusion
  • AI-based analysis of the multimodal data sets
  • Transparency by handing over the complete data set including analysis (no "black box")

Data Acquisition

High-quality data is the basis for the sound classification of emotions with AI-based algorithms. For this purpose, we develop scientifically rigorous study designs that are precisely tailored to the individual research question. On this basis, we select a group of subjects that reflects the target group and define the measurement modalities for data acquisition.

Multimodal data acquisition
© Fraunhofer IIS
Simple study implementation – fast results

Fast and flexible study setup: The multimodal measurement booth enables an efficient setup with synchronized data acquisition. Specific measurement modalities can be added depending on requirements and customer needs.

Reliable emotion recognition: The multimodal data is recorded in a synchronized manner so that stimulus and response can be accurately mapped to each other. AI-based fusion of the diverse input data gives us a more comprehensive and accurate picture of the subjects' state.
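As a simplified illustration of this synchronization, the following Python sketch aligns a stimulus log with one physiological stream via shared timestamps. The data, the 1 Hz sampling rate, and the one-second matching tolerance are assumptions for the example:

    import pandas as pd

    # Hypothetical stimulus log and a synchronized 1 Hz heart-rate stream.
    stimuli = pd.DataFrame({
        "t": pd.to_timedelta([2.0, 9.4, 17.0], unit="s"),
        "stimulus": ["image_A", "sound_B", "image_C"],
    })
    heart_rate = pd.DataFrame({
        "t": pd.to_timedelta(range(20), unit="s"),
        "bpm": [72, 74, 71, 80, 85, 83, 79, 76, 75, 90,
                95, 92, 88, 84, 81, 78, 77, 96, 99, 94],
    })

    # For each stimulus, pick the nearest heart-rate sample within one
    # second, so stimulus and response stay accurately mapped to each other.
    aligned = pd.merge_asof(
        stimuli, heart_rate, on="t",
        direction="nearest", tolerance=pd.Timedelta(seconds=1),
    )
    print(aligned)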

Fast data analysis: Our data pipeline allows us to analyze pre-processed data directly, enabling efficient post-processing and fast results.

High data security: During both data acquisition and analysis, we place great importance on protecting the personal rights of the subjects. To this end, we comply with all applicable ethical guidelines and data protection regulations.
In addition, we use SHORE®, a software for camera-based emotion analysis that transmits only anonymized metadata.

Infrastructure for Data Acquisition

Exposure booth: 360° emotion analysis in a closed, interference-free measurement environment

  • Controlled environment for data acquisition, e.g. defined light and noise environment
  • Multimodal data acquisition and easy data synchronization
  • Optimal and interference-free modality setup
  • Simple and inexpensive data acquisition
  • Individual adaptation of the experimental setup, e.g. selection of modalities
Exposure booth
© Fraunhofer IIS/Bianca Möller

Driving simulator: emotion analysis tailored to the automotive environment

  • Easy and fast data acquisition during driving tasks and traffic situations
  • Driving simulators fully equipped with cameras, lighting and systems for multimodal biosignal acquisition
  • Medical reference systems and network to medical experts
  • Individual adaptation of the test setup to the requirements in the automotive environment
  • Optimal and interference-free modality setup
  • Infrastructure for compliance with confidentiality requirements
Driving simulator
© Fraunhofer IIS/Bianca Möller

Multimodal Data Fusion

The large number of available sensors for psychophysiological measurements offers many possibilities for capturing emotional states. The challenge is to select, weight, and fuse the right signals. For this purpose, we combine know-how in deep learning and neural networks with filtering methods for information fusion.
This allows us to exploit the full potential of multimodality compared with single-modality evaluations. The goal is to optimize the prediction models through data fusion so that affective states can be classified as accurately as possible.
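The following Python sketch illustrates one common pattern for this, late fusion with a small neural network per modality. The modalities, feature sizes, and the four affective classes are assumptions for the example, not a description of our production models:

    import torch
    import torch.nn as nn

    class LateFusionNet(nn.Module):
        """Late fusion: one small encoder per modality, concatenated
        embeddings feed a shared classification head."""

        def __init__(self, feature_dims, n_classes):
            super().__init__()
            self.encoders = nn.ModuleDict({
                name: nn.Sequential(nn.Linear(dim, 32), nn.ReLU())
                for name, dim in feature_dims.items()
            })
            self.head = nn.Linear(32 * len(feature_dims), n_classes)

        def forward(self, inputs):
            # Encode each modality separately, then fuse by concatenation.
            feats = [enc(inputs[name]) for name, enc in self.encoders.items()]
            return self.head(torch.cat(feats, dim=-1))

    # Hypothetical per-modality feature vectors (ECG, facial video, EDA).
    model = LateFusionNet({"ecg": 64, "video": 128, "eda": 16}, n_classes=4)
    batch = {
        "ecg": torch.randn(8, 64),
        "video": torch.randn(8, 128),
        "eda": torch.randn(8, 16),
    }
    logits = model(batch)   # shape (8, 4): scores for four affective states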

Another advantage of multimodal data analysis is its robustness to the failure of individual input signals, which is essential in safety-critical applications. For example, heart rate can be determined by a smartwatch and simultaneously by a camera-based solution. The algorithm gives the most weight to the modality with the lowest measurement uncertainty. If one of the two sources fails, reliable results are still obtained, even if the quality of the prediction is no longer optimal.
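A minimal sketch of this redundancy principle, assuming each source reports its value together with a measurement variance: inverse-variance weighting favors the more precise source, and a failed source is simply skipped rather than breaking the estimate.

    def fuse_heart_rate(estimates):
        """Fuse redundant heart-rate readings by inverse-variance weighting.
        `estimates` maps source name -> (value in bpm, variance); a failed
        source reports None and is skipped."""
        valid = {k: v for k, v in estimates.items() if v is not None}
        if not valid:
            raise ValueError("all heart-rate sources failed")
        weights = {k: 1.0 / var for k, (_, var) in valid.items()}
        total = sum(weights.values())
        return sum(w * valid[k][0] for k, w in weights.items()) / total

    # The smartwatch is precise (low variance); the camera-based estimate
    # is noisier, so the fused value stays close to the watch reading.
    print(fuse_heart_rate({"watch": (72.0, 1.0), "camera": (78.0, 9.0)}))  # ~72.6
    # If the watch fails, the camera alone still yields a (coarser) result.
    print(fuse_heart_rate({"watch": None, "camera": (78.0, 9.0)}))         # 78.0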

Our Research Projects

SEMULIN – natural, multimodal interaction for automated driving

Development of a self-supporting natural human-machine interface for automated driving using multimodal input and output modes including facial expressions, gestures, gaze, and speech.


Multimodal Database for Driver Condition Recognition

  • Database on overexertion while driving
  • Multimodal data collection
  • Balanced subject group

ADA Lovelace Center for Analytics, Data and Applications

Explainable AI in medical technology and automotive applications


PainFaceReader – Long-term Monitoring System for Pain Detection

Development of an autonomous monitoring system that uses facial action units to automatically detect pain.