SEMULIN – Natural, Multimodal Interaction for Automated Driving

Machine Learning in Human Machine Interfaces

Goal

Development of a self-supporting natural human-machine interface for automated driving using multimodal input and output modes including facial expressions, gestures, gaze, and speech.

In conjunction with considerations relating to the immediate environment (vehicle interior, etc.), the result is a holistic development approach for a human-machine interface (HMI) tailored to the human senses and based on machine learning methods.

The system facilitates interaction while enhancing user experience and acceptance of autonomous driving across all areas of application. The methods for measuring user satisfaction developed in the course of the project will form the basis for other projects with a similar design.

Motivation and Challenge

The user interface has a key role to play in automated driving: in light of the growing complexity of systems and the demands placed on them, user interfaces must be able to support a range of functions, process information, and offer a high degree of operator friendliness.

At present, there are various constraints limiting natural interaction between driver, passengers, and vehicle, particularly when it comes to switching between the various modes (gestures, speech, lighting, speaker, etc.) or combining them.

To enable human-centered interaction of this sort, it is necessary to take these different modes – along with contextual information – into account, and combine them in a meaningful way. A particular challenge is correctly identifying the user’s precise intentions and generating actions on the part of the system accordingly.

A Human-Machine Interface with Intelligent Sensor Interpretation and Data Fusion

To develop a human-centered human-machine interface with a tailored system architecture that takes the overall context into account, we examine all available modes so that the resulting aggregated sensor data can be interpreted and combined intelligently.

To this end, we employ established technologies such as our SHORE® analysis software for video-based emotion recognition, together with additional integrated sensors, to pre-process and intelligently interpret the data. Rule-based and AI-based methods and their multimodal implementation allow us to form connections within the data.

Intelligent sensor interpretation and fusion delivers information on the state of the user, their intentions, and their potential reactions. Novel approaches such as interactive learning are employed in order to constantly adapt the system to the needs of each user.
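The following minimal sketch (Python) illustrates the general idea of such a fusion step: per-modality estimates, such as a video-based and a speech-based emotion score, are combined by weighted late fusion and then refined by a simple contextual rule. All names, weights, and thresholds here are hypothetical assumptions for illustration only and do not reflect the actual SEMULIN architecture.

```python
# Illustrative sketch only: weighted late fusion of hypothetical per-modality
# estimates (video-based emotion, speech-based emotion, gaze context) into a
# single user-state hypothesis. Labels, weights, and thresholds are assumed.
from dataclasses import dataclass


@dataclass
class ModalityEstimate:
    label: str         # e.g. "frustrated", "neutral"
    confidence: float  # 0.0 .. 1.0


def fuse_user_state(video: ModalityEstimate,
                    speech: ModalityEstimate,
                    gaze_on_display: bool) -> str:
    """Combine per-modality scores, then apply a simple contextual rule."""
    weights = {"video": 0.5, "speech": 0.5}  # assumed; would be tuned or learned
    scores = {}
    scores[video.label] = scores.get(video.label, 0.0) + weights["video"] * video.confidence
    scores[speech.label] = scores.get(speech.label, 0.0) + weights["speech"] * speech.confidence
    state, score = max(scores.items(), key=lambda kv: kv[1])

    # Rule layer: contextual information (here, gaze) can refine the fused result.
    if state == "frustrated" and gaze_on_display and score > 0.4:
        return "needs_assistance"  # e.g. trigger a proactive dialog
    return state


# Example with made-up sensor outputs:
print(fuse_user_state(ModalityEstimate("frustrated", 0.9),
                      ModalityEstimate("neutral", 0.6),
                      gaze_on_display=True))
```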

Partners

Elektrobit Automotive GmbH (project coordinator)

  • Concept, design, HMI, system architecture, demonstrator setup

Fraunhofer IIS | Smart Sensing and Electronics Division, Audio and Media Technologies Division

  • Video-based facial expression and emotion recognition
  • Speech platform, dialog systems, speaker recognition

audEERING GmbH

  • Affective voice computing, machine learning, speech-based emotion recognition

Eesy Innovation GmbH

  • Output modes, user-centered controllable lighting solution

Blickshift

  • Eye tracking, multimodal framework

Infineon Technologies AG

  • HPC architectures, intelligent sensors, gesture recognition, safety concepts

Ulm University | Institute of Media Informatics, Institute of Psychology and Education

  • Media Informatics, HMI, Multimodality, ELSI
  • Human Factors, psychological modeling, empirics

ELSI

To maximize acceptance of the system, the associated ethical, legal, and social implications (ELSI) will be examined, assessed, and incorporated into the research approach throughout the project, in accordance with the applicable guidelines.

For further information concerning the SEMULIN joint project please contact Jaspar Pahl.

jaspar.pahl@iis.fraunhofer.de