Publikationen

Joint Classification and Trajectory Regression of Online Handwriting using a Multi-Task Learning Approach

Felix Ott, David Rügamer, Lucas Heublein, Bernd Bischl, Christopher Mutschler

In: WACV 2022

Multivariate Time Series (MTS) classification is important in various applications such as signature verification, person identification, and motion recognition. In deep learning, these classification tasks are usually learned using the cross-entropy loss. A related yet different task is predicting trajectories observed as MTS. Important use cases include handwriting reconstruction, shape analysis, and human pose estimation. The goal is to align an arbitrary-dimensional time series with its ground truth as accurately as possible, reducing the prediction error with a distance loss and the variance with a similarity loss. Although learning both losses with Multi-Task Learning (MTL) helps to improve trajectory alignment, learning often remains difficult as both tasks are contradictory. We propose a novel neural network architecture for MTL that notably improves the MTS classification and trajectory regression performance in online handwriting (OnHW) recognition. We achieve this by jointly learning the cross-entropy loss in combination with distance and similarity losses. On an OnHW task of handwritten characters with multivariate inertial and visual data inputs, we achieve substantial improvements in trajectory prediction (lower error with less variance) while still improving the character classification accuracy in comparison to models trained on the individual tasks.

Searching for Soccer Scenes using Siamese Neural Networks

Luca Reeb

In: Towards Data Science 2022

We have access to a large soccer database from the German Bundesliga, containing a season's worth of tracking data, i.e. player trajectories, game statistics, and expert-annotated events like pass or shot at goal. While events allow you to find set pieces like corner kicks, the results are coarse-grained in that they do not consider how the players acted during the event. Also, some situations of potential interest, like counter-attacks, are not represented by an event. To enable fine-grained analysis of soccer matches, player movement (i.e. tracking data) has to be considered.

A Combined Ray Tracing Simulation Environment for Hybrid 5G and GNSS Positioning

Ivana Lukčina, Phuong Bich Duong, Katrin Dietmayer, Sheikh Usman Ali, Sebastian Kram, Jochen Seitz and Wolfgang Felber

In: ICL-GNSS 2021 WiP Proceedings

GNSS based radio frequency (RF) positioning has to cope with challenging propagation conditions, like non-line of sight (NLoS), multipath, and sparse signal availability. The introduction of the fifth-generation of mobile telecommunications technology (5G) with an improved Positioning Reference Signal (PRS) structure will be a key enabler for more reliable positioning solutions with increased availability and advanced signaling. Nevertheless, 5G-assisted positioning faces similar challenges. Therefore, to analyze the possibilities of 5G-assisted positioning, a suitable simulation environment is required. In this paper, a simulation environment based on a Ray Tracing (RT) channel model that emulates Global Navigation Satellite System (GNSS) signals is introduced, validated and extended to simulate 5G PRSs, and Sounding Reference Signals (SRSs). Additionally, the environment is applied for hybrid positioning by sensor data fusion with real-world recorded Global Positioning System (GPS) L1CA and Galileo E1BC GNSS signals under several severe conditions like strong building blockage and outdoor-indoor transition. It is shown that the simulation environment with various three-dimensional (3D)-modeled objects represents 5G signals sufficiently well when the line of sight (LoS) is visible. Additionally, the simulated 5G signals improve the GNSS positioning accuracies when combined in a hybrid positioning approach, especially under complex channel conditions, like in typical industrial environments.

Validation of Player and Ball Tracking with a Local Positioning System

In: Sensors 2021, 21(4)

The aim of this study was the validation of player and ball position measurements of Kinexon's local positioning system (LPS) in handball and football. Eight athletes conducted a sport-specific course (SSC) and small-sided football games (SSG), simultaneously tracked by the LPS and an infrared camera-based motion capture system as reference. Furthermore, football shots and handball throws were performed to evaluate ball tracking. The position root mean square error (RMSE) for player tracking was 9 cm for SSCs; the instantaneous peak speed showed a percentage deviation from the reference system of 0.7–1.7% for different exercises. The RMSE for SSGs was 8 cm. Covered distance was overestimated by 0.6% in SSCs and 1.0% in SSGs. The 2D RMSE of ball tracking was 15 cm in SSGs; 3D position errors of shot and throw impact locations were 17 cm and 21 cm. The methodology for validating a system's accuracy in sports tracking requires extensive attention, especially in settings covering both player and ball measurements. Most errors for player tracking were smaller than or in line with errors found for comparable systems in the literature. Ball tracking showed a larger error than player tracking; here, the influence of the positioning of the sensor must be further reviewed. In total, the accuracy of Kinexon's LPS has proven to represent the current state of the art for player and ball position detection in team sports.

Off-line Evaluation of Indoor Positioning Systems in Different Scenarios: The Experiences from IPIN 2020 Competition

Francesco Potortí, Joaquín Torres-Sospedra, Darwin Quezada-Gaibor, Antonio Ramón Jiménez, Fernando Seco, Antoni Pérez-Navarro, Miguel Ortiz, Ni Zhu, Valerie Renaudin, Ryosuke Ichikari, Ryo Shimomura, Nozomu Ohta, Satsuki Nagae, Takeshi Kurata, Dongyan Wei, Xinchun Ji, Wenchao Zhang, Sebastian Kram, Maximilian Stahlke, Christopher Mutschler, Antonino Crivello, Paolo Barsocchi, Michele Girolami, Filippo Palumbo, Ruizhi Chen, Yuan Wu, Wei Li, Yue Yu, Shihao Xu, Lixiong Huang, Tao Liu, Jian Kuang, Xiaoji Niu, Takuto Yoshida, Yoshiteru Nagata, Yuto Fukushima, Nobuya Fukatani, Nozomi Hayashida, Yusuke Asai, Kenta Urano, Wenfei Ge, Nien-Ting Lee, Shih-Hau Fang, You-Cheng Jie, Shawn-Rong Young, Ying-Ren Chien, Chih-Chieh Yu, Chengqi Ma, Bang Wu, Wei Zhang, Yankun Wang, Yonglei Fan, Stefan Poslad, David R. Selviah, Weixi Wang, Hong Yuan, Yoshitomo Yonamoto, Masahiro Yamaguchi, Tomoya Kaichi, Baoding Zhou, Xu Liu, Zhining Gu, Chengjing Yang, Zhiqian Wu, Doudou Xie, Can Huang, Lingxiang Zheng, Ao Peng, Ge Jin, Qu Wang, Haiyong Luo, Hao Xiong, Linfeng Bao, Pushuo Zhang, Fang Zhao, Chia-An Yu, Chun-Hao Hung, Leonid Antsfeld, Boris Chidlovskii, Haitao Jiang, Ming Xia, Dayu Yan, Yuhang Li, Yitong Dong, Ivo Silva, Cristiano Pendão, Filipe Meneses, Maria João Nicolau, António Costa, Adriano Moreira, Cedric De Cock, David Plets, Miroslav Opiela, Jakub Džama, Liqiang Zhang, Hu Li, Boxuan Chen, Yu Liu, Seanglidet Yean, Bo Zhi Lim, Wei Jie Teo, Bu Sung Lee and Hong Lye Oh

In: IEEE Sensors Journal, 2021

Every year, for ten years now, the IPIN competition has aimed at evaluating real-world indoor localisation systems by testing them in a realistic environment, with realistic movement, using the EvAAL framework. The competition provided a unique overview of the state of the art of systems, technologies, and methods for indoor positioning and navigation purposes. Through fair comparison of the performance achieved by each system, the competition was able to identify the most promising approaches and to pinpoint the most critical working conditions. In 2020, the competition included 5 diverse off-site Tracks, each resembling real use cases and challenges for indoor positioning. The results in terms of participation and accuracy of the proposed systems have been encouraging. The best performing competitors obtained a third quartile of error of 1 m for the Smartphone Track and 0.5 m for the Foot-mounted IMU Track. While not running on physical systems, but only as algorithms, these results represent impressive achievements.

Estimating TOA Reliability with Variational Autoencoders

Maximilian Stahlke, Sebastian Kram, Felix Ott, Tobias Feigl, and Christopher Mutschler

In: IEEE Sensors Journal, 2021

Radio frequency (RF)-based localization yields centimeter-accurate positions under mild propagation conditions. However, propagation conditions predominant in indoor environments (e.g. industrial production) are often challenging, as signal blockage, diffraction, and dense multipath lead to errors in the time-of-flight (TOF) estimation and hence to a degraded localization accuracy. A major topic in high-precision RF-based localization is the identification of such anomalous signals that negatively affect the localization performance, and the mitigation of the errors they introduce. As such signal and error characteristics depend on the environment, data-driven approaches have been shown to be promising. However, they trade this off against poor generalization and the extensive, time-consuming recording of training data associated with it. We propose to use generative deep learning models for out-of-distribution detection based on channel impulse responses (CIRs). We use a Variational Autoencoder (VAE) to predict an anomaly score for the channel of a TOF-based ultra-wideband (UWB) system. Our experiments show that a VAE trained only on line-of-sight (LOS) training data generalizes well to new environments and detects non-line-of-sight CIRs with an accuracy of 85%. We also show that integrating our anomaly score into a TOF-based extended Kalman filter (EKF) improves tracking performance by over 25%.
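To make the last step concrete, here is a minimal sketch (not the authors' implementation) of how an anomaly score in [0, 1] can inflate the measurement noise of a scalar Kalman update, so that suspected anomalous measurements pull the state estimate less. The function names and the `scale` constant are illustrative assumptions.

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update."""
    K = P / (P + R)       # Kalman gain
    x = x + K * (z - x)   # correct the state with the innovation
    P = (1.0 - K) * P     # reduce the state uncertainty
    return x, P

def gated_update(x, P, z, R, anomaly_score, scale=100.0):
    """Inflate measurement noise R by an anomaly score.

    anomaly_score in [0, 1]: 0 = clean LOS channel, 1 = certain anomaly.
    `scale` is an illustrative tuning constant, not a value from the paper.
    """
    R_eff = R * (1.0 + scale * anomaly_score)
    return kalman_update(x, P, z, R_eff)

# A trusted LOS measurement moves the estimate far more than a
# suspected NLOS measurement with the same innovation.
x_los, _ = gated_update(x=0.0, P=1.0, z=1.0, R=0.1, anomaly_score=0.0)
x_nlos, _ = gated_update(x=0.0, P=1.0, z=1.0, R=0.1, anomaly_score=1.0)
```

Soft down-weighting like this keeps anomalous measurements in the filter (unlike a hard NLOS reject), which degrades gracefully when the anomaly score is uncertain.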

Can You Trust Your Autonomous Car? Interpretable and Verifiably Safe Reinforcement Learning

Lukas Schmidt, Georgios Kontes, Axel Plinge and Christopher Mutschler

In: 2021 IEEE Intelligent Vehicles Symposium (IV21)

Safe and efficient behavior are the key guiding principles for autonomous vehicles. Manually designed rule-based systems need to act very conservatively to ensure a safe operation. This limits their applicability to real-world systems. On the other hand, more advanced behaviors, i.e., policies, learned through means of reinforcement learning (RL) suffer from non-interpretability as they are usually expressed by deep neural networks that are hard to explain. Even worse, there are no formal safety guarantees for their operation. In this paper we introduce a novel pipeline that builds on recent advances in imitation learning and that can generate safe and efficient behavior policies. We combine a reinforcement learning step that solves for safe behavior through the introduction of safety distances with a subsequent innovative safe extraction of decision tree policies. The resulting decision tree is not only easy to interpret, it is also safer than the neural network policy trained for safety. Additionally, we formally prove the safety of trained RL agents for linearized system dynamics, showing that the learned and extracted policy successfully avoids all catastrophic events.

Deep Siamese Metric Learning: A Highly Scalable Approach to Searching Unordered Sets of Trajectories

Christoffer Löffler, Luca Reeb, Daniel Dzibela, Robert Marzilger, Nicolas Witt, Björn M. Eskofier, Christopher Mutschler

In: ACM Transactions on Intelligent Systems and Technology 2021

This work proposes metric learning for fast similarity-based scene retrieval of unstructured ensembles of trajectory data from large databases. We present a novel representation learning approach using Siamese Metric Learning that approximates a distance-preserving low-dimensional representation and that learns to estimate reasonable solutions to the assignment problem. To this end, we employ a Temporal Convolutional Network architecture that we extend with a gating mechanism to enable learning from sparse data, leading to solutions to the assignment problem exhibiting varying degrees of sparsity. Our experimental results on professional soccer tracking data provide insights on learned features and embeddings, as well as on generalization, sensitivity, and network architectural considerations. Our low approximation errors for learned representations and the interactive performance with retrieval times several orders of magnitude smaller show that we outperform the previous state of the art.
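Once scenes are mapped into a distance-preserving embedding space, retrieval reduces to a nearest-neighbour search. A minimal sketch under the assumption that scene embeddings have already been produced by such a network; the 32-dimensional random embeddings here stand in for real learned ones.

```python
import numpy as np

def retrieve(query_emb, db_embs, k=3):
    """Return indices of the k database scenes closest to the query
    in the embedding space (squared Euclidean distance)."""
    d2 = np.sum((db_embs - query_emb) ** 2, axis=1)
    return np.argsort(d2)[:k]

rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 32))              # 1000 scenes, 32-dim embeddings
query = db[42] + 0.01 * rng.normal(size=32)   # a slightly perturbed known scene
top = retrieve(query, db, k=3)                # scene 42 should rank first
```

Because the search runs on fixed-size vectors rather than raw trajectory sets, it scales to large databases and can be accelerated further with standard approximate nearest-neighbour indices.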

 

The OnHW Dataset: Online Handwriting Recognition from IMU-Enhanced Ballpoint Pens with Machine Learning

Felix Ott, Mohamad Wehbi, Tim Hamann, Jens Barth, Björn Eskofier, Christopher Mutschler

In: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2020, Article No.: 92, https://doi.org/10.1145/3411842

This paper presents a handwriting recognition (HWR) system that deals with online character recognition in real-time. Our sensor-enhanced ballpoint pen delivers sensor data streams from triaxial acceleration, gyroscope, magnetometer, and force signals at 100 Hz. As most existing datasets do not meet the requirements of online handwriting recognition and have been collected using specific equipment under constrained conditions, we propose a novel online handwriting dataset acquired from 119 writers consisting of 31,275 uppercase and lowercase English alphabet character recordings (52 classes) as part of the UbiComp 2020 Time Series Classification Challenge. Our novel OnHW-chars dataset allows for the evaluation of uppercase, lowercase, and combined classification tasks, in both writer-dependent (WD) and writer-independent (WI) settings, and we show that properly tuned machine learning pipelines as well as deep learning classifiers (such as CNNs, LSTMs, and BiLSTMs) yield accuracies up to 90% for the WD task and 83% for the WI task for uppercase characters. Our baseline implementations together with the rich and publicly available OnHW dataset serve as a starting point for future research in this area.

 

RNN-Aided Human Velocity Estimation from a Single IMU

Tobias Feigl, Sebastian Kram, Philipp Woller, Ramiz H. Siddiqui, Michael Philippsen, Christopher Mutschler

In: Sensors 2020, 20(13), 3656; https://doi.org/10.3390/s20133656

Pedestrian Dead Reckoning (PDR) uses inertial measurement units (IMUs) and combines velocity and orientation estimates to determine a position. The estimation of the velocity is still challenging, as the integration of noisy acceleration and angular speed signals over a long period of time causes large drifts. Classic approaches to estimate the velocity optimize for specific applications, sensor positions, and types of movement and require extensive parameter tuning. Our novel hybrid filter combines a convolutional neural network (CNN) and a bidirectional long short-term memory network (BLSTM) (which extract spatial features from the sensor signals and track their temporal relationships) with a linear Kalman filter (LKF) that improves the velocity estimates. Our experiments show robustness against different movement states and changes in orientation, even in highly dynamic situations. We compare the new architecture with conventional, machine, and deep learning methods and show that, from a single non-calibrated IMU, our novel architecture outperforms the state of the art in terms of velocity (≤0.16 m/s) and traveled distance (≤3 m/km). It also generalizes well to different and varying movement speeds and provides accurate and precise velocity estimates.
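As a toy illustration of the fusion stage (not the paper's exact LKF), a random-walk Kalman filter can smooth noisy per-window speed predictions from a neural network; the process and measurement noise values below are assumed, not taken from the paper.

```python
import numpy as np

def smooth_velocities(v_meas, q=0.01, r=0.25):
    """Random-walk Kalman filter over noisy per-window speed estimates.

    q: process noise (how quickly the true speed may change),
    r: measurement noise of the network's speed output. Both illustrative.
    """
    x, P = v_meas[0], 1.0
    out = []
    for z in v_meas:
        P = P + q                 # predict: speed follows a random walk
        K = P / (P + r)           # Kalman gain
        x = x + K * (z - x)       # update with the network's estimate
        P = (1.0 - K) * P
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
true_v = np.full(200, 1.4)                      # steady walking speed in m/s
noisy = true_v + rng.normal(0, 0.5, size=200)   # noisy per-window NN outputs
sm = smooth_velocities(noisy)                   # visibly lower error than `noisy`
```

The filter trades a small lag against a large reduction in variance; tuning `q` controls how quickly it follows genuine speed changes.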

 

ViPR: Visual-Odometry-aided Pose Regression for 6DoF Camera Localization

Felix Ott, Tobias Feigl, Christoffer Löffler, Christopher Mutschler

In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)

Visual Odometry (VO) accumulates a positional drift in long-term robot navigation tasks. Although Convolutional Neural Networks (CNNs) improve VO in various aspects, VO still suffers from moving obstacles, discontinuous observation of features, and poor textures or visual information. While recent approaches estimate a 6DoF pose either directly from (a series of) images or by merging depth maps with optical flow (OF), research that combines absolute pose regression with OF is limited. We propose ViPR, a novel modular architecture for long-term 6DoF VO that leverages temporal information and synergies between absolute pose estimates (from PoseNet-like modules) and relative pose estimates (from FlowNet-based modules) by combining both through recurrent layers. Experiments on known datasets and on our own Industry dataset show that our modular design outperforms the state of the art in long-term navigation tasks.

 

Localization Limitations of ARCore, ARKit, and Hololens in Dynamic Large-scale Industry Environments

Tobias Feigl, Andreas Porada, Steve Steiner, Christoffer Löffler, Christopher Mutschler, Michael Philippsen

In: Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 1: GRAPP, ISBN 978-989-758-402-2, pages 307-318. DOI: 10.5220/0008989903070318

Augmented Reality (AR) systems are envisioned to soon be used as smart tools across many Industry 4.0 scenarios. The main promise is that such systems will make workers more productive when they can obtain additional situationally coordinated information both seamlessly and hands-free. This paper studies the applicability of today's popular AR systems (Apple ARKit, Google ARCore, and Microsoft Hololens) in such an industrial context (large area of 1,600 m², long walking distances of 60 m between cubicles, and dynamic environments with volatile natural features). With an elaborate measurement campaign that employs a sub-millimeter accurate optical localization system, we show that for such a context, i.e., when a reliable and accurate tracking of a user matters, the Simultaneous Localization and Mapping (SLAM) techniques of these AR systems are a showstopper. Out of the box, these AR systems are far from useful even for normal motion behavior. They accumulate an average error of about 17 m per 120 m, with a scaling error of up to 14.4 cm/m that is quasi-directly proportional to the path length. By adding natural features, the tracking reliability can be improved, but not enough.

 

NLOS Detection using UWB Channel Impulse Responses and Convolutional Neural Networks

Maximilian Stahlke, Sebastian Kram, Christopher Mutschler, Thomas Mahr

In: 2020 International Conference on Localization and GNSS (ICL-GNSS)

Indoor environments often pose challenges to RF-based positioning systems. Typically, objects within the environment influence the signal propagation due to absorption, reflection, and scattering effects. This results in errors in the estimation of the time of arrival (TOA) and hence leads to errors in the position estimation. Recently, different approaches based on classical, feature-based machine learning (ML) have successfully detected such obstructions based on channel impulse responses (CIRs) of ultra-wideband (UWB) positioning systems. This paper applies different convolutional neural network architectures (ResNet, Encoder, FCN) to detect non-line-of-sight (NLOS) channel conditions directly from the CIR raw data. A realistic measurement campaign is used to train and evaluate the algorithms. The proposed methods highly outperform the feature-based ML baselines while still using low network complexities. We also show that the models generalize well to unknown receivers and environments and that positioning filters benefit significantly from the identification of NLOS measurements.
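For context, the feature-based baselines mentioned above typically operate on hand-crafted CIR statistics rather than raw data. A toy sketch of two such classic features, total energy and rise time, evaluated on synthetic CIRs; the waveforms and thresholds are illustrative, not the paper's measurement data.

```python
import numpy as np

def cir_features(cir):
    """Two classic hand-crafted CIR features from the NLOS literature:
    total energy and rise time (samples from 10% to 90% of the peak)."""
    mag = np.abs(cir)
    energy = float(np.sum(mag ** 2))
    peak = float(mag.max())
    t10 = int(np.argmax(mag >= 0.1 * peak))   # first sample above 10% of peak
    t90 = int(np.argmax(mag >= 0.9 * peak))   # first sample above 90% of peak
    return energy, t90 - t10

# Toy CIRs: a sharp LOS peak vs. a slowly rising, smeared NLOS response.
los = np.zeros(64)
los[5] = 1.0
nlos = np.zeros(64)
nlos[5:15] = np.linspace(0.05, 0.5, 10)

_, rise_los = cir_features(los)     # sharp peak: rise time 0
_, rise_nlos = cir_features(nlos)   # smeared response: longer rise time
```

A simple threshold on such features already separates the two toy cases; the CNNs in the paper learn richer discriminators directly from the raw CIR instead.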

 

Real-Time Gait Reconstruction For Virtual Reality Using a Single Sensor

Tobias Feigl, Lisa Gruner, Christopher Mutschler, Daniel Roth

In: 2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)

Embodying users through avatars based on motion tracking and reconstruction is an ongoing challenge for VR application developers. High-quality VR systems use full-body tracking or inverse kinematics to reconstruct the motion of the lower extremities and control the avatar animation. Mobile systems are limited to the motion sensing of head-mounted displays (HMDs) and typically cannot offer this. We propose an approach to reconstruct gait motions from a single head-mounted accelerometer. We train our models to map head motions to corresponding ground truth gait phases. To reconstruct leg motion, the models predict gait phases to trigger equivalent synthetic animations. We designed four models: a threshold-based, a correlation-based, a Support Vector Machine (SVM)-based, and a bidirectional long short-term memory (BLSTM)-based model. Our experiments show that, while the BLSTM approach is the most accurate, only the correlation approach runs on a mobile VR system in real time with sufficient accuracy. Our user study with 21 test subjects examined the effects of our approach on simulator sickness and showed significantly fewer negative effects on disorientation.

 

A Sense of Quality for Augmented Reality Assisted Process Guidance

Anes Redzepagic, Christoffer Löffler, Tobias Feigl, Christopher Mutschler

In: 2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)

The ongoing automation of modern production processes requires novel human-computer interaction concepts that support employees in dealing with the unstoppable increase in time pressure, cognitive load, and the required fine-grained and process-specific knowledge. Augmented Reality (AR) systems support employees by guiding and teaching work processes. Such systems still lack a precise process quality analysis (monitoring), which is, however, crucial to close gaps in the quality assurance of industrial processes. We combine inertial sensors, mounted on work tools, with AR headsets to enrich modern assistance systems with a sense of process quality. For this purpose, we develop a Machine Learning (ML) classifier that predicts quality metrics from a 9-degrees-of-freedom inertial measurement unit, while we simultaneously guide and track the work processes with a HoloLens AR system. In our user study, 6 test subjects perform typical assembly tasks with our system. We evaluate the tracking accuracy of the system based on a precise optical reference system and evaluate the classification of each work step's quality based on the collected ground truth data. Our evaluation shows a tracking accuracy for fast dynamic movements of 4.92 mm, and our classifier predicts the actions carried out with a mean F1 score of 93.8%.

 

High-Speed Collision Avoidance using Deep Reinforcement Learning and Domain Randomization for Autonomous Vehicles

Georgios D. Kontes, Daniel D. Scherer, Tim Nisslbeck, Janina Fischer, Christopher Mutschler

In: 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC)

Recently, deep neural networks trained with Imitation-Learning techniques have managed to successfully control autonomous cars in a variety of urban and highway environments. One of the main limitations of policies trained with imitation learning that has become apparent, however, is that they show poor performance when having to deal with extreme situations at test time, like high-speed collision avoidance, since there is not enough data available from such rare cases during training. In our work, we take the stance that training complex active safety systems for vehicles should be performed in simulation and that the transfer of the learned driving policy to the real vehicle should be performed using simulation-to-reality transfer techniques. To communicate this idea, we set up a high-speed collision avoidance scenario in simulation and train the safety system with Reinforcement Learning. We utilize Domain Randomization to enable simulation-to-reality transfer. Here, the policy is not trained on a single version of the setup but on several variations of the problem, each with different parameters. Our experiments show that the resulting policy is able to generalize much better to different values of the vehicle speed and distance from the obstacle compared to policies trained on the non-randomized version of the setup.

 

IALE: Imitating Active Learner Ensembles

Christoffer Löffler, Karthik Ayyalasomayajula, Sascha Riechel, Christopher Mutschler

In: arXiv preprint (Cornell University)

Active learning (AL) prioritizes the labeling of the most informative data samples. However, the performance of AL heuristics depends on the structure of the underlying classifier model and the data. We propose an imitation learning scheme that imitates the selection of the best expert heuristic at each stage of the AL cycle in a batch-mode pool-based setting. We use DAGGER to train the policy on a dataset and later apply it to datasets from similar domains. With multiple AL heuristics as experts, the policy is able to reflect the choices of the best AL heuristics given the current state of the AL process. Our experiments on well-known datasets show that we outperform both state-of-the-art imitation learners and heuristics.

 

Recipes for Post-training Quantization of Deep Neural Networks

Ashutosh Mishra, Christoffer Löffler, Axel Plinge

In: Workshop on Energy Efficient Machine Learning and Cognitive Computing, December 5, 2020, virtual (San Jose, California, USA)

Given the presence of deep neural networks (DNNs) in all kinds of applications, the question of optimized deployment is becoming increasingly important. One important step is the automated size reduction of the model footprint. Of all the methods emerging, post-training quantization is one of the simplest to apply. Without needing long processing or access to the training set, a straightforward reduction of the memory footprint by an order of magnitude can be achieved. A difficult question is which quantization methodology to use and how to optimize different parts of the model with respect to different bit widths. We present an in-depth analysis of different types of networks for audio, computer vision, medical, and hand-held manufacturing tool use cases; each is compressed with fixed and adaptive quantization and fixed and variable bit widths for the individual tensors.
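To make the quantization step concrete, a minimal sketch of symmetric per-tensor post-training quantization with an adaptive (range-derived) scale; this is a generic illustration of the technique, not the specific recipes evaluated in the paper.

```python
import numpy as np

def quantize(t, bits=8):
    """Symmetric per-tensor post-training quantization.

    The scale is adapted to the tensor's range (max absolute value).
    Assumes bits <= 8 so the result fits into int8.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(t)) / qmax
    q = np.clip(np.round(t / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map the integer tensor back to floating point."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(2)
w = rng.normal(0, 0.1, size=(64, 64)).astype(np.float32)  # toy weight tensor
q, s = quantize(w, bits=8)
w_hat = dequantize(q, s)
err = float(np.max(np.abs(w - w_hat)))   # bounded by half a quantization step
```

The stored model keeps only `q` (one byte per weight) and the single `scale` per tensor, a 4x reduction over float32; adaptive-scale variants differ mainly in how `scale` (or a per-channel vector of scales) is chosen.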

 

Automated Quality Assurance for Hand-held Tools via Embedded Classification and AutoML

Christoffer Löffler, Christian Nickel, Christopher Sobel, Daniel Dzibela, Jonathan Braat, Benjamin Gruhler, Philipp Woller, Nicolas Witt and Christopher Mutschler

In: ECML 2020

Despite the ongoing automation of modern production processes, manual labor continues to be necessary due to its flexibility and ease of deployment. Automated processes assure quality and traceability, yet manual labor introduces gaps into the quality assurance process. This is not only undesirable but even intolerable in many cases. We introduce a process monitoring system that uses inertial, magnetic field, and audio sensors that we attach as add-ons to hand-held tools. The sensor data is analyzed via embedded classification algorithms, and our system directly provides feedback to workers during the execution of work processes. We outline the special requirements caused by vastly different tools and show how to automatically train and deploy new ML models.

A Deep Learning Approach to Position Estimation from Channel Impulse Responses

Arne Niitsoo, Thorsten Edelhäußer, Ernst Eberlein, Niels Hadaschik, Christopher Mutschler

In: Sensors 2019, 19(5), 1064; https://doi.org/10.3390/s19051064

Radio-based locating systems allow for a robust and continuous tracking in industrial environments and are a key enabler for the digitalization of processes in many areas such as production, manufacturing, and warehouse management. Time difference of arrival (TDoA) systems estimate the time-of-flight (ToF) of radio burst signals with a set of synchronized antennas from which they trilaterate accurate position estimates of mobile tags. However, in industrial environments where multipath propagation is predominant it is difficult to extract the correct ToF of the signal. This article shows how deep learning (DL) can be used to estimate the position of mobile objects directly from the raw channel impulse responses (CIR) extracted at the receivers. Our experiments show that our DL-based position estimation not only works well under harsh multipath propagation but also outperforms state-of-the-art approaches in line-of-sight situations.

 

Sick Moves! Motion Parameters as Indicators of Simulator Sickness

Tobias Feigl, Daniel Roth, Stefan Gradl, Markus Wirth, Marc Erich Latoschik, Björn M. Eskofier, Michael Philippsen, Christopher Mutschler

In: IEEE Transactions on Visualization and Computer Graphics (Volume 25, Issue 11, Nov. 2019)

We explore motion parameters, more specifically gait parameters, as an objective indicator to assess simulator sickness in Virtual Reality (VR). We discuss the potential relationships between simulator sickness, immersion, and presence. We used two different camera pose (position and orientation) estimation methods for the evaluation of motion tasks in a large-scale VR environment: a simple model and an optimized model that allows for a more accurate and natural mapping of human senses. Participants performed multiple motion tasks (walking, balancing, running) in three conditions: a physical reality baseline condition, a VR condition with the simple model, and a VR condition with the optimized model. We compared these conditions with regard to the resulting sickness and gait, as well as the perceived presence in the VR conditions. The subjective measures confirmed that the optimized pose estimation model reduces simulator sickness and increases the perceived presence. The results further show that both models affect the gait parameters and simulator sickness, which is why we further investigated a classification approach that deals with non-linear correlation dependencies between gait parameters and simulator sickness. We argue that our approach could be used to assess and predict simulator sickness based on human gait parameters and we provide implications for future research.

 

A Bidirectional LSTM for Estimating Dynamic Human Velocities from a Single IMU

Tobias Feigl, Sebastian Kram, Philipp Woller, Ramiz H. Siddiqui, Michael Philippsen, Christopher Mutschler

In: 2019 International Conference on Indoor Positioning and Indoor Navigation (IPIN)

The main challenge in estimating human velocity from noisy Inertial Measurement Units (IMUs) is the error that accumulates from integrating noisy accelerometer signals over a long time. Known approaches that work on step length estimation are optimized for a specific application, sensor position, and movement type, require an exhaustive (manual) parameter tuning, and can thus not be applied to other movement types or to a broader range of applications. Moreover, varying dynamics (as they are present, for instance, in sports applications) cause abrupt and unpredictable changes in step frequency or step length and hence result in erroneous velocity estimates. We use machine learning (ML) and deep learning (DL) to estimate a human's velocity. Our approach is robust to varying motion states and orientation changes in dynamic situations. On data from a single un-calibrated IMU, our novel recurrent model not only outperforms the state of the art on instantaneous velocity (≤0.10 m/s) and on traveled distance (≤29 m/km), but also generalizes to different and varying rates of motion and provides accurate and precise velocity estimates.

ViPR: Visual-Odometry-aided Pose Regression for 6DoF Camera Localization

Felix Ott, Tobias Feigl, Christoffer Löffler, Christopher Mutschler

In: Computer Vision Foundation (CVF) (Eds.): Joint Workshop on Long-Term Visual Localization, Visual Odometry and Geometric and Learning-based SLAM, Seattle, Washington, US, pp. 42-43.

Visual Odometry (VO) accumulates a positional drift in long-term robot navigation tasks. Although Convolutional Neural Networks (CNNs) improve VO in various aspects, VO still suffers from moving obstacles, discontinuous observation of features, and poor textures or visual information. While recent approaches estimate a 6DoF pose either directly from (a series of) images or by merging depth maps with optical flow (OF), research that combines absolute pose regression with OF is limited. We propose ViPR, a novel modular architecture for long-term 6DoF VO that leverages temporal information and synergies between absolute pose estimates (from PoseNet-like modules) and relative pose estimates (from FlowNet-based modules) by combining both through recurrent layers. Experiments on known datasets and on our own Industry dataset show that our modular design outperforms state of the art in long-term navigation tasks.

 

 

Deep Reinforcement Learning for Motion Planning of Mobile Robots

Leonid Butyrev, Thorsten Edelhäußer, Christopher Mutschler

In: arXiv preprint (Cornell University)

This paper presents a novel motion and trajectory planning algorithm for nonholonomic mobile robots that uses recent advances in deep reinforcement learning. Starting from a random initial state, i.e., position, velocity, and orientation, the robot reaches an arbitrary target state while taking both kinematic and dynamic constraints into account. Our deep reinforcement learning agent not only processes a continuous state space but also executes continuous actions, i.e., the acceleration of wheels and the adaptation of the steering angle. We evaluate our motion and trajectory planning on a mobile robot with a differential drive in a simulation environment.

 

UWB Channel Impulse Responses for Positioning in Complex Environments: A Detailed Feature Analysis

Sebastian Kram, Maximilian Stahlke, Tobias Feigl, Jochen Seitz, Jörn Thielecke

In: Sensors (Basel), 19(24): 5547, Dec 2019. doi: 10.3390/s19245547

Radio signal-based positioning in environments with complex propagation paths is a challenging task for classical positioning methods. For example, in a typical industrial environment, objects such as machines and workpieces cause reflections, diffractions, and absorptions, which are not taken into account by classical lateration methods and may lead to erroneous positions. Only a few data-driven methods developed in recent years can deal with these irregularities in the propagation paths or use them as additional information for positioning. These methods exploit the channel impulse responses (CIR) that are detected by ultra-wideband radio systems for positioning. These CIRs embed the signal properties of the underlying propagation paths that represent the environment. This article describes a feature-based localization approach that exploits machine learning to derive characteristic information from the CIR signal for positioning. The approach works entirely without highly time-synchronized receivers or arrival times. Various features were investigated based on signal propagation models for complex environments. These features were then assessed qualitatively based on their spatial relationship to objects and their contribution to a more accurate position estimate. Three datasets collected in environments of varying degrees of complexity were analyzed. The evaluation of the experiments showed a clear relationship between the features and the environment, indicating that features in complex propagation environments improve positional accuracy. A quantitative assessment of the features was made based on a hierarchical classification of stratified regions within the environment. Classification accuracies of over 90% could be achieved for region sizes of about 0.1 m². An application-driven evaluation was made to distinguish between different screwing processes on a car door based on CIR measurements. While nearly error-free classification could be achieved in a static environment, even with a single infrastructure tag, accuracy decreases rapidly when the environment changes. To adapt to changes in the environment, the models were retrained with a small amount of CIR data, which increased performance considerably. The proposed approach results in highly accurate classification, even with a reduced infrastructure of one or two tags, and is easily adaptable to new environments. In addition, the approach requires neither calibration nor synchronization of the positioning system, nor the installation of a reference system.

 

Indoor Positioning Using OFDM-Based Visible Light Communication System

Birendra Ghimire, Jochen Seitz, Christopher Mutschler

In: 2018 International Conference on Indoor Positioning and Indoor Navigation (IPIN)

Precise indoor positioning is essential to support emerging applications of location-aware mobile computing. Current positioning techniques that use signals transmitted in the Gigahertz region of the radio-frequency spectrum do not provide highly accurate position estimates due to multipath propagation of signals. This paper proposes a novel positioning system that uses the entities of a visible light communication (VLC) system as anchors and tags. Our technique scales with the number of tags, and the VLC network provides both positioning and communication capabilities. The anchor points transmit the OFDM-based VLC signals synchronously, and we estimate the time differences of arrival between anchor points and tags using positioning reference signals embedded into the air interface of the VLC system. Simulation results show that positioning accuracy of 10 cm or better is possible for over 95% of users if the sampling clock offset is better than 10 ppm, clock jitter is below 1 ps, and a bit resolution of at least 16 bits is available.

 

Evaluation Criteria for Inside-Out Indoor Positioning Systems Based on Machine Learning

Christoffer Löffler, Sascha Riechel, Janina Fischer, Christopher Mutschler

In: 2018 International Conference on Indoor Positioning and Indoor Navigation (IPIN)

Real-time tracking allows goods to be traced and enables the optimization of logistics processes in many application areas. Camera-based inside-out tracking that uses an infrastructure of fixed and known markers is costly, as the markers need to be installed and maintained in the environment. Systems that use natural markers instead suffer from changes in the physical environment. Recently, a number of approaches based on machine learning (ML) aim to address such issues. This paper proposes evaluation criteria that consider algorithmic properties of ML-based positioning schemes and introduces a dataset from an indoor warehouse scenario to evaluate them. Our dataset consists of images labeled with millimeter-precise positions, which allows for better development and performance evaluation of learning algorithms. This enables, for the first time, an evaluation of machine learning algorithms for monocular optical positioning in a realistic indoor positioning application. We also show the feasibility of ML-based positioning schemes for an industrial deployment.

 

Convolutional Neural Networks for Position Estimation in TDoA-Based Locating Systems

Arne Niitsoo, Thorsten Edelhäußer, Christopher Mutschler

In: 2018 International Conference on Indoor Positioning and Indoor Navigation (IPIN)

Object localization and tracking is essential for many applications including logistics and industry. Many local Time-of-Flight (ToF)-based locating systems use synchronized antennas to receive radio signals emitted by mobile tags. They detect the Time-of-Arrival (ToA) of the signal at each antenna and trilaterate the position from the Time Difference-of-Arrival (TDoA) between antennas. However, in multipath scenarios it is difficult to extract the correct ToA, which leads to erroneous position estimates. This paper proposes a signal processing method that uses deep learning to estimate the absolute tag position directly from the raw channel impulse response (CIR) data. We use the CIR together with ground truth positional data to train a convolutional neural network (CNN) that not only estimates non-linearities in the signal propagation space but also analyzes the signal for multipath effects. Our evaluation shows that our position estimation works in multipath environments and also outperforms classical signal processing in line-of-sight situations.
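The CNN itself is not reproduced here; for contrast, the classical TDoA pipeline the paper compares against can be sketched as multilateration over range differences (a toy Python illustration with assumed anchor positions and a brute-force solver, not the authors' code):

```python
import math

# Assumed anchor layout for illustration: four antennas on a 10 m x 10 m area.
ANCHORS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]

def tdoas(p, anchors=ANCHORS):
    """Range differences w.r.t. the first anchor, i.e. what a TDoA system measures (in meters)."""
    d = [math.dist(p, a) for a in anchors]
    return [di - d[0] for di in d[1:]]

def locate(meas, step=0.05):
    """Classical (non-learned) baseline: search for the position whose
    predicted range differences best match the measured ones."""
    best, best_err = None, float("inf")
    x = 0.0
    while x <= 10.0:
        y = 0.0
        while y <= 10.0:
            pred = tdoas((x, y))
            err = sum((m - p) ** 2 for m, p in zip(meas, pred))
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best

true_pos = (3.0, 7.0)
est = locate(tdoas(true_pos))
print(est)
```

In multipath scenarios the measured ToAs, and hence the range differences fed into such a solver, are biased, which is precisely where the paper's CNN on raw CIR data has the advantage.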

 

 

Super-Resolution in RSS-Based Direction-of-Arrival Estimation

Thorsten Nowak, Markus Hartmann, Jörn Thielecke, Niels Hadaschik, Christopher Mutschler

In: 2018 International Conference on Indoor Positioning and Indoor Navigation (IPIN)

For the evolving Internet of Things, ubiquitous positioning is a core feature. Hence, energy- and location-awareness are essential properties of wireless sensor networks (WSNs). In terms of low power consumption, received signal strength (RSS)-based localization techniques outperform timing-based localization approaches. Therefore, RSS-based direction finding is a prospective approach to location-aware, low-power sensor nodes. However, RSS-based direction-of-arrival (DOA) estimation is prone to multipath propagation. In this paper, a subspace-based approach to frequency-domain multipath resolution is presented. Resolving multipath components allows for an RSS-based DOA estimation that considers the power of the line-of-sight (LOS) component only. The impact of the multipath channel is considerably reduced with our approach. In contrast to common broadband DOA estimation techniques, the presented approach needs neither phase-coherent receive channels nor a synchronized sensor network. Hence, the proposed super-resolution technique is applicable to low-power sensor networks and brings accurate positioning to small-sized and energy-efficient sensor nodes.

 

Supervised Learning for Yaw Orientation Estimation

Tobias Feigl, Christopher Mutschler, Michael Philippsen

In: 2018 International Conference on Indoor Positioning and Indoor Navigation (IPIN)

With free movement and multi-user capabilities, there is demand to open up Virtual Reality (VR) for large spaces. However, the cost of accurate camera-based tracking grows with the size of the space and the number of users. No-pose (NP) tracking is cheaper, but so far it cannot accurately and stably estimate the yaw orientation of the user's head in the long run. Our novel yaw orientation estimation combines a single inertial sensor located at the user's head with inaccurate positional tracking. We exploit that humans tend to walk in their viewing direction and that they also tolerate some orientation drift. We classify head and body motion and estimate heading drift to enable low-cost, long-term stable head orientation in NP tracking on a 100 m × 100 m area. Our evaluation shows that we estimate heading reasonably well.

 

Recurrent Neural Networks on Drifting Time-of-Flight Measurements

Tobias Feigl, Thorsten Nowak, Michael Philippsen, Thorsten Edelhäußer, Christopher Mutschler

In: 2018 International Conference on Indoor Positioning and Indoor Navigation (IPIN)

Kalman filters (KFs) are popular methods to estimate position information from a set of time-of-flight (ToF) values in radio frequency (RF)-based locating systems. Such filters are proven to be optimal under zero-mean Gaussian error distributions. In the presence of multipath propagation, ToF measurement errors drift due to small-scale motion. This results in changing phases of the multipath components (MPCs), which cause a drift on the ToF measurements. Thus, on a short-term scale the ToF measurements have a non-constant bias that changes while moving. KFs cannot distinguish between the drifting measurement errors and the true motion of the tracked object. Hence, very rigid motion models have to be used for the KF, which commonly causes the filters to diverge. Therefore, the KF cannot resolve the short-term errors of consecutive measurements and the long-term motion of the tracked object. This paper presents a data-driven approach that uses training sequences to derive a near-optimal position estimator. A Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) learns to interpret drifting errors in ToF measurements of a tracked dynamic object directly from raw ToF data. Our evaluation shows that our approach outperforms state-of-the-art KFs on both synthetically generated and real-world dynamic motion trajectories that include drifting ToF measurement errors.
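The effect described above, a KF that cannot tell a drifting measurement bias from true motion, can be reproduced with a minimal 1-D filter (an illustrative sketch under assumed noise parameters, not the paper's setup):

```python
def kalman_1d(zs, q=1e-4, r=0.25):
    """Minimal 1-D random-walk Kalman filter: state = range, measurement = ToF-derived range."""
    x, p, out = zs[0], 1.0, []
    for z in zs:
        p += q                 # predict: random-walk process noise
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update with the measurement
        p *= (1 - k)
        out.append(x)
    return out

# A static object at 5 m, but the multipath-induced measurement bias drifts slowly.
n = 2000
zs = [5.0 + 0.002 * t for t in range(n)]   # 2 mm of bias drift per step (assumed)
est = kalman_1d(zs)
# The filter follows the drift as if it were motion; the estimate ends far from 5 m.
print(round(est[-1], 2))
```

Under zero-mean noise the same filter would be near-optimal; it is the non-zero, slowly changing bias that a generic KF cannot separate from motion, motivating the learned estimator.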

 

Human Compensation Strategies for Orientation Drifts

Tobias Feigl, Christopher Mutschler, Michael Philippsen

In: 25th IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR 2018), Reutlingen, Germany, 2018

No-Pose (NP) tracking systems rely on a single sensor located at the user's head to determine the position of the head. They estimate the head orientation with inertial sensors and analyze the body motion to compensate for their drift. However, with orientation drift, VR users implicitly lean their heads and bodies sideways. Hence, to determine the sensor drift and to explicitly adjust the orientation of the VR display, there is a need to understand and consider both the user's head and body orientations. This paper studies the effects of head orientation drift around the yaw axis on the user's absolute head and body orientations when walking naturally in VR. We study how much drift accumulates over time, how a user experiences and tolerates it, and how a user applies strategies to compensate for larger drifts.

 

Head-to-Body-Pose Classification in No-Pose VR Tracking Systems

Tobias Feigl, Christopher Mutschler, Michael Philippsen

In: 25th IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR 2018), Reutlingen, Germany, 2018

Pose tracking does not yet reliably work in large-scale interactive multi-user VR. Our novel head orientation estimation combines a single inertial sensor located at the user's head with inaccurate positional tracking. We exploit that users tend to walk in their viewing direction and classify head and body motion to estimate heading drift. This enables low-cost long-time stable head orientation. We evaluate our method and show that we sustain immersion.

 

Beyond Replication: Augmenting Social Behaviors in Multi-User Social Virtual Realities

Daniel Roth, Constantin Kleinbeck, Tobias Feigl, Christopher Mutschler, Marc Erich Latoschik

In: 25th IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR 2018), Reutlingen, Germany, 2018

This paper presents a novel approach for the augmentation of social behaviors in virtual reality (VR). We designed three visual transformations for behavioral phenomena crucial to everyday social interactions: eye contact, joint attention, and grouping. To evaluate the approach, we let users interact socially in a virtual museum using a large-scale multi-user tracking environment. Using a between-subject design (N = 125) we formed groups of five participants. Participants were represented as simplified avatars and experienced the virtual museum simultaneously, either with or without the augmentations. Our results indicate that our approach can significantly increase social presence in multi-user environments and that the augmented experience appears more thought-provoking. Furthermore, the augmentations seem also to affect the actual behavior of participants with regard to more eye contact and more focus on avatars/objects in the scene. We interpret these findings as first indicators for the potential of social augmentations to impact social perception and behavior in VR.

 

A Location-Based VR Museum

Jean-Luc Lugrin, Florian Kern, Ruben Schmidt, Constantin Kleinbeck, Daniel Roth, Christian Daxer, Tobias Feigl, Christopher Mutschler, Marc Erich Latoschik

In: 2018 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games)

This poster presents a novel type of Virtual Reality (VR) application for education and culture: a location-based VR museum, i.e., a large room-scale, multi-user, multi-zone virtual museum. The VR museum was designed to support over 100 simultaneous users walking in a large tracking area (600 m²) while sharing a virtual space ten times larger (7,000 m²) containing indoor and outdoor dinosaur exhibitions. This poster gives an overview of the system and its main features and discusses its potential benefits and future evaluation.

 

Optical Camera Communication for Active Marker Identification in Camera-based Positioning Systems

Lorenz Gorse, Christoffer Löffler, Christopher Mutschler, Michael Philippsen

In: 2018 15th Workshop on Positioning, Navigation and Communications (WPNC)

Outside-in camera-based localization systems determine the position of mobile objects (with markers attached to them) by observing a tracking area with multiple camera anchors. Up to now, the identification of the objects in the camera images only works for close objects (with passive 3-dimensional marker constellations) or when tracking gaps or a slow identification are acceptable (with an active LED that blinks a unique identification code). We present a novel Optical Camera Communication method whose LEDs are never switched off completely, which allows continuous tracking. We encode bits with variations of light intensity and show how an outside-in optical localization system can reliably identify the markers within a few frames. The identification works with a bit error ratio of 7.5 × 10⁻⁵. It works for moving markers (with speeds of at least up to 4 m/s), for distant markers (with distances of at least up to 44 m), for multiple markers, and under moderate ambient infrared light.

Social Augmentations in Multi-User Virtual Reality: A Virtual Museum Experience

Daniel Roth, Constantin Kleinbeck, Tobias Feigl, Christopher Mutschler, Marc Erich Latoschik

In: IEEE (Ed.): Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), Nantes, France, 2017, pp. 42-43. ISBN 978-1-5386-1454-9

This work in progress report demonstrates a novel approach for behavioral augmentations in Virtual Reality (VR). Using a large scale tracking system, groups of five users explored a virtual museum. We investigated how augmenting social interactions impacts this experience, by designing behavioral transformations for behavioral phenomena in social interactions. Preliminary data indicate a reduction of perceived isolation, and a more thought-provoking experience with active behavioral augmentations.

 

Acoustical manipulation for redirected walking

Tobias Feigl, Eliise Kõre, Christopher Mutschler, Michael Philippsen

In: ACM (Ed.): Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology (VRST '17), Gothenburg, Sweden. New York: ACM, 2017, pp. 45:1-45:2. ISBN 978-1-4503-5548-3

Redirected Walking (RDW) manipulates a scene that is displayed to VR users so that they unknowingly compensate for scene motion and can thus explore a large virtual world on a limited space. So far, mostly visual manipulation techniques have been studied. This paper shows that users can also be manipulated by means of acoustical signals. In an experiment with a dynamically moving audio source we see deviations of up to 30% from a 20 m long straight-line walk for male participants and of up to 25% for females. Static audio has about two thirds of this impact.

Virtual and Augmented Reality in Sports: An Overview and Acceptance Study

Stefan Gradl, Bjoern M. Eskofier, Dominic Eskofier, Christopher Mutschler, Stephan Otto

In: ACM (Ed.): Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp '16): Adjunct, Heidelberg, Germany, 2016, pp. 885-888. ISBN 978-1-4503-4462-3

The interest in virtual and augmented reality has exploded during the last two years. We propose the use of these systems in the field of sports by combining this technology with a local-area radio-based localization technology. This allows for novel application scenarios using virtual environments in team-based sports, which are outlined in this work. We conducted an online survey among 227 athletes about the acceptance of virtual reality headsets for training in different kinds of sports disciplines.

 

Inter-satellite ranging in the Low Earth Orbit

Mohammad Alawieh, Niels Hadaschik, Norbert Franke, Christopher Mutschler

In: 2016 10th International Symposium on Communication Systems, Networks and Digital Signal Processing (CSNDSP)

Many satellite systems require knowledge about inter-satellite distances. Inter-satellite links provide direct connectivity between satellites and may be used for ranging, removing the need for dedicated hardware. This paper addresses inter-satellite ranging in the Low Earth Orbit (LEO) using S-Band signals. We take the requirements from recent missions and future applications into account and analyze the factors that limit ranging. We show how these factors affect the quality of distance estimation in terms of the Cramér-Rao Lower Bound for ranging. Two-way ranging (TWR) provides the best accuracy and sustains the low-cost objective of small satellites. We further propose an enhanced TWR message exchange that enables on-the-fly delay calibration and sub-sample corrections. We implemented the transceiver modules on a Software-Defined Radio (SDR) platform and evaluated them with real-world data. The results show that the proposed ranging algorithm achieves an accuracy of a few centimeters.
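The basic two-way-ranging computation underlying the proposed message exchange (without the paper's delay calibration and sub-sample corrections) can be sketched in a few lines; the timing values below are made up for illustration:

```python
C = 299_792_458.0  # speed of light (m/s)

def twr_distance(t_round, t_reply):
    """Single-sided two-way ranging: the signal covers the distance twice,
    so time-of-flight = (round-trip time - responder turnaround time) / 2."""
    tof = (t_round - t_reply) / 2.0
    return C * tof

# Assumed scenario: 100 m separation; the responder replies after a 1 ms turnaround.
tof_true = 100.0 / C
d = twr_distance(2 * tof_true + 1e-3, 1e-3)
print(round(d, 3))
```

In practice, uncalibrated transceiver delays and clock drift during the turnaround dominate the error budget, which is what the enhanced exchange in the paper corrects for.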

 

Low-complexity PDoA-based localization

Benjamin Sackenreuter, Niels Hadaschik, Marc Faßbinder, Christopher Mutschler

In: 2016 International Conference on Indoor Positioning and Indoor Navigation (IPIN)

Localization of wireless nodes within the IoT has received much attention lately. However, strong constraints on power consumption, scalability, and complexity of the nodes pose a big challenge for localization techniques. This paper presents a concept for energy-efficient, low-complexity localization based on Phase Difference of Arrival (PDoA). Besides a novel method for reference transmitter selection, we propose a waveform well-suited for PDoA measurements and evaluate its ranging performance. We evaluate multiple signal classification (MUSIC), linear fitting, and mean phase difference, and compare their estimation variance to the Cramér-Rao Lower Bound (CRLB). Our system concept allows for the mitigation of near-far effects for reference and tag signals at the receiver nodes, and for an efficient implementation of a wideband frequency hopping scheme.
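As a toy illustration of the PDoA principle (not the paper's waveform or estimators), range follows from the phase slope over frequency: with phi(f) = 2*pi*f*d/c, two carriers at an assumed 2 MHz spacing give d = c * delta_phi / (2*pi*delta_f):

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def pdoa_distance(phi1, phi2, f1, f2):
    """Estimate range from the phase difference of two carriers.
    Unambiguous only up to c / delta_f (~150 m for 2 MHz spacing)."""
    dphi = (phi2 - phi1) % (2 * math.pi)
    return C * dphi / (2 * math.pi * (f2 - f1))

# Assumed carriers and true distance, for illustration only.
f1, f2 = 2.400e9, 2.402e9
d_true = 42.0
phase = lambda f: (2 * math.pi * f * d_true / C) % (2 * math.pi)

d_est = pdoa_distance(phase(f1), phase(f2), f1, f2)
print(round(d_est, 3))
```

A frequency hopping scheme, as proposed in the paper, yields many such phase samples across a wide band, reducing both the variance and the ambiguity of this estimate.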

Approximative Event Processing on Sensor Data Streams (Best Poster and Demonstration Award)

Christoffer Löffler, Christopher Mutschler, Michael Philippsen

In: ACM (Ed.): Proceedings of the 9th ACM International Conference on Distributed Event-Based Systems (DEBS 2015), Oslo, Norway, 2015, pp. 360-363. ISBN 978-1-4503-3286-6

Event-Based Systems (EBS) can efficiently analyze large streams of sensor data in near-realtime. But they struggle with noise or incompleteness that is seen in the unprecedented amount of data generated by the Internet of Things.

We present a generic approach that deals with uncertain data in the middleware layer of distributed event-based systems and is hence transparent for developers. Our approach calculates alternative paths to improve the overall result of the data analysis. It dynamically generates, updates, and evaluates Bayesian Networks based on probability measures and rules defined by developers. An evaluation on position data shows that the improved detection rate justifies the computational overhead.

Adaptive Speculative Processing of Out-of-Order Event Streams

Christopher Mutschler, Michael Philippsen

In: ACM Transactions on Internet Technology (TOIT) 14(1), 2014, pp. 4:1-4:24. ISSN 1557-6051

Distributed event-based systems are used to detect meaningful events with low latency in high data-rate event streams that occur in surveillance, sports, finances, etc. However, both known approaches to dealing with the predominant out-of-order event arrival at the distributed detectors have their shortcomings: buffering approaches introduce latencies for event ordering, and stream revision approaches may result in system overloads due to unbounded retraction cascades.

This article presents an adaptive speculative processing technique for out-of-order event streams that enhances typical buffering approaches. In contrast to other stream revision approaches developed so far, our novel technique encapsulates the event detector, uses the buffering technique to delay events but also speculatively processes a portion of it, and adapts the degree of speculation at runtime to fit the available system resources so that detection latency becomes minimal.

Our technique outperforms known approaches on both synthetic data and real sensor data from a realtime locating system (RTLS) with several thousand out-of-order sensor events per second. Speculative buffering exploits system resources and reduces latency by 40% on average.
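The buffering baseline that the article's speculative technique builds on can be sketched as a K-slack reorder buffer (illustrative Python with an assumed fixed slack K, not the adaptive mechanism described in the article):

```python
import heapq

class KSlackBuffer:
    """Hold each event for up to K time units past the newest timestamp seen,
    then release events in timestamp order."""
    def __init__(self, k):
        self.k, self.heap, self.clock = k, [], float("-inf")

    def insert(self, ts, payload):
        self.clock = max(self.clock, ts)
        heapq.heappush(self.heap, (ts, payload))
        out = []
        # Release every buffered event that can no longer be overtaken.
        while self.heap and self.heap[0][0] <= self.clock - self.k:
            out.append(heapq.heappop(self.heap))
        return out

buf = KSlackBuffer(k=2)
released = []
for ev in [(1, "a"), (3, "b"), (2, "c"), (6, "d"), (5, "e"), (9, "f")]:
    released += buf.insert(*ev)
print(released)
```

The released events come out in timestamp order, but each one waited up to K time units in the buffer; the article's contribution is to speculatively process buffered events early and adapt the degree of speculation at runtime so this latency shrinks.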

 

Latency Minimization of Order-Preserving Distributed Event-Based Systems

Christopher Mutschler

In: Dissertation. Dr. Hut Verlag, 229 pages. ISBN 978-3-8439-1472-7

Nowadays sensors are increasingly deployed in many kinds of applications and deliver streams of data that they continuously collect. This data is significant as it provides important information in real time. However, without fast processing of this information the sensors only provide a pointless stream of data. Hence, there is a need for automatic processing to extract meaningful information within an acceptable amount of time. This thesis describes techniques that allow a fast analysis of streaming data.

Real-time Locating Systems (RTLSs) or Radio Frequency Identification (RFID) systems provide several thousand position events per second. Event-based systems (EBSs) meet the high performance requirements and are a powerful technique for a reactive analysis of such data streams. Detection algorithms are divided up into several comparatively small event detectors (EDs), become inherently scalable through distribution, and are easy to maintain because of their reduced software complexity. Such event detectors communicate by messages, i.e., events, over an event processing middleware and are hierarchically linked to detect the final events of interest.

Since partial results, i.e., events, are generated at different points in the system, they are no longer timely synchronized. However, the algorithms implemented in the event detectors assume a timely ordered event stream as they try to detect interaction patterns. It is never a viable solution to process events out of order. This puts a significant workload on the underlying middleware. A-priori estimations of reordering parameters cannot include runtime information about object and system behavior, and thus the event loads, and must hence be set too conservatively in order to avoid system failures caused by ordering mistakes. But this often results in high detection latencies.

This thesis describes how to optimally adapt to variations in the observed environment to minimize detection delays at runtime. We show how out-of-order events are transparently reordered with low latency at each node so that event detectors may process them in a correct order. A speculative processing exploits unused system resources and reduces detection latency to a minimum. We further present a technique to migrate event detectors between nodes at runtime and show how to optimize the detection latency introduced by networking delays in a distributed system environment. Our system remains scalable as the number of trackable objects and measurement sensors grows.

The methods presented in this thesis compare very well with methods proposed so far. We show that we reduce latency of distributed event-based systems adaptively by integrating available system resources dynamically to fit the performance requirements at runtime for a continuously changing environment. At the same time, the semantics of event detector implementations remain untouched. Our system deals with any type of delays, does not need to be parameterized a-priori, and is fully scalable. The author is not aware of any published method performing significantly better.

 

 

Predictive load management in smart grid environments

Christopher Mutschler, Christoffer Löffler, Nicolas Witt, Thorsten Edelhäußer, Michael Philippsen

In: ACM (Ed.): Proceedings of the 8th ACM International Conference on Distributed Event-Based Systems (DEBS 2014), Mumbai, India, 2014, pp. 282-287. ISBN 978-1-2734-4


The DEBS 2014 Grand Challenge targets the monitoring and prediction of energy loads of smart plugs installed in private households. This paper presents details of our middleware solution and efficient median calculation, shows how we address data quality issues, and provides insights into our enhanced prediction based on hidden Markov models.

The evaluation on the smart grid data set shows that we process up to 244k input events per second with an average detection latency of only 13.3ms, and that our system efficiently scales across nodes to increase throughput. Our prediction model significantly outperforms the median-based prediction as it deviates much less from the real load values, and as it consumes considerably less memory.
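The paper's middleware is not shown here; a standard way to maintain the median of a stream of load values (an illustrative sketch, not the authors' implementation) uses two balanced heaps:

```python
import heapq

class RunningMedian:
    """Streaming median via two heaps: a max-heap of the lower half
    (stored as negated values) and a min-heap of the upper half."""
    def __init__(self):
        self.lo, self.hi = [], []

    def add(self, x):
        # Route through the lower heap, then rebalance so the halves
        # differ in size by at most one element.
        heapq.heappush(self.lo, -x)
        heapq.heappush(self.hi, -heapq.heappop(self.lo))
        if len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2.0

rm = RunningMedian()
for load in [10.0, 4.0, 7.0, 1.0, 12.0]:   # made-up plug load readings (W)
    rm.add(load)
print(rm.median())
```

Each insertion costs O(log n) and the median is available in O(1), which is what makes median queries feasible at event rates of hundreds of thousands per second.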

DEBS 2013 Grand Challenge: Soccer monitoring

Christopher Mutschler, Holger Ziekow, Zbigniew Jerzak

In: ACM (Ed.): Proceedings of the 7th ACM International Conference on Distributed Event-Based Systems (DEBS 2013), Arlington, Texas, USA, 2013, pp. 289-294. ISBN 978-1-4503-1758-0

The ACM DEBS 2013 Grand Challenge is the third in a series of challenges that seek to provide a common ground and evaluation criteria for a competition aimed at both research and industrial event-based systems. The goal of the Grand Challenge competition is to implement a solution to a real-world problem provided by the Grand Challenge organizers. The 2013 edition of the Grand Challenge focuses on real-time, event-based sports analytics. The 2013 Grand Challenge data set was collected during a football match played at a stadium in Nuremberg, Germany, and is complemented with a set of continuous analytical queries that provide detailed insight into the match statistics for both team managers and spectators.

 

Demo: do event-based systems have a passion for sports?

Christopher Mutschler, Nicolas Witt, Michael Philippsen

In: ACM (Ed.): Proceedings of the 7th ACM International Conference on Distributed Event-Based Systems (DEBS 2013), Arlington, Texas, USA, 2013, pp. 331-332. ISBN 978-1-4503-1758-0

The ubiquity of sensor data calls for automatic processing to extract valuable information. Realtime Locating Systems (RTLS) provide many parallel position data streams for interacting objects, and event-based systems are the method of choice to analyze them. We demonstrate a distributed event processing system for position stream data from a Realtime Locating System used for a soccer application. Our system can deal with insufficient knowledge of object and system behavior, and thus of the event data loads, at runtime. To do so, it dynamically adapts to variations in the observed environment: events are ordered with respect to their delays, event detectors are reconfigured and migrated between nodes at runtime, and the system scales as the number of trackable objects and sensors changes. We demonstrate the efficiency of our system architecture and provide tools to visualize data and to configure detection units at runtime.

 

Reliable speculative processing of out-of-order event streams in generic publish/subscribe middlewares

Christopher Mutschler, Michael Philippsen

In: ACM (Eds.): Proceedings of the 7th ACM International Conference on Distributed Event-Based Systems (Arlington, Texas, USA), 2013, pp. 147-158. ISBN 978-1-4503-1758-0.

In surveillance, sports, finances, etc., distributed event-based systems are used to detect meaningful events with low latency in high data rate event streams. Both known approaches to deal with the predominant out-of-order event arrival at the distributed detectors have their shortcomings: buffering approaches introduce latencies for event ordering, and stream revision approaches may result in system overloads due to unbounded retraction cascades. This paper presents a speculative processing technique for out-of-order event streams that enhances typical buffering approaches. In contrast to other stream revision approaches, our novel technique encapsulates the event detector, uses the buffering technique to delay events but also speculatively processes a portion of the buffered events, and adapts the degree of speculation at runtime to fit the available system resources so that detection latency becomes minimal.

Our technique outperforms known approaches on both synthetic data and real sensor data from a Realtime Locating System (RTLS) with several thousands of out-of-order sensor events per second. Speculative buffering exploits system resources and reduces latency by 40% on average.
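The interplay of buffering and speculation described above can be illustrated with a small sketch. All names (`SpeculativeBuffer`, `CountDetector`) are hypothetical, and the rollback here is the naive "discard all speculation on every arrival" variant; the paper's technique additionally adapts the degree of speculation to the available resources.

```python
import copy

class SpeculativeBuffer:
    """Sketch: events are buffered for ordering, but the still-unsafe
    suffix of the buffer is already fed to the detector speculatively.
    A late arrival triggers a rollback to the last safe checkpoint."""

    def __init__(self, detector, slack):
        self.detector = detector                    # stateful event detector
        self.slack = slack                          # ordering delay (time units)
        self.buffer = []                            # pending events, kept sorted
        self.clock = 0                              # largest timestamp seen
        self.checkpoint = copy.deepcopy(detector)   # state at the safe horizon

    def insert(self, ts, ev):
        self.clock = max(self.clock, ts)
        self.buffer.append((ts, ev))
        self.buffer.sort()
        safe = self.clock - self.slack
        # 1. Discard all speculative work: roll back to the safe state.
        self.detector = copy.deepcopy(self.checkpoint)
        # 2. Feed events that are now safely ordered, then checkpoint.
        while self.buffer and self.buffer[0][0] <= safe:
            self.detector.process(self.buffer.pop(0))
        self.checkpoint = copy.deepcopy(self.detector)
        # 3. Speculatively feed the remaining buffered events so results
        #    are available without waiting out the full slack.
        for item in self.buffer:
            self.detector.process(item)
        return self.detector
```

After each `insert`, `self.detector` reflects all events seen so far (including speculative ones), while `self.checkpoint` only reflects events that can no longer be invalidated by late arrivals.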

 

Evolutionary Algorithms that use Runtime Migration of Detector Processes to Reduce Latency in Event-Based Systems

Christoffer Löffler, Christopher Mutschler, Michael Philippsen

In: IEEE Computer Society (Eds.): Proceedings of the 2013 NASA/ESA Conference on Adaptive Hardware and Systems (AHS-2013) (Torino, Italy), 2013, pp. 31-38. ISBN 978-1-4673-6381-5.

Event-based systems (EBS) are widely used to efficiently process massively parallel data streams. In distributed event processing the allocation of event detectors to machines is crucial for both latency and efficiency, and a naive allocation may even cause a system failure. But since data streams, network traffic, and event loads cannot be predicted sufficiently well, the optimal detector allocation cannot be found a-priori and must instead be determined at runtime. This paper describes how evolutionary algorithms (EA) can be used to minimize both network and processing latency by means of runtime migration of event detectors. The paper qualitatively evaluates the algorithms on synthetic data streams in a distributed event-based system. We show that some EAs work efficiently even with large numbers of event detectors and machines and that a hybrid of Cuckoo Search and Particle Swarm Optimization outperforms others.
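The basic evolutionary loop behind such an allocation search can be sketched as follows. This is a toy illustration, not the paper's algorithm: the function name and the simplified fitness (load of the most-loaded machine) are assumptions, whereas the real system optimizes network plus processing latency online and includes Cuckoo Search and PSO hybrids.

```python
import random

def evolve_allocation(costs, n_machines, pop_size=30, gens=200, seed=0):
    """Toy evolutionary search for a detector-to-machine allocation.
    costs[d] is the processing load of detector d; an individual is a
    list mapping each detector to a machine index."""
    rnd = random.Random(seed)
    n = len(costs)

    def fitness(assign):                 # lower is better: makespan
        load = [0.0] * n_machines
        for d, m in enumerate(assign):
            load[m] += costs[d]
        return max(load)

    # Random initial population of allocations.
    pop = [[rnd.randrange(n_machines) for _ in range(n)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rnd.sample(survivors, 2)
            cut = rnd.randrange(1, n)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rnd.random() < 0.3:                # mutation = migrate one detector
                child[rnd.randrange(n)] = rnd.randrange(n_machines)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=fitness)
    return best, fitness(best)
```

In the runtime setting of the paper, applying a mutated allocation corresponds to actually migrating the affected detectors between machines.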

 

Dynamic Low-Latency Distributed Event Processing of Sensor Data Streams

Christopher Mutschler, Michael Philippsen

In: GI (Eds.): Proceedings of the 25th Workshop on Parallel Systems and Algorithms (PARS 2013) (Erlangen, Germany), 2013.

Event-based systems (EBS) are used to detect meaningful events with low latency in surveillance, sports, finances, etc. However, with rising data and event rates and with correlations among these events, processing can no longer be sequential but needs to be distributed. Naively distributing existing approaches not only causes failures, as their order-less processing cannot deal with the ubiquity of out-of-order event arrival; it also makes a minimal detection latency hard to achieve. This paper illustrates the combination of our building blocks towards a scalable publish/subscribe-based EBS that analyzes high data rate sensor streams with low latency: a parameter calibration that puts out-of-order events in order without a-priori knowledge of event delays, a runtime migration of event detectors across system resources, and an online optimization algorithm that uses migration to improve performance. We evaluate our EBS and its building blocks on position data streams from a Realtime Locating System in a sports application.

 

Runtime Migration of Stateful Event Detectors with Low-Latency Ordering Constraints

Christopher Mutschler, Michael Philippsen

In: IEEE (Eds.): Proceedings of the 2013 IEEE International Conference on Pervasive Computing and Communications Workshops (9th International Workshop on Sensor Networks and Systems for Pervasive Computing, San Diego, CA, USA), 2013, pp. 609-614. ISBN 978-1-4673-5076-1.

Runtime migration has been widely adopted to achieve several tasks such as load balancing, performance optimization, and fault-tolerance. However, existing migration techniques do not work for event detectors in distributed publish/subscribe systems that are used to analyze sensor data: because they do not respect the low-latency time-constraints, they order the streams incorrectly and leave the event detectors in erroneous states. This paper presents a safe runtime migration of stateful event detectors that respects low-latency time-constraints and seamlessly orders input events correctly on the migrated host. Event streams are only forwarded until timing delays are properly calibrated, the migrated event detector immediately stops processing after its state is transferred, and the processing overhead is negligible. On a Realtime Locating System (RTLS) we show that we can efficiently migrate event detectors at runtime between servers where other techniques would fail.
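The core handover step, snapshotting the detector state on the source and resuming on the target without reprocessing or skipping events, can be sketched as below. All class and function names are hypothetical; the actual protocol additionally calibrates ordering delays on the target before the handover.

```python
class StatefulDetector:
    """Minimal stateful detector: sums event values and enforces
    strictly increasing timestamps (the ordering constraint)."""

    def __init__(self):
        self.state = 0
        self.last_ts = -1

    def process(self, ts, value):
        assert ts > self.last_ts, "ordering violated"
        self.last_ts = ts
        self.state += value

    def snapshot(self):
        return (self.state, self.last_ts)

    @classmethod
    def restore(cls, snap):
        d = cls()
        d.state, d.last_ts = snap
        return d

def migrate(source, target_backlog):
    """Hand over: snapshot the stopped source detector, restore it on
    the target, and drop events from the target's forwarded backlog
    that the source already processed before the snapshot."""
    target = StatefulDetector.restore(source.snapshot())
    for ts, value in target_backlog:
        if ts > target.last_ts:     # skip duplicates seen pre-snapshot
            target.process(ts, value)
    return target
```

The timestamp check in `process` is what makes a naive migration fail: without deduplicating against the snapshot, the target would either reprocess forwarded events or see them out of order.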

 

Distributed Low-Latency Out-of-Order Event Processing for High Data Rate Sensor Streams

Christopher Mutschler, Michael Philippsen

In: IEEE Computer Society (Eds.): Proceedings of the 27th IEEE International Parallel & Distributed Processing Symposium (IPDPS) (Boston, Massachusetts, USA), 2013, pp. 1133-1144. ISBN 978-0-7695-4971-2.

Event-based Systems (EBS) are used to detect and analyze meaningful events in surveillance, sports, finances and many other areas. With rising data and event rates and with correlations among these events, sequential event processing becomes infeasible and needs to be distributed. Existing approaches cannot deal with the ubiquity of out-of-order event arrival that is introduced by network delays when distributing EBS. Order-less event processing may result in a system failure. We present a low-latency approach based on K-slack that achieves ordered event processing on high data rate sensor and event streams without a-priori knowledge. Slack buffers are dynamically adjusted to fit the disorder in the streams without using local or global clocks. The middleware transparently reorders the event input streams so that events can still be aggregated and processed to a granularity that satisfies the demands of the application. On a Realtime Locating System (RTLS) our system performs accurate low-latency event detection under the predominance of out-of-order event arrival and with a close to linear performance scale-up when the system is distributed over several threads and machines.
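The K-slack idea mentioned above, holding each event back until the stream clock has advanced K time units past its timestamp, while growing K to match the observed disorder, can be sketched in a few lines. The class name and the simple "K = worst disorder seen so far" adjustment rule are assumptions for illustration; the paper's dynamic adjustment is more refined.

```python
import heapq

class KSlackBuffer:
    """Minimal K-slack reorderer: an event with timestamp ts is emitted
    once the clock (largest timestamp seen) reaches ts + K."""

    def __init__(self, k=0):
        self.k = k          # current slack (time units)
        self.heap = []      # min-heap of pending (ts, payload)
        self.clock = 0      # largest timestamp seen so far
        self.max_delay = 0  # largest observed disorder, drives K

    def insert(self, ts, payload):
        # Grow K to cover the worst disorder observed so far.
        self.max_delay = max(self.max_delay, self.clock - ts)
        self.k = max(self.k, self.max_delay)
        self.clock = max(self.clock, ts)
        heapq.heappush(self.heap, (ts, payload))
        return self._flush()

    def _flush(self):
        # Emit, in timestamp order, all events whose slack has expired.
        out = []
        while self.heap and self.heap[0][0] + self.k <= self.clock:
            out.append(heapq.heappop(self.heap))
        return out
```

Note that a freshly started buffer with K = 0 can still leak the very first out-of-order event, which is why an initial calibration phase (or a conservative starting K, as in the test below) is needed in practice.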

 

Learning Event Detection Rules with Noise Hidden Markov Models

Christopher Mutschler, Michael Philippsen

In: Benkrid, K.; Merodio, D. (Eds.): Proceedings of the 2012 NASA/ESA Conference on Adaptive Hardware and Systems (AHS-2012) (Nuremberg, Germany), 2012, pp. 159-166. ISBN 978-1-4673-1914-0.

Complex Event Processing (CEP) is a popular method to monitor processes in several contexts, especially when dealing with incidents at distinct points in time. Specific temporal combinations of various events are often of special interest for automatic detection. For the description of such patterns, one can either implement rules in some higher programming language or use some Event Description Language (EDL). Both are complicated and error-prone for non-engineers, because they differ greatly from natural language. Therefore, we present a method by which a domain expert can simply signal the occurrence of a significant incident at a specific point in time. The system then infers rules for automatically detecting such occurrences later on. At the core of our approach is an extension of hidden Markov models (HMM) called noise hidden Markov models (nHMM) that can be trained with existing, low-level event data. The nHMM can be applied online without any intervention of programming experts. An evaluation on both synthetic and real event data shows the efficiency of our approach even under the presence of highly frequent, insignificant events and uncertainty in the data.