All posts by Christian Müller

iMotion

Located on our university campus, iMotion Germany GmbH is an advanced engineering think tank for iMotion in Suzhou, China. The Tier-1 supplier provides hardware and software for ADAS and autonomous driving. ASR collaborates with iMotion by supervising PhD candidates who are on the company's payroll.

OpenDS



OpenDS is one of the most popular open-source driving simulators worldwide. Originally developed to facilitate studies in the driver-distraction domain, OpenDS has become a tool for investigating autonomous driving technology. The latest release, OpenDS 5.0, was published in 2018. http://opends.dfki.de

Autonomous Driving

Autonomous driving (highly / fully automated driving) is one of four application domains we develop AI technology for. With regard to environment perception, we apply our Digital Reality principle, i.e., we develop AI that is capable of generating synthetic data, focusing on human behavior in urban traffic situations. For trajectory planning, we apply our hybrid learning technology, leading to more robust and trustworthy systems. Moreover, our work on the AI platform will be applied in the autonomous driving domain.

Autonomous Driving


The Autonomous Driving (AD) team conducts research on AI-based environment perception and trajectory planning for autonomous vehicles (Level 4 and above).

We consider both subsymbolic AI techniques used in learning systems (machine learning, deep learning) and symbolic techniques such as reasoning or constraint-based methods.

With respect to learning systems, we work with both real and synthetic data in an idealized implementation of the Digital Reality Principle, which is the thematic guideline of the research area ASR.

Continue reading Autonomous Driving

– REACT

The overall goal of REACT is a systematic, safe, and validatable approach to developing, training, and using a digital reality that ensures safely and reliably acting autonomous systems, especially in critical situations. To reach this goal, we use methods and concepts of machine learning, especially deep learning and (deep) reinforcement learning (RL), to learn lower-dimensional submodels of the real world. From these submodels we (semi-)automatically compile complex, high-dimensional models in order to identify and simulate the entire range of critical situations. By means of the digital reality, we virtually synthesize the otherwise missing sensor data of critical situations and train autonomous systems so that they can handle critical situations safely and confidently. The aim of the project is to enhance the capabilities of autonomous systems. We therefore continuously and systematically validate and align synthetic data with reality and adapt the models where necessary.

Continue reading – REACT

Fastlane

Nowadays, Web applications, including 3D virtual worlds, real-time simulations, and virtual reality applications, demand a high amount of processing power. JavaScript, however, is inherently single-threaded, and computationally intensive tasks must avoid taking exclusive control of this thread for prolonged periods so as not to stall the entire Web page.

The JavaScript APIs that enable hardware-supported parallelism, such as SIMD.js and Web Workers, are by design low-level APIs and subject to hardware-specific limitations, unfamiliar programming idioms, and performance-portability issues when they are not available on every platform.

Fastlane solves these problems by combining the results of two successful projects, Xflow and shade.js, to provide a compiler-driven, adaptive data-flow programming framework for parallel data processing on the Web.

It utilizes data-flow programming, a proven and well-known programming idiom for data processing, to define a series of data transformations, each written in a valid subset of JavaScript. The provided data-flow graph is then analyzed and compiled into optimized JavaScript or GLSL shader code, taking into account all APIs available on the current system. This relieves the developer of the need to define multiple versions of the same computation for different combinations of available APIs and platform features.
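To illustrate the data-flow idiom itself, here is a minimal, self-contained sketch in plain JavaScript: pure transformation functions are registered as graph nodes and wired by their dependencies, then the graph is evaluated from source to sink. All names (`createFlow`, `node`, `run`) are hypothetical and for illustration only; they are not Fastlane's or Xflow's actual API, and a real system would compile such a graph rather than interpret it.

```javascript
// Hypothetical data-flow sketch (names are illustrative, not Fastlane's API).
// Each node is a pure transformation; `deps` lists the indices of its inputs.
function createFlow() {
  const nodes = [];
  return {
    // Register a transformation and return a handle to reference it later.
    node(fn, deps = []) {
      nodes.push({ fn, deps });
      return nodes.length - 1;
    },
    // Evaluate the graph in registration order (a valid topological order
    // here, since nodes may only depend on previously registered nodes).
    run(inputs) {
      const results = [];
      nodes.forEach(({ fn, deps }, i) => {
        const args = deps.map((d) => results[d]);
        results[i] = deps.length ? fn(...args) : fn(inputs);
      });
      return results[nodes.length - 1]; // value of the sink node
    },
  };
}

// Example graph: normalize a 2D vector, then scale it by 10.
const flow = createFlow();
const src = flow.node((data) => data); // source node passes input through
const norm = flow.node((v) => {
  const len = Math.hypot(...v);
  return v.map((x) => x / len);
}, [src]);
flow.node((v) => v.map((x) => x * 10), [norm]); // sink node

const out = flow.run([3, 4]); // → [6, 8]
```

Because every node is a pure function over its inputs, a compiler is free to fuse adjacent transformations, vectorize them, or move them to a worker or the GPU without changing the result, which is the property the data-flow idiom buys.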