Category Archives: Projects

FI-NEXT – Bringing FIWARE to the NEXT Step

The FI-NEXT project continues a series of ASR research activities within the scope of FIWARE. FI-NEXT now concentrates on unifying interfaces and data models and on optimizing the communication between the various Generic Enablers, in order to consolidate the results of FI-WARE and FI-CORE into a unified open-source infrastructure. Within FI-NEXT, the ASR research department will focus on Linked Data as an emerging mechanism for more powerful and flexible interfaces. Both the interfaces to services and the models used to transfer data will be described semantically in a standardized language. The goal is to further simplify the design and deployment of distributed applications in the context of FIWARE through this semantic description of services and interfaces. The work in FI-NEXT relates directly to work in the Advanced Web-based User Interface chapter of FIWARE, e.g. through the further development of the novel Synchronization Generic Enabler (FiVES) created in FI-CORE, and of XML3D as the 3D User Interface GE. Results from the ARVIDA project also feed directly into FI-NEXT in the form of the semantic service descriptions mentioned above. The aim is to simplify the development of end-to-end application solutions spanning the range from IoT sensors to interactive visualizations in apps, covering both the design of the application and the deployment of the required service infrastructure. Beyond this, the working group around Prof. Slusallek intends, depending on election results in the open-source community, to continue serving as leader and architect of the WebUI chapter and as co-chair of the Technical Steering Committee.

Contact: Dipl.-Inf. René Schubotz
Homepage: https://forge.fiware.org/projects/fi-next/

HP-DLF

HP-DLF: High Performance Deep Learning Framework

The goal of HP-DLF is to provide researchers and developers in the “deep learning” domain with easy access to current and future high-performance computing systems. For this purpose, a new software framework will be developed that automates the highly complex parallel training of large neural networks on heterogeneous computing clusters. The focus is on scaling and energy efficiency as well as on high portability and user transparency. The aim is to scale the training of networks designed in existing frameworks, without additional user effort, across a three-digit number of compute nodes.

DLBB

DLBB: Deep Learning Building Blocks

DLBB is a project funded through the Cluster of Excellence on “Multimodal Computing and Interaction” (MMCI). The goal of the project is to research and define high-performance, cross-platform abstractions via meta-programming for deep learning frameworks. This will, in particular, include (see the sketch after this list):

  • how to describe the basic building blocks in a textbook-like style
  • how to combine and optimize a sequence of building blocks, and
  • how to run on different hardware (CPU, GPU, etc.).
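
A minimal TypeScript sketch may help picture the building-block idea. All names below are hypothetical: DLBB does not prescribe this API, and a real meta-programming framework would generate optimized, hardware-specific kernels rather than interpret the pipeline as done here.

```typescript
// Hypothetical sketch: building blocks as composable, backend-agnostic ops.
interface Tensor {
  shape: number[];
  data: Float32Array;
}

// A building block maps tensors to tensors; its "textbook" definition is
// kept separate from how a concrete backend (CPU, GPU, ...) executes it.
type Block = (x: Tensor) => Tensor;

// Combining a sequence of blocks; a meta-programming framework could fuse
// or reorder this pipeline before emitting hardware-specific kernels.
const compose = (...blocks: Block[]): Block =>
  (x) => blocks.reduce((acc, block) => block(acc), x);

// Textbook-style definition of a ReLU block (naive reference semantics).
const relu: Block = (x) => ({
  shape: x.shape,
  data: x.data.map((v) => Math.max(0, v)),
});

// Usage: a tiny pipeline built from blocks.
const pipeline = compose(relu, relu);
```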

ProThOS

ProThOS: Programmable Taskflow Oriented Operating System

ProThOS is a research project funded by the German Federal Ministry of Education and Research (BMBF) through a funding directive for “basic research for HPC software in high-performance computing”. Parallelization in the exascale era is a major challenge not only from the perspective of the programming model but also for the execution environment: data dependencies are not recognized correctly, the execution overhead is too large, heterogeneity cannot be exploited, and so on. Efforts to address these issues in a smart intermediate layer fail due to the incurred overhead. ProThOS therefore brings programming and execution closer together: it aligns the data-flow-oriented programming language closely with the execution environment and maps the language constructs onto the operating system. The language model remains C/C++-oriented, and it will be shown that these principles can be mapped efficiently onto heterogeneous infrastructures. Through the integration into the operating system, the execution overhead is drastically reduced. Within ProThOS, DFKI mainly researches the programming of such systems and investigates it on the basis of ray tracing and stencil pipelines.
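
ProThOS itself targets a C/C++-oriented language model with OS-level scheduling; purely as a conceptual illustration of taskflow execution (all names here are hypothetical), a dataflow graph can be sketched as follows, with tasks becoming runnable as soon as their inputs are available:

```typescript
// Conceptual taskflow sketch: a task runs once all of its data
// dependencies have produced results (modeled here with Promises;
// in ProThOS the operating system itself would do the scheduling).
function task<A, T>(deps: Promise<A>[], body: (inputs: A[]) => T): Promise<T> {
  return Promise.all(deps).then(body);
}

// a and b have no dependencies and may execute in parallel;
// c becomes runnable only when both results exist.
const a = task([], () => 21);
const b = task([], () => 2);
const c = task([a, b], ([x, y]) => x * y);

c.then((result) => console.log(result)); // 42
```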

Project website: https://manythreads.github.io/prothos

Metacca

Metacca: Metaprogramming for Accelerators

Metacca is a research project funded by the German Federal Ministry of Education and Research (BMBF) through a funding directive for “basic research for HPC software in high-performance computing”. The goal of Metacca is to extend the AnyDSL framework into a homogeneous programming environment for heterogeneous single- and multi-node systems. To this end, the existing programming language and compiler will be extended with an expressive type system and language features that enable efficient programming of accelerators. Significant aspects of this extension concern the modeling of memory on heterogeneous devices, the distribution of data across multiple compute nodes, and improvements to the precision and power of the partial evaluation approach.

Within the project, further support for distribution and synchronization of data-parallel programs will be built on top of these language enhancements, as a library that makes use of AnyDSL’s partial evaluation features. Performance models and static analysis tools will be integrated into the AnyDSL tool chain to support the development of applications and the tuning of parameters. A runtime environment with built-in performance profiling will take care of resource management and system configuration. The resulting framework will be evaluated using applications from bioinformatics and ray tracing. The target platforms are single heterogeneous nodes and clusters with several accelerators.
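
AnyDSL performs partial evaluation inside the compiler; the following TypeScript snippet only mimics the core idea at run time, to illustrate what specializing a program on statically known inputs means (names are illustrative, not AnyDSL code):

```typescript
// Generic function: the exponent n is a run-time value.
function pow(base: number, n: number): number {
  return n === 0 ? 1 : base * pow(base, n - 1);
}

// Partial evaluation emulated by hand: once n is known, the recursion
// unfolds completely, leaving straight-line code like (b) => b * b * b.
// A partial evaluator derives such specializations automatically.
function specializePow(n: number): (base: number) => number {
  if (n === 0) return () => 1;
  const rest = specializePow(n - 1);
  return (base) => base * rest(base);
}

const cube = specializePow(3);
console.log(pow(2, 3), cube(2)); // both print 8
```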

Project website: https://metacca.github.io

REACT

The overall goal of REACT is a systematic, safe, and validatable approach to developing, training, and using digital reality, with the aim of ensuring that autonomous systems act safely and reliably, especially in critical situations. To reach this goal, we use methods and concepts of machine learning, in particular deep learning and (deep) reinforcement learning (RL), to learn lower-dimensional submodels of the real world. From these submodels we compile, (semi-)automatically, complex high-dimensional models in order to identify and simulate the entire range of critical situations. By means of digital reality, we virtually synthesize the otherwise missing sensor data of critical situations and train autonomous systems so that they are able to handle those situations safely and confidently. Since the aim of the project is to enhance the capabilities of autonomous systems, we continuously and systematically validate and align synthetic data with reality and adapt the models where necessary.


CIMPLEX

Results

In the EU project CIMPLEX, the research department Agents and Simulated Reality (ASR) created solutions for visualizing large amounts of diverse data, enabling analysis in the field of epidemic disease spreading.
The data on disease spreading comes from two sources: recorded data, such as travel information, and simulated data.
This data can be visualized using different visualization techniques, e.g. network graphs, to allow easy detection of relationships and of disease spreading over time. All views of the data can be synchronized.
Moreover, users can work with the data collaboratively on different devices, from classical desktop web applications to more modern devices such as tablets or VR and AR headsets.
Through data-flow programming and novel technologies, both data-parallel and thread-parallel processing is made possible; hardware-accelerated computing and rendering further increase performance.

Motivation

Epidemics are an international problem and more likely today than ever before. Their course is complex and difficult to foresee. The past has shown how important a quick reaction is for the effectiveness of countermeasures.
One example from the project is the spreading of influenza. This disease usually occurs in winter, with dry air and the use of air conditioning fostering its growth. International travel then increases its spread across boundaries.
It is essential to gain a comprehensive and up-to-date picture of the crisis situation from the outset, to analyze the situation, and to communicate the necessary measures quickly and purposefully.

Goals

One aim of the project was to improve communication with the affected persons while at the same time delivering effective and purposeful disaster management.
New communication technologies based on social networks and smartphones can help to deliver and link the necessary information.
The tools and computer models that were developed were intended to inform decision-makers and citizens in real time and to support them in combating disease spread.
The technologies employed were threefold: first, large-scale, realistic, data-driven models to predict disease spread; second, participatory data collection to obtain information about disease outbreaks early on; and third, advanced methods for big-data analysis and visualization.
In the European research project CIMPLEX (Bringing Citizens, Models and Data together in Participatory, Interactive Social Exploratories), such a new system was developed.
CIMPLEX exploratory concept

Challenges

CIMPLEX combined information from a variety of sources such as social networks, cell phone positions, the social and economic environment, and the experiences and opinions of eyewitnesses.
New models for explanation, visualization, and interaction with data and models, on both the individual and the collective level, were to be developed. Theoretical, methodological, and technological advances were aimed at in order to better foresee, explain, and handle disease spreading.
All this was to be molded into a broadly usable ICT platform.

Solution

Objectives

The proposed solution should be usable by a wide range of users: from policy makers trying to curb disease spread, to researchers developing new models for predicting disease spread, up to citizens.
The visualization should be able to yield different views on the underlying data and models and allow for collaborative analysis.
The visualization must be able to handle vast amounts of geo-referenced, time-dependent data.
Moreover, a custom deployment for domain-specific use cases must be possible in order to maintain the flexibility of the system.

Requirements

To maximize the range of possible users, the web was chosen as the target platform. It is widespread, most users are familiar with it, it is available on many devices, and it supports classical 2D visualization as well as 3D and VR.
It also supports a wide range of user interactions: besides the usual keyboard-and-mouse interaction, it supports touch devices and more elaborate VR controllers.
Since the planned system would build on a plethora of possible data sources, a service-oriented architecture (SOA) was proposed, which eases the reuse of components and the independent development of services.

Architecture

The architecture of the final system consisted of three layers.
The first layer was data acquisition; it delivered data from social-sensing components as well as from participatory data collection.
The second layer was models and simulation; it used epidemic-simulation web services as well as integrated computational models.
The third layer was the exploratory layer, which allowed for the visual exploration of the combined data sources from the first two layers.
CIMPLEX three-layer architecture
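
As a rough illustration, the three layers can be thought of as service interfaces along the following lines. This is a hypothetical sketch: the actual CIMPLEX services were not shaped exactly like this, and all type and method names are assumptions.

```typescript
// Hypothetical data shapes for the sketch.
interface Observation { region: string; time: string; cases: number; }
interface SimulationRun { id: string; series: Observation[]; }

// Layer 1: data acquisition (social sensing, participatory data collection).
interface DataAcquisition {
  fetchObservations(region: string): Promise<Observation[]>;
}

// Layer 2: models and simulation (epidemic-simulation web services,
// integrated computational models).
interface SimulationService {
  run(model: string, params: Record<string, number>): Promise<SimulationRun>;
}

// Layer 3: exploratory, visually combining sources from the layers below.
interface Exploratory {
  show(data: Observation[] | SimulationRun): void;
}
```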

Technologies

Visualization Framework

The Visualization Framework was developed in a cooperation between the University of Stuttgart and DFKI. The highly customizable, web-based, open-source framework allows users to connect to different web services and to analyze the data with a large variety of interactive visualizations that are connected via brushing and linking. The web-based applications created with the framework run on multiple devices such as smartphones, tablets, desktop computers, and large display walls.

The Visualization Framework runs on a multitude of devices, e.g. display walls.

An interactive live demo can be found here: https://github.com/cimplex-project/visualization-framework
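
The brushing-and-linking mechanism can be pictured as all views subscribing to one shared selection model, as in this minimal sketch (hypothetical names, not the framework's actual API):

```typescript
// Minimal brushing-and-linking sketch: every view observes a shared
// selection; brushing in one view updates all linked views.
type Listener = (selected: ReadonlySet<string>) => void;

class SelectionModel {
  private selected = new Set<string>();
  private listeners: Listener[] = [];

  subscribe(listener: Listener): void {
    this.listeners.push(listener);
  }

  brush(ids: string[]): void {
    this.selected = new Set(ids);
    this.listeners.forEach((l) => l(this.selected));
  }
}

// Two linked views: brushing regions in the map highlights them in the graph.
const selection = new SelectionModel();
selection.subscribe((s) => console.log("map view repaints:", [...s]));
selection.subscribe((s) => console.log("graph view repaints:", [...s]));
selection.brush(["DE", "IT"]);
```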

Globe Library

DFKI developed a WebGL-based standalone library to visualize the different simulation data on a 3D globe in the browser. The library contains a lightweight interface to create a 3D globe that supports 2D and 3D projections as well as custom tilesets with dynamic zoom levels, with the ability to add, remove, and change transitions and basin values efficiently and in real time. The library makes use of the available platform-specific input methods, e.g. touch on mobile, where the user can use pinch and drag gestures to interact with the globe. It utilizes Fastlane, a JavaScript library for data-flow-based parallel data processing on the web, which is being developed by DFKI in collaboration with an industry partner in a related project. Fastlane offers an abstraction over low-level APIs that expose hardware parallelism (SIMD.js, GLSL), thus enabling its use even by non-experts. It is used, in particular, to implement the real-time processing of huge datasets for visualization with WebGL.

A high-performance JavaScript library for visualizing data on an interactive globe

An interactive live demo can be found here: https://cimplex-project.github.io/cimplex-globe.
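
A usage sketch of the kind of interface described above might look as follows. Note that the class and all method names here are assumptions made for illustration, not the real cimplex-globe API; the stand-in class only logs what a renderer would do.

```typescript
// Illustrative stand-in, NOT the actual cimplex-globe API.
interface Transition { from: number; to: number; volume: number; }

class GlobeSketch {
  constructor(private projection: "2d" | "3d") {}

  // Per-basin values drive the coloring of regions on the globe.
  setBasinValues(values: Record<number, number>): void {
    console.log(`recolor basins (${this.projection})`, values);
  }

  // Transitions are rendered as animated flows between basins.
  addTransition(t: Transition): void {
    console.log("animate flow", t);
  }

  // The library supports switching between 2D and 3D projections.
  setProjection(p: "2d" | "3d"): void {
    this.projection = p;
  }
}

const globe = new GlobeSketch("3d");
globe.setBasinValues({ 1001: 0.3, 1002: 0.9 });
globe.addTransition({ from: 1001, to: 1002, volume: 120 });
globe.setProjection("2d");
```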

Decoder Library

DFKI, in collaboration with ISI, developed a user-friendly, open-source JavaScript library for fast data exchange. The library utilizes parallel data processing (web workers) and offers an asynchronous interface. It is able to create, remove, and decode simulation data using the GLEAMviz web service.

The GLEAMviz decoder library acts as middleware between the simulation services and the Visualization Framework.

The GLEAMviz web service hosts simulation data in various formats, all of which contain compressed binary data for fast exchange that needs to be decoded on the client before it can be processed and visualized. By using parallel data processing, the decoding time can be greatly reduced. The library offers decoders for GLEAMviz data from ISI as well as for agent-based model datasets from ISI and FBK, including movement data from DFKI.
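
The decoding pattern, moving the work off the main thread while exposing an asynchronous interface, can be sketched roughly like this (hypothetical code, not the library's real internals; the decode step is a stand-in):

```typescript
// Sketch: decode compressed binary simulation data in a web worker so the
// UI thread stays responsive.
const workerSource = `
  self.onmessage = (e) => {
    const decoded = new Float32Array(e.data); // real code would decompress here
    self.postMessage(decoded, [decoded.buffer]);
  };
`;
const worker = new Worker(URL.createObjectURL(new Blob([workerSource])));

// Asynchronous interface: callers await decoded data instead of blocking.
function decode(buffer: ArrayBuffer): Promise<Float32Array> {
  return new Promise((resolve) => {
    worker.onmessage = (e) => resolve(e.data as Float32Array);
    worker.postMessage(buffer, [buffer]); // transfer ownership, no copy
  });
}
```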

Configurator

We also developed a configurator application that tremendously reduces the time needed to configure and deploy a custom version of the Visualization Framework, including all data and simulation services. On a web page, users can now select which views, data services, and simulation services they want to include in their custom deployment. The server backend then creates a unique executable file based on these selections and offers it for download. The downloaded executable, based on node.js and Docker, then fully automatically installs all dependencies, including data and simulation services, locally on the client machine. Because Docker is used, the installation is isolated and does not affect or modify the client host system. The configurator creates executables for Windows, Mac OS X, and Linux.

The three steps of the web-based CIMPLEX Configurator: the user selects which views, data services, and simulation services to deploy locally.
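
The selection a user makes could be captured in a small configuration object along these lines; the field names and URL encoding below are assumptions made for illustration, not the configurator's actual format:

```typescript
// Hypothetical shape of a deployment selection before the backend
// builds the matching executable (all field names are assumptions).
interface DeploymentConfig {
  views: string[];               // visualizations to bundle
  dataServices: string[];        // data sources to install locally
  simulationServices: string[];  // simulation backends, run as Docker images
  platform: "windows" | "macos" | "linux";
}

const config: DeploymentConfig = {
  views: ["globe", "network-graph"],
  dataServices: ["travel-data"],
  simulationServices: ["gleamviz"],
  platform: "linux",
};

// Selections could be carried in the page URL, e.g. ?views=...&platform=...
console.log(new URLSearchParams({
  views: config.views.join(","),
  platform: config.platform,
}).toString());
```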

Supplemental

  • Title: Bringing Citizens, Models and Data together in Participatory, Interactive Social Exploratories
  • Run Time: 01.01.2015 – 31.12.2017
  • Funding:
    • FET Proactive Global Systems Science (GSS)
    • Grant agreement no: 641191
  • Partners
    • Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, Germany
    • Eidgenössische Technische Hochschule Zürich, Switzerland
    • Universität Stuttgart, Germany
    • University College London, United Kingdom
    • Közép-európai Egyetem (Central European University), Hungary
    • Fondazione Istituto per l’Interscambio Scientifico, Italy
    • Consiglio Nazionale delle Ricerche, Italy
    • Fondazione Bruno Kessler, Italy
  • Media
    • YouTube-Channel
      • https://www.youtube.com/channel/UC0pWNHUogTZRHuAspF-5WKw
    • Video Globe
      • https://www.youtube.com/channel/UC0pWNHUogTZRHuAspF-5WKw

Hybr-iT

Hybrid and intelligent human-robot collaboration – hybrid teams in versatile, cyber-physical production environments

The aim of the Hybr-iT joint research project funded by the Federal Ministry of Education and Research (BMBF) is to build and test hybrid teams of humans and robots working together with software-based assistance systems in intelligent environments in industrial manufacturing. Based on a holistic approach to the various disciplines of human-robot collaboration, intelligent planning and simulation environments, assistance systems and knowledge-based robotics, workers in the production process are supported by robots in such a way that this intensive human-robot cooperation is convenient, safe and efficient.


Hybr-iT researches and evaluates the components required for planning and optimizing hybrid teams in an industrial context, both in terms of their integration into existing IT and production systems and as necessary for their control in production operations. From an IT perspective, this involves highly distributed systems with very heterogeneous subsystems (such as plant and robot controls and safety, logistics, database, assistance, tracking, simulation, and visualization systems), which are implemented together in a comprehensive resource-oriented architecture (ROA). ASR contributes to the ROA and develops the simulation environment for hybrid human-robot teams, using AJAN and Motion Synthesis.
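
As a rough, hypothetical illustration of the resource-oriented idea (names and routes are invented for this sketch, not Hybr-iT's actual interfaces), each subsystem's state is exposed as a uniformly addressable resource:

```typescript
// Resource-oriented sketch: heterogeneous subsystems expose their state
// behind uniform, addressable resources instead of bespoke protocols.
interface Resource<T> {
  readonly uri: string;
  read(): Promise<T>;
}

interface ArmPose { joints: number[]; }

// One subsystem (a robot controller) wrapped as a resource.
const robotArmPose: Resource<ArmPose> = {
  uri: "/cell1/robot/arm/pose",
  async read() {
    return { joints: [0.0, 1.2, -0.5] }; // would query the real controller
  },
};

// Clients such as planning or visualization only need the uniform interface.
robotArmPose.read().then((pose) => console.log(robotArmPose.uri, pose));
```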

The Hybr-iT project is funded by the Federal Ministry of Education and Research.

Contact: Ingo Zinnikus