Tag Archives: Team Compiler Technology and High Performance Computing

HP-DLF

HP-DLF: High Performance Deep Learning Framework

The goal of HP-DLF is to provide researchers and developers in the “deep learning” domain with easy access to current and future high-performance computing systems. For this purpose, a new software framework will be developed that automates the highly complex parallel training of large neural networks on heterogeneous computing clusters. The focus is on scaling and energy efficiency, as well as high portability and user transparency. The aim is to scale the training of networks designed in existing frameworks, without additional user effort, to hundreds of compute nodes.

DLBB

DLBB: Deep Learning Building Blocks

DLBB is a project funded through the Cluster of Excellence on “Multimodal Computing and Interaction” (MMCI). The goal of the project is to research and define high-performance, cross-platform abstractions via meta-programming for deep learning frameworks. This will, in particular, include

  • how to describe the basic building blocks in a textbook-like style
  • how to combine and optimize a sequence of building blocks, and
  • how to run on different hardware (CPU, GPU, etc.).
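The three points above can be pictured with a small sketch. This is purely illustrative and not the DLBB API: the `Block` and `compose` names are hypothetical, standing in for building blocks described in a textbook-like style and a combination step where a real framework would fuse and optimize the sequence per backend.

```python
# Hypothetical sketch (not the DLBB API): building blocks are described
# declaratively, then combined; a real framework would fuse/optimize the
# sequence and dispatch it to different hardware backends.

class Block:
    """A building block described in a textbook-like style: y = f(x)."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def __call__(self, x):
        return self.fn(x)

def compose(*blocks):
    """Combine a sequence of blocks into one pipeline; this is the point
    where a meta-programming framework would fuse and optimize."""
    def pipeline(x):
        for b in blocks:
            x = b(x)
        return x
    return pipeline

# Textbook-style definitions of two toy blocks
relu  = Block("relu",  lambda x: [max(0.0, v) for v in x])
scale = Block("scale", lambda x: [2.0 * v for v in x])

net = compose(scale, relu)
print(net([-1.0, 3.0]))  # [0.0, 6.0]
```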

ProThOS

ProThOS: Programmable Taskflow Oriented Operating System

ProThOS is a research project funded by the German Federal Ministry of Education and Research (BMBF) through its funding directive “basic research for HPC software in high-performance computing”. Parallelization in the exascale era is a major challenge not only for the programming model but also for the execution environment: data dependencies are not recognized correctly, the execution overhead is too large, heterogeneity cannot be exploited, and so on. Efforts to address these issues in a smart intermediate layer fail due to the incurred overhead. ProThOS therefore brings programming and execution closer together: the data-flow-oriented programming language is aligned closely with the execution environment, and its language constructs are mapped directly onto the operating system. The language model remains C/C++ oriented, and the project will show that these principles can be mapped efficiently to heterogeneous infrastructures. Through integration into the operating system, the execution overhead is drastically reduced. Within ProThOS, DFKI mainly researches and develops the programming of such systems, investigating it on the basis of ray tracing and stencil pipelines.
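The core idea of taskflow-oriented execution can be illustrated with a minimal scheduler. This is an illustrative sketch only, not ProThOS code: the `run_dataflow` function and its task/dependency encoding are hypothetical, showing how tasks fire as soon as their data dependencies are available rather than in a programmer-fixed order.

```python
# Illustrative sketch only (not ProThOS code): in a dataflow execution
# model, each task fires once all of its input dependencies are available,
# so the execution order is derived from the data dependencies themselves.

def run_dataflow(tasks, deps, inputs):
    """tasks: name -> function over dependency results;
    deps: name -> list of dependency names; inputs: preset values."""
    done = dict(inputs)
    pending = set(tasks)
    while pending:
        ready = [t for t in pending if all(d in done for d in deps[t])]
        if not ready:
            raise RuntimeError("cyclic or unsatisfiable dependencies")
        for t in ready:
            done[t] = tasks[t](*[done[d] for d in deps[t]])
            pending.remove(t)
    return done

# a = x + y, b = a * 10; the scheduler discovers the order from the deps
result = run_dataflow(
    tasks={"a": lambda x, y: x + y, "b": lambda a: a * 10},
    deps={"a": ["x", "y"], "b": ["a"]},
    inputs={"x": 1, "y": 2},
)
print(result["b"])  # 30
```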

Project website: https://manythreads.github.io/prothos

Metacca

Metacca: Metaprogramming for Accelerators

Metacca is a research project funded by the German Federal Ministry of Education and Research (BMBF) through its funding directive “basic research for HPC software in high-performance computing”. The goal of Metacca is to extend the AnyDSL framework into a homogeneous programming environment for heterogeneous single- and multi-node systems. To this end, the existing programming language and compiler will be extended by an expressive type system and language features enabling efficient programming of accelerators. Significant aspects of this extension concern the modeling of memory on heterogeneous devices, the distribution of data to multiple compute nodes, and improving the precision and power of the partial evaluation approach.

Within the project, further support for distribution and synchronization of data-parallel programs will be built on top of these language enhancements as a library making use of AnyDSL’s partial evaluation features. Performance models and static analysis tools will be integrated into the AnyDSL tool chain to support the development of applications and the tuning of parameters. A runtime environment with built-in performance profiling will take care of resource management and system configuration. The resulting framework will be evaluated using applications from bioinformatics and ray tracing. The target platforms are single heterogeneous nodes and clusters with several accelerators.

Project website: https://metacca.github.io

REACT

The overall goal of REACT is a systematic, safe, and validatable approach to the development, training, and use of digital reality, with the aim of ensuring that autonomous systems act safely and reliably – especially in critical situations. In order to reach this goal, we use methods and concepts of machine learning – especially Deep Learning and (Deep) Reinforcement Learning (RL) – to learn lower-dimensional submodels of the real world. From these submodels we (semi-)automatically compile complex, high-dimensional models in order to identify and simulate the entire range of critical situations. By means of digital reality, we virtually synthesize the otherwise missing sensor data of critical situations and train autonomous systems so that they are able to handle critical situations safely and confidently. The aim of the project is to enhance the capabilities of autonomous systems. Therefore, we continuously and systematically validate and align synthetic data with reality and adapt the models where necessary.


AnyDSL

AnyDSL – A Framework for Rapid Development of Domain-Specific Libraries

AnyDSL is a framework for domain-specific libraries (DSLs). These are implemented in our language Impala. In order to achieve high performance, Impala partially evaluates any abstractions these libraries might impose. Partial evaluation and other optimizations are performed on AnyDSL’s intermediate representation Thorin.
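The effect of partially evaluating library abstractions can be illustrated conceptually. This is not Impala or Thorin code: the `make_convolve` helper is a hypothetical stand-in, using runtime code generation to mimic how specializing a generic stencil loop for a known filter removes the abstraction's overhead.

```python
# Conceptual illustration only (not Impala/Thorin): specializing a generic
# 1-D convolution for fixed filter weights, much as a partial evaluator
# would unroll and constant-fold the library abstraction at compile time.

def make_convolve(weights):
    """Generate a convolution function specialized for the given weights,
    with the inner loop over the filter fully unrolled."""
    k = len(weights)
    body = " + ".join(f"{w!r} * x[i + {j}]" for j, w in enumerate(weights))
    src = f"def conv(x):\n    return [{body} for i in range(len(x) - {k - 1})]"
    ns = {}
    exec(src, ns)  # runtime code generation stands in for compiler-level PE
    return ns["conv"]

blur = make_convolve([0.25, 0.5, 0.25])
print(blur([0.0, 4.0, 0.0, 4.0]))  # [2.0, 2.0]
```

The generated `conv` contains no loop over the weights and no reference to the `weights` list at all; the abstraction cost of the generic stencil has been evaluated away.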

More information can be found on the AnyDSL website: http://anydsl.github.io

MotionGraph

Morphable Graph [1] is a generative, graph-based approach to data-driven motion modeling and synthesis. Motion capture data is represented by a directed graph, and motion synthesis tasks are converted into graph search problems.


INTERACT

INTERACT – Interactive Manual Assembly Operations for the Human-Centered Workplaces of the Future

In order to be competitive on a global scale, European factories should be operated by a highly skilled workforce supported by advanced automation and IT tools. The European research project INTERACT aims to capture workers’ knowledge of executing manual assembly tasks and include it in the digital tools used to support the design, verification, validation, modification, and continuous improvement of human-centered, flexible assembly workplaces.


Intel Visual Computing Institute

The Intel Visual Computing Institute (Intel VCI) is a joint initiative by the Intel Corporation, Saarland University, the Max Planck Institute for Informatics, the Max Planck Institute for Software Systems and the German Research Center for Artificial Intelligence (DFKI).


INVERSIV

INVERSIV: Integrated Verification, Simulation and Visualization for Industrial Applications

Industry 4.0 is a main topic in the high-tech strategy of the German government, aimed at enabling fundamental innovation in industry. The idea behind Industry 4.0 is that “driven by the Internet, the real and virtual worlds are growing closer and closer together to form the Internet of Things. Industrial production of the future will be characterized by the strong individualization of products under the conditions of highly flexible (large series) production, the extensive integration of customers and business partners in business and value-added processes, and the linking of production and high-quality services leading to so-called hybrid products” (BMBF). Together with the increasing requirements of high flexibility, reduced delivery time, and short product life cycles, the Industry 4.0 concept represents the highly dynamic, individualized, and networked environment of modern, digital factories.

There is a large number of challenges on the IT side for realizing Industry 4.0: (i) The high flexibility of production processes requires the ability to quickly redesign and adapt production lines and all supporting processes in a company. (ii) The high variability of products with small batch sizes requires novel, highly adaptable ways to monitor the production line for quality and errors while providing support and training for workers that adapts to the current situation. (iii) To support quick changes we must move from fixed, specialized networks and interfaces to flexible architectures and service interfaces that can easily be reconfigured and support the low-latency, high-volume communication needed in industrial environments.

The main objective of the INVERSIV project is the ability to build fully functional models of systems (such as production lines) and, from those models, to derive the data needed to monitor, predict, and, where possible, suggest corrections to the operation of those systems based on live data from the real systems (dual reality).

INVERSIV aims at processing and using real-time data streams in production scenarios for visualizing the state of production facilities, detecting failures and problematic situations, and proposing and visualizing appropriate maintenance and repair actions. Once an error situation has been detected (or predicted), actions to resolve it have to be planned. We will explore the setup and evaluation of alternative models in terms of hybrid automata and verify their proper functionality with an extended hybrid verification system. The planning stage will also explore maintenance and repair actions generated by involving human or intelligent virtual characters, e.g. for installing an alternative model. This again highlights the need for a common data representation and communication mechanism between the modules (here, multi-agent planning and hybrid verification).


The INVERSIV project is funded by the Federal Ministry of Education and Research (FKZ 01IW14004).

Contact: Ingo Zinnikus