Training course "Simulation on High Performance Computers - Simulation" @ HLRS

begin
11.Mar.2019
end
07.Jun.2019
venue
HLRS Stuttgart

Electric cars, quieter airplanes, more efficient power plants — achieving many of today's technological goals is inconceivable without simulation. Fields in which such simulations are being applied are highly diverse, including the automobile industry, air and space travel, meteorology, wind energy and medicine, to name just a few.

High-performance computing (HPC) has opened the door for modern simulation and computational experimentation, and increasingly plays a decisive role in product development and design. Indeed, predicting the physical behavior of products is often so challenging that it can't be done without a supercomputer. At the same time, however, running simulations on HPC systems is anything but trivial, and engineers using supercomputers face a number of complex challenges in doing so.

How should simulation processes be organized, and how can they be optimized in the context of high-performance computing? The module Simulation is designed to raise awareness of problems in designing simulations and to provide a basic understanding of the foundations of this methodology. The course will approach this goal from two perspectives: 1. Why should companies — particularly small and medium-sized enterprises (SMEs) — be using simulation? 2. How can high-performance computing help you when your simulations reach the limits of what is practical on smaller computing systems?

Module Contents

  • What is simulation?
  • The philosophy of simulation: from physical problem to model to result (the model cascade)
  • Atomic simulations
  • Structural mechanics
  • CFD (computational fluid dynamics)
  • Statistical simulation (Monte Carlo simulation)
  • Optical simulations
  • Numerical methods
  • Simulation as a process
  • Recognizing errors
  • Deriving requirements from simulation experiments
  • Visualization
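To give a flavour of one of these topics, statistical (Monte Carlo) simulation estimates a quantity by repeated random sampling. A minimal illustrative sketch in Python, assuming nothing beyond the standard library (this is not course material, only an editorial example):

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling points in the unit square and counting
    how many fall inside the quarter unit circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # close to 3.14159
```

The estimate converges slowly (error shrinks as 1/sqrt(N)), which is exactly why such simulations benefit from large computing resources.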

Flexible Learning

This course module is offered in a blended learning format, combining self-learning content and exercises with traditional classroom instruction. In this way you can structure your learning flexibly, balancing the time required for your continuing education with work and family responsibilities.

Self-learning components will be complemented by regular online meetings in a virtual classroom, which will take place on Monday evenings.

Time Requirement

The time requirement for each course module is approximately 125 hours, spread over 11 weeks. It consists of:

  • approximately 10 hours per module each week, including a weekly online meeting (Monday evenings)
  • two full-day classroom meetings in Stuttgart

Training course "Scientific Visualization" @ HLRS

begin
20.May.2019 09:00
end
21.May.2019 15:30
venue
HLRS Stuttgart

This two-day course is targeted at researchers with basic knowledge in numerical simulation who would like to learn how to visualize their simulation results on the desktop, but also in Augmented Reality and Virtual Environments. It will start with a short overview of scientific visualization, followed by a hands-on introduction to 3D desktop visualization with COVISE. On the second day, we will discuss how to build interactive 3D models for Virtual Environments and how to set up an Augmented Reality visualization. This course provides scientific training in Computational Science and fosters scientific exchange among the participants.

PRACE training course "HPC code optimisation workshop" @ LRZ

begin
20.May.2019 09:00
end
22.May.2019 17:00
venue
LRZ Garching

Given the ever-growing complexity of computer architectures, code optimization has become the main route to keeping pace with hardware advancements and effectively making use of current and upcoming High Performance Computing systems.

Have you ever asked yourself:

  • Where does the performance of my application lie?
  • What is the maximum speed-up achievable on the architecture I am using?
  • Is my implementation matching the HPC objectives?

In this workshop, we will answer these questions and provide a unique opportunity to learn techniques, methods and solutions for improving code, enabling new hardware features, and using the roofline model to visualize the potential benefits of an optimization process.
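The roofline model mentioned here bounds attainable performance by the minimum of peak compute throughput and memory bandwidth times arithmetic intensity. A minimal Python sketch, using hypothetical machine numbers for illustration:

```python
def roofline(peak_gflops, bandwidth_gbs, arithmetic_intensity):
    """Attainable performance (GFLOP/s) under the roofline model:
    min(peak compute, memory bandwidth * arithmetic intensity)."""
    return min(peak_gflops, bandwidth_gbs * arithmetic_intensity)

# Hypothetical machine: 3000 GFLOP/s peak, 200 GB/s memory bandwidth.
# A stream-like kernel at 0.1 FLOP/byte is memory-bound:
print(roofline(3000, 200, 0.1))   # 20.0 GFLOP/s, bandwidth-limited
# A dense matrix kernel at 30 FLOP/byte hits the compute ceiling:
print(roofline(3000, 200, 30.0))  # 3000 GFLOP/s, compute-limited
```

Plotting attainable performance against arithmetic intensity immediately shows whether an optimization should target memory traffic or compute throughput.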

We will begin with a description of the latest micro-processor architectures and how the developers can efficiently use modern HPC hardware, in particular the vector units via SIMD programming and AVX-512 optimization and the memory hierarchy.

Attendees are then guided through the optimization process by means of hands-on exercises, learning how to enable vectorization using simple pragmas and more effective techniques, like changing data layout and alignment.

The work is guided by hints from the Intel® compiler reports and by Intel® Advisor.

NEW: this year the workshop will consist of three days. We will dedicate most of the third day to the Intel Math Kernel Library (MKL), in order to show how to gain performance through the use of libraries.

We also provide an N-body code to support the described optimization solutions with practical hands-on exercises.
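An N-body code of this kind computes, in its naive form, all pairwise interactions in O(N²). A simplified Python sketch of one integration step (the workshop's actual code is not reproduced here; names and parameters are illustrative):

```python
import math

def nbody_step(pos, vel, masses, dt, G=1.0):
    """One explicit Euler step of a naive O(N^2) gravitational
    N-body integrator; pos and vel are lists of [x, y, z]."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r = math.sqrt(sum(d * d for d in dx)) + 1e-12  # softening
            for k in range(3):
                acc[i][k] += G * masses[j] * dx[k] / r**3
    new_vel = [[vel[i][k] + dt * acc[i][k] for k in range(3)] for i in range(n)]
    new_pos = [[pos[i][k] + dt * new_vel[i][k] for k in range(3)] for i in range(n)]
    return new_pos, new_vel
```

The inner pairwise loop is exactly the kind of hot kernel where data layout, alignment, and vectorization pay off.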

The course is a PRACE training event.

Training course "Introduction to the programming and usage of the supercomputing resources at Jülich" @ JSC

begin
20.May.2019 13:00
end
21.May.2019 16:30
venue
JSC, Jülich

Through the John von Neumann Institute for Computing, Research Centre Jülich provides high-performance computing resources to scientific user groups from throughout Germany and Europe. The aim of this course is to give new users of the supercomputing resources an introductory overview of the systems and their usage, and to help them in making efficient use of their allocated resources.

Training course "Introduction to Intel FPGA Programming Models" @ LRZ

begin
21.May.2019 09:00
end
21.May.2019 17:00
venue
LRZ Garching

FPGAs can help accelerate many of the core data center workloads that process the growing volume of data that our hyper-connected world creates. They can be reprogrammed in a fraction of a second with a datapath that exactly matches your workload’s key algorithms. This versatility results in a higher performing, more power efficient, and well utilized data center, lowering your total cost of ownership. FPGAs can be connected directly to processors, memories, networks, and numerous other interfaces. Traditionally, FPGAs have required deep domain expertise to program, but Intel is investing in significantly simplifying the development flow and enabling rapid deployment across the data center.

This full-day course offered by Intel in cooperation with LRZ is a high-level overview of FPGAs, intended to establish a common understanding of what they are, why they are so important as accelerators, what their programming models are, and how easily they can be adopted into compute clusters through the use of the Acceleration Stack for Intel® Xeon® CPU with FPGAs. This course contains both lectures and lab exercises to help gain familiarity with these concepts using the tools available for FPGA developers, such as Quartus, Platform Designer, High Level Synthesis, OpenCL, and DSP Builder.

Training course "Introduction to Deep Learning Models" @ JSC

begin
21.May.2019 13:00
end
23.May.2019 16:30
venue
Jülich Supercomputing Centre

This course focuses on a recent machine learning method known as deep learning, which emerged as a promising disruptive approach that allows knowledge discovery from large datasets with unprecedented effectiveness and efficiency. It is particularly relevant in research areas that are not accessible through the modelling and simulation often performed in HPC. Traditional learning, which was introduced in the 1950s and became a data-driven paradigm in the 90s, is usually based on an iterative process of feature engineering, learning, and modelling. Although successful on many tasks, the resulting models are often hard to transfer to other datasets and research areas.

This course provides an introduction to deep learning and its inherent ability to derive optimal and often quite generic problem representations from the data (aka ‘feature learning’). Concrete architectures such as Convolutional Neural Networks (CNNs) will be applied to real application datasets using known deep learning frameworks such as Tensorflow, Keras, or Torch. As the learning process with CNNs is extremely computationally intensive, the course will cover aspects of how parallel computing can be leveraged to speed up the learning process using general-purpose computing on graphics processing units (GPGPUs). Hands-on exercises allow the participants to immediately put the newly acquired skills into practice. After this course, participants will have a general understanding of which problems CNN learning architectures are useful for and how parallel and scalable computing facilitates the learning process when facing big datasets.
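The core operation of a CNN layer is a discrete convolution (strictly, cross-correlation) of an input with a learned kernel. A minimal pure-Python sketch of that operation, independent of any of the frameworks named above:

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a
    convolutional layer (no padding, stride 1)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + u][j + v] * kernel[u][v]
                    for u in range(kh) for v in range(kw))
            row.append(s)
        out.append(row)
    return out

# A tiny edge-detecting kernel applied to an image with a vertical edge:
edge_image = [[0, 0, 1, 1]] * 4
edge_kernel = [[-1, 1]]
print(conv2d(edge_image, edge_kernel))  # each row: [0, 1, 0]
```

In a real CNN these kernels are learned from data, and the sheer number of such multiply-accumulate operations is what makes GPU acceleration essential.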

PRACE training course "OpenMP GPU Directives for Parallel Accelerated Supercomputers - an alternative to CUDA from Cray perspective" @ HLRS

begin
22.May.2019 09:00
end
23.May.2019 16:30
venue
HLRS Stuttgart

This workshop will cover the directive-based programming model based on OpenMP v4, whose multi-vendor support allows users to portably develop applications for parallel accelerated supercomputers. It also includes a comparison to the predecessor interface OpenACC v2. The workshop will also demonstrate how to use the Cray Programming Environment tools to identify application bottlenecks, facilitate the porting, provide accelerated performance feedback, and tune the ported applications. The Cray scientific libraries for accelerators will be presented, and interoperability of the directives approach with these and with CUDA will be demonstrated. Through application case studies and tutorials, users will gain direct experience of using both OpenMP and OpenACC directives in realistic applications. Users may also bring their own codes to discuss with Cray specialists or begin porting. This course provides scientific training in Computational Science and fosters scientific exchange among the participants.

Training course "Introduction to Intel FPGA Programming Models" @ JSC

begin
23.May.2019 09:00
end
23.May.2019 17:00
venue
JSC, Forschungszentrum Jülich

FPGAs can help accelerate many of the core data center workloads that process the growing volume of data that our hyper-connected world creates. They can be reprogrammed in a fraction of a second with a datapath that exactly matches your workload’s key algorithms. This versatility results in a higher performing, more power efficient, and well utilized data center, lowering your total cost of ownership. FPGAs can be connected directly to processors, memories, networks, and numerous other interfaces. Traditionally, FPGAs have required deep domain expertise to program, but Intel is investing in significantly simplifying the development flow and enabling rapid deployment across the data center.

This full-day course offered by Intel is a high-level overview of FPGAs, intended to establish a common understanding of what they are, why they are so important as accelerators, what their programming models are, and how easily they can be adopted into compute clusters through the use of the Acceleration Stack for Intel® Xeon® CPU with FPGAs. This course contains both lectures and lab exercises to help gain familiarity with these concepts using the tools available for FPGA developers, such as Quartus, Platform Designer, High Level Synthesis, OpenCL, and DSP Builder.

PRACE training course "High-performance scientific computing in C++" @ JSC

begin
27.May.2019 09:00
end
29.May.2019 16:30
venue
JSC, Jülich

Modern C++, with its support for procedural, object-oriented, generic and functional programming styles, offers many powerful abstraction mechanisms to express complexity at a high level while remaining very efficient. It is therefore the language of choice for many scientific projects. However, achieving high performance on contemporary computer hardware, with many levels of parallelism, requires understanding C++ code from a more performance-centric viewpoint.

In this course, the participants will learn how to write C++ programs which better utilize typical HPC hardware resources of the present day. The course is geared towards scientists and engineers who are already familiar with C++14 and wish to develop maintainable and fast applications. They will learn to identify and avoid performance-degrading characteristics, such as unnecessary memory operations, branch mispredictions, and unintentionally strong ordering assumptions. Two powerful open source libraries to help write structured parallel applications will also be introduced:

  • Intel (R) Threading Building Blocks
  • NVIDIA Thrust

This course is a PRACE training course.

PRACE training course "Deep Learning and GPU programming workshop" @ LRZ

begin
03.Jun.2019 09:30
end
06.Jun.2019 17:00
venue
LRZ Garching

Learn how to train and deploy a neural network to solve real-world problems, how to generate effective descriptions of content within images and video clips, how to effectively parallelize training of deep neural networks on multiple GPUs, and how to accelerate your applications with CUDA C/C++ and OpenACC.

This new 4-day workshop, offered for the first time at LRZ, combines lectures about the fundamentals of Deep Learning for Multiple Data Types and Multi-GPUs with lectures about Accelerated Computing with CUDA C/C++ and OpenACC.

The lectures are interleaved with many hands-on sessions using Jupyter Notebooks. The exercises will be done on a fully configured GPU-accelerated workstation in the cloud.

The workshop is co-organized by LRZ and the NVIDIA Deep Learning Institute (DLI) for the Partnership for Advanced Computing in Europe (PRACE). Since 2012, LRZ, as part of GCS, has been one of the currently 10 PRACE Training Centres, which serve as European hubs and key drivers of advanced high-quality training for researchers working in the computational sciences.

NVIDIA DLI offers hands-on training for developers, data scientists, and researchers looking to solve challenging problems with deep learning.

All instructors are NVIDIA certified University Ambassadors.

Training course "Introduction to hybrid programming in HPC" @ TU Wien, Vienna

begin
12.Jun.2019 09:00
end
13.Jun.2019 16:30
venue
TU Wien

Most HPC systems are clusters of shared memory nodes. Such SMP nodes range from small multi-core CPUs to large many-core systems. Parallel programming may combine the distributed memory parallelization on the node interconnect (e.g., with MPI) with the shared memory parallelization inside each node (e.g., with OpenMP or MPI-3.0 shared memory). This course analyzes the strengths and weaknesses of several parallel programming models on clusters of SMP nodes. Multi-socket multi-core systems in highly parallel environments are given special consideration. MPI-3.0 has introduced a new shared memory programming interface, which can be combined with inter-node MPI communication. It can be used for direct neighbor accesses similar to OpenMP or for direct halo copies, and enables new hybrid programming models. These models are compared with various hybrid MPI+OpenMP approaches and pure MPI. Numerous case studies and micro-benchmarks demonstrate the performance-related aspects of hybrid programming.
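The idea behind MPI-3.0 shared memory windows, processes on the same node reading and writing a common region directly instead of exchanging messages, has a rough analogue in Python's standard library. A sketch of that analogy (this is not MPI itself; the 42 written below is an arbitrary example value):

```python
from multiprocessing import shared_memory

# "Rank 0" creates a shared window and writes into it.
win = shared_memory.SharedMemory(create=True, size=8)
win.buf[0] = 42

# A second process would attach to the same window by name and read
# the data directly, with no message passing involved:
peer = shared_memory.SharedMemory(name=win.name)
value = peer.buf[0]

peer.close()
win.close()
win.unlink()
print(value)  # 42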

Hands-on sessions are included on both days. Tools for hybrid programming such as thread/process placement support and performance analysis are presented in a "how-to" section. This course provides scientific training in Computational Science and fosters scientific exchange among the participants. This course is organized by VSC (Vienna Scientific Cluster) in cooperation with HLRS and RRZE.

Training course "Advanced C++ with Focus on Software Engineering" @ LRZ

begin
12.Jun.2019 09:00
end
14.Jun.2019 17:00
venue
LRZ Garching

This advanced C++ training is a course on object-oriented (OO) software design with the C++ programming language. The focus of the training is the essential OO and C++ software development principles, concepts, idioms, and best practices which enable programmers to create professional, high-quality code. The course will not address special areas and applications of C++, such as Template Meta Programming (TMP), or the quirks and curiosities of the C++ language. It rather teaches guidelines for developing mature, robust, and maintainable C++ code. The following topics will be covered:

Day 1 schedule:

  • Essential Object-Oriented Design Principles

    • The core of object-oriented programming
    • The SOLID principles
  • Concepts and the STL

    • Overview of the STL
    • Proper use of the STL
  • Class Design

    • Know what your compiler does for you
    • Inside/Outside: What should (not) be inside a class?

Day 2 schedule:

  • Class Design (cont.)

    • Const Correctness
    • Interface design
    • Visibility vs. Accessibility
  • Robust Code

    • Error propagation
    • Exception Safety
    • RAII
    • Handling Legacy Code
  • Proper Use of Dynamic Inheritance

    • Non-public inheritance
    • Public inheritance

Day 3 schedule:

  • Dependency-Breaking Techniques
  • Non-Intrusive Design
  • C++11/14 Update
  • Kernel development

Contents are subject to modifications.

PRACE training course "High-performance computing with Python" @ JSC

begin
17.Jun.2019 09:00
end
19.Jun.2019 16:30
venue
JSC, Jülich

Python is increasingly used in high-performance computing projects. It can be used either as a high-level interface to existing HPC applications and libraries, as an embedded interpreter, or directly.

This course combines lectures and hands-on sessions. We will show how Python can be used on parallel architectures and how to optimize critical parts of the kernel using various tools.

The following topics will be covered:

  • Interactive parallel programming with IPython
  • Profiling and optimization
  • High-performance NumPy
  • Just-in-time compilation with numba
  • Distributed-memory parallel programming with Python and MPI
  • Bindings to other programming languages and HPC libraries
  • Interfaces to GPUs
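As a small illustration of the profiling topic, Python's built-in cProfile module can locate hotspots before any optimization is attempted. A minimal sketch (the function and its size are arbitrary examples):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    """Deliberately naive reduction, a typical profiling target."""
    total = 0.0
    for i in range(n):
        total += i * 0.5
    return total

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(200_000)
profiler.disable()

# Print the three most expensive entries by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(3)
print(stream.getvalue())
```

The report shows where time is actually spent, which is the starting point for the NumPy and numba techniques covered in the course.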

This course is aimed at scientists who wish to explore the productivity gains made possible by Python for HPC.

This course is a PRACE training course.

PRACE training course "VI-HPS Tuning Workshop" @ JSC

begin
24.Jun.2019 09:00
end
28.Jun.2019 16:30
venue
Jülich Supercomputing Centre

This workshop organized by VI-HPS and JSC as a PRACE training event will:

  • give an overview of the VI-HPS programming tools suite
  • explain the functionality of individual tools, and how to use them effectively
  • offer hands-on experience and expert assistance using the tools

The detailed program will be available on the VI-HPS training web site.

Presentations and hands-on sessions are planned on the following topics:

  • Setting up, welcome and introduction
  • Score-P instrumentation and measurement
  • Scalasca automated trace analysis
  • TAU performance system
  • Vampir interactive trace analysis
  • Extra-P automated performance modeling
  • Paraver/Extrae/Dimemas trace analysis and performance prediction
  • MAQAO performance analysis & optimisation
  • MUST runtime error detection for MPI
  • ARCHER runtime error detection for OpenMP
  • JUBE script-based workflow execution environment
  • ... and potentially others to be added

A brief overview of the capabilities of these and associated tools is provided in the VI-HPS Tools Guide.

This course is a PRACE training course.

Training course "Cluster Workshop" @ HLRS

begin
25.Jun.2019 09:30
end
26.Jun.2019 17:00
venue
HLRS Stuttgart

Modern compute clusters have become an important part of the IT infrastructure for research and development in many companies and institutions. The procurement, operation, and efficient usage of such parallel systems introduce new and complex requirements.

To address these issues, the High-Performance Computing Center Stuttgart (HLRS) will hold a vendor-independent workshop that provides an introduction to cluster systems and the particular challenges they raise. Topics covered will include the design of compute clusters, as well as details on hardware components, operating systems, file systems, and modes of operation, as well as some examples of software solutions. Furthermore, typical problems that cluster operators encounter will be discussed, along with strategies for solving them.

As Germany's first national supercomputing center, HLRS has operated compute clusters for many years, enabling simulation for a wide range of scientific and industrial applications. We maintain constant dialogue with users and hardware providers and have accumulated a large knowledge base in cluster computing.

PRACE Training course "Node-level performance engineering" @ HLRS

begin
27.Jun.2019 09:00
end
28.Jun.2019 17:00
venue
HLRS Stuttgart

This course teaches performance engineering approaches on the compute node level. "Performance engineering" as we define it is more than employing tools to identify hotspots and bottlenecks. It is about developing a thorough understanding of the interactions between software and hardware. This process must start at the core, socket, and node level, where the code gets executed that does the actual computational work. Once the architectural requirements of a code are understood and correlated with performance measurements, the potential benefit of optimizations can often be predicted. We introduce a "holistic" node-level performance engineering strategy, apply it to different algorithms from computational science, and also show how an awareness of the performance features of an application may lead to notable reductions in power consumption. This course provides scientific training in Computational Science and fosters scientific exchange among the participants.

This course is a PRACE training course.

PRACE training course "Efficient Parallel Programming with GASPI" @ HLRS

begin
01.Jul.2019 09:00
end
02.Jul.2019 15:30
venue
HLRS Stuttgart

In this tutorial we present an asynchronous data flow programming model for Partitioned Global Address Spaces (PGAS) as an alternative to the programming model of MPI.

GASPI, which stands for Global Address Space Programming Interface, is a partitioned global address space (PGAS) API. The GASPI API is designed as a C/C++/Fortran library and focused on three key objectives: scalability, flexibility and fault tolerance. In order to achieve its much improved scaling behaviour, GASPI aims at asynchronous dataflow with remote completion, rather than bulk-synchronous message exchanges. GASPI follows a single/multiple program multiple data (SPMD/MPMD) approach and offers a small yet powerful API (see also http://www.gaspi.de and http://www.gpi-site.com).
GASPI is successfully used in academic and industrial simulation applications.
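The asynchronous one-sided style described above can be conveyed, very loosely, with standard Python threading: data is deposited directly into the target's memory and remote completion is signalled by a notification, rather than by a matching receive call. This is only a conceptual analogy, not the GASPI API:

```python
import threading

# A toy "segment" plus a notification flag: the writer deposits data
# into the target's memory and signals remote completion; the reader
# waits on the notification, not on a matching receive.
segment = bytearray(16)
notified = threading.Event()

def remote_write(data, offset):
    segment[offset:offset + len(data)] = data
    notified.set()  # notify remote completion

writer = threading.Thread(target=remote_write, args=(b"payload", 0))
writer.start()
notified.wait()            # reader resumes as soon as the data has landed
writer.join()
print(bytes(segment[:7]))  # b'payload'
```

In real GASPI the write and the notification travel over the interconnect, letting communication overlap with computation.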

Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of GASPI.
This course provides scientific training in Computational Science and fosters scientific exchange among the participants.

PRACE training course "Introduction to Parallel Programming with HPX" @ HLRS

begin
04.Jul.2019 09:00
end
05.Jul.2019 15:30
venue
HLRS Stuttgart

The aim of this course is to introduce participants to the HPX library (http://stellar-group.org/libraries/hpx/ and https://github.com/STEllAR-GROUP/hpx and http://stellar-group.org/) and demonstrate how it can be used to write task-based programs. The HPX library implements a lightweight threading model that allows concurrent, asynchronous, parallel, and distributed programming constructs to coexist within the same application with a consistent API based on C++ standards, using futures to synchronize between tasks.
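Python's concurrent.futures offers a loosely analogous future-based task model, which may help convey the idea (HPX itself is a C++ library; this is only an analogy, and the squaring task is an arbitrary example):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

# Futures decouple launching a task from consuming its result; dependent
# work can be chained on futures instead of blocking the whole program.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(square, i) for i in range(8)]
    results = [f.result() for f in futures]

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

HPX generalizes this pattern: its futures compose with continuations, parallel algorithms, and distributed execution under one C++ standards-based API.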

The course is aimed at participants with a good understanding of C++. The material covered will include an introduction to the HPX programming model; asynchronous programming concepts, execution policies and executors; parallel algorithms using tasks (including the parallel STL); writing distributed applications with HPX; profiling and debugging; and a final section introducing heterogeneous programming using targets for GPU devices.

Objective: The attendee will gain an understanding of the HPX library and task based programming in general.

This course is a PRACE training course.

Training course "Advanced C++ with Focus on Software Engineering" @ HLRS

begin
09.Jul.2019 08:30
end
12.Jul.2019 16:30
venue
HLRS Stuttgart

This advanced C++ training is a course on object-oriented (OO) software design with the C++ programming language. The focus of the training is the essential OO and C++ software development principles, concepts, idioms, and best practices which enable programmers to create professional, high-quality code. Additionally, the course gives insight into kernel development with C++. The course will not address special areas and applications of C++, such as Template Meta Programming (TMP), or the quirks and curiosities of the C++ language. It rather teaches guidelines for developing mature, robust, maintainable, and efficient C++ code.

After this course, participants will:

  • have a detailed understanding of the essential OO design principles
  • have gained knowledge about fundamental C++ programming concepts and idioms
  • be able to properly design classes and class interfaces
  • know about the importance of exception safe programming
  • have gained insight into kernel development with C++
  • avoid the usual pitfalls in the context of inheritance
  • comprehend the advantages of non-intrusive design
  • understand the virtue of clean code

Training course "Deep Learning and GPU programming workshop" @ HLRS

begin
15.Jul.2019 09:00
end
17.Jul.2019 17:00
venue
HLRS Stuttgart

NVIDIA Deep Learning Institute (DLI) offers hands-on training for developers, data scientists, and researchers looking to solve challenging problems with deep learning.

Learn how to train and deploy a neural network to solve real-world problems, how to generate effective descriptions of content within images and video clips and how to accelerate your applications with OpenACC.

The workshop combines lectures about fundamentals of Deep Learning for Computer Vision and Multiple Data Types with a lecture about Accelerated Computing with OpenACC.

The lectures are interleaved with many hands-on sessions using Jupyter Notebooks. The exercises will be done on a fully configured GPU-accelerated workstation in the cloud.

This workshop is organized in cooperation with LRZ (Germany) and Nvidia. All instructors are NVIDIA certified University Ambassadors.

Training course "Introduction to parallel programming with MPI and OpenMP" @ JSC

begin
12.Aug.2019 09:00
end
16.Aug.2019 16:30
venue
JSC, Jülich

An introduction to the parallel programming of supercomputers is given. The focus is on the usage of the Message Passing Interface (MPI), the most widely used programming model for systems with distributed memory. Furthermore, OpenMP will be presented, which is often used on shared-memory architectures.

The first four days of the course consist of lectures and short exercises. An optional fifth day is devoted to demonstrating the use of MPI and OpenMP in a larger context. To this end, starting from a simple but representative serial algorithm, a parallel version will be designed and implemented using the techniques presented in the course.

This course is mainly intended for guest students at JSC. Up to 15 additional participants can take part in the course after consulting Benedikt Steinbusch at JSC.

Training course "Parallelization with MPI and OpenMP" @ ETH Zürich

begin
19.Aug.2019 08:30
end
22.Aug.2019 17:15
venue
ETH Zürich, Sonneggstrasse 5, Zürich, Switzerland

The aim of this course is to give people with some programming experience an introduction to the parallel programming models MPI and OpenMP. It starts at beginner level but also includes advanced features of the current standards. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the constructs of the Message Passing Interface (MPI) and the shared memory directives of OpenMP.

The first two days are an introduction to MPI and OpenMP, including a deep introduction to nonblocking MPI communication, and also touch on newer OpenMP 4.0–5.0 features, such as the vectorization directives, thread affinity, and OpenMP places.

The last two days are dedicated to advanced methods in MPI, e.g., the group and communicator concept, process topologies, derived data types, and one-sided communication. This course also includes the latest features of MPI-3.0/3.1, e.g., the new MPI-3.0 shared memory programming model within MPI, the new Fortran language binding, nonblocking collectives, and neighborhood communication. Hybrid MPI+OpenMP programming is also addressed, as well as the parallelization of implicit and explicit solvers, which includes a short tutorial about PETSc.

Content level: 40% for beginners, 30% intermediate, 30% advanced.

Training course "CFD with OpenFOAM®" @ ZIMT, University of Siegen

begin
02.Sep.2019 08:30
end
06.Sep.2019 15:30
venue
ZIMT, University of Siegen, Hölderlinstr. 3, Building D, Room: H-D 2202, D-57076 Siegen, Germany

OpenFOAM® is a widely-used open-source code and a powerful framework for solving a variety of problems, mainly in the field of CFD. The five-day workshop gives an introduction to OpenFOAM® applied to CFD phenomena and is intended for beginners as well as for people with CFD background knowledge. Users will learn about case setup and meshing tools like snappyHexMesh and cfMesh. Available OpenFOAM® utilities and additional libraries like swak4Foam, which can be used for pre- and postprocessing tasks, are further aspects of this course. Additionally, basic solvers and major aspects of the code structure are highlighted. Lectures and hands-on sessions with typical CFD examples will guide participants through this course, including first steps in their own coding.

This course provides scientific training in Computational Science and fosters scientific exchange among the participants.

The course is organized by ZIMT, University of Siegen in cooperation with HLRS (University of Stuttgart).

Training course "Introduction to ANSYS Fluid Dynamics (CFX, Fluent) on LRZ HPC Systems" @ LRZ

begin
02.Sep.2019 09:00
end
06.Sep.2019 17:00
venue
LRZ Garching

This 5-day course is targeted at researchers with good knowledge of the fundamentals of fluid mechanics and potentially some first experience in Computational Fluid Dynamics (CFD). The course focuses on an introduction to the ANSYS Fluid Dynamics software packages, i.e. ANSYS CFX and ANSYS Fluent. Further, participants will be familiarized with the main steps of the typical CFD workflow, in particular CFD preprocessing/CFD setup creation, serial and parallel solver execution, and CFD postprocessing in both CFD solver systems CFX and Fluent. Correctness of boundary conditions and CFD setup specifications, solver convergence control, solver monitoring, customization capabilities of the solvers and the postprocessing, as well as recommended CFD best practices are covered.

The course further focuses on the usage of the ANSYS CFD software in a typical Linux cluster environment for massively parallel computations. This includes a basic Linux primer, an introduction to LRZ HPC systems and the network environment, an introduction to the use of schedulers like Slurm and LoadLeveler, CFD remote visualization, and aspects of successful CFD simulation strategies in such an HPC environment. Finally, some aspects of workflow automation using Python as a scripting language are covered as well.

What participants will not learn in this course:

  • Advanced aspects of Linux and computer network infrastructure
  • Geometry creation (CAD, SpaceClaim, DM) and meshing
  • Advanced topics of CFD simulation, like e.g. acoustics, Eulerian and Lagrangian multiphase flows, combustion, radiation, FSI etc.
  • Advanced topics of CFD solver customization with User Fortran or User Defined Functions (UDFs) written in C

Training course "Introduction to Computational Fluid Dynamics" @ HLRS

begin
09.Sep.2019 09:00
end
13.Sep.2019 15:30
venue
HLRS Stuttgart

The course deals with current numerical methods for Computational Fluid Dynamics in the context of high-performance computing. An emphasis is placed on explicit methods for compressible flows, but classical numerical methods for the incompressible Navier-Stokes equations are also covered, and the course provides a brief introduction to turbulence modelling. Additional topics are high-order numerical methods for the solution of systems of partial differential equations. The last day is dedicated to parallelization.

Hands-on sessions will reinforce the contents of the lectures. Most of these sessions use the application framework APES: they cover grid generation with Seeder, visualization with ParaView, and the usage of the parallel CFD solver Ateles on the local HPC system.

The course is organized by HLRS, IAG (University of Stuttgart) and STS, ZIMT (University of Siegen).

Training course "Advanced Fortran Topics" @ LRZ

begin
09.Sep.2019 09:00
end
13.Sep.2019 18:00
venue
LRZ Garching

This course, partly a PRACE training event (to be confirmed), is targeted at scientists who wish to extend their knowledge of Fortran to cover advanced features of the language.

Topics covered include:

Days 1-3:

  • Best Practices

    • global objects and interfaces
    • abstract interfaces and the IMPORT statement
    • object based programming
  • Object-Oriented Programming

    • type extension, polymorphism and inheritance
    • binding of procedures to types and objects
    • generic type-bound procedures
    • abstract types and deferred bindings
  • IEEE features and floating point exceptions
  • Interoperability with C

    • mixed language programming patterns
  • Fortran 2003 I/O extensions

Days 4-5 (PRACE training course, support by PRACE still has to be approved):

  • OO Design Patterns: application of object-oriented programming

    • creation and destruction of objects
    • polymorphic objects and function arguments
    • interacting objects
    • dependency inversion: submodules and plugins
  • Coarrays

    • PGAS concepts and coarray basics
    • dynamic entities
    • advanced synchronization
    • parallel programming patterns
    • recent enhancements: collectives, events, teams, atomic subroutines
    • performance aspects of coarray programming

To consolidate the lecture material, each day's approximately 4 hours of lecture are complemented by 3 hours of hands-on sessions. The last 2 days of the course are a PRACE training event (tbc).

Training course "Iterative linear solvers and parallelization" @ LRZ

begin
16.Sep.2019 08:30
end
20.Sep.2019 15:30
venue
LRZ Garching

The focus of this compact course is on iterative and parallel solvers, the parallel programming models MPI and OpenMP, and the parallel middleware PETSc. Modern Krylov subspace methods (CG, GMRES, BiCGSTAB, ...) as well as highly efficient preconditioning techniques are presented in the context of real-life applications.

Hands-on sessions (in C and Fortran) will allow users to immediately test and understand

  • the basic constructs of iterative solvers
  • the Message Passing Interface (MPI)
  • the shared memory directives of OpenMP.
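To make the lecture topics concrete, the conjugate gradient (CG) method at the heart of the Krylov subspace family can be sketched in a few lines. The following plain-Python implementation is illustrative only (the hands-on sessions use C and Fortran, and names such as `cg` and `matvec` are ours); it solves a small symmetric positive-definite system without preconditioning:

```python
# Minimal unpreconditioned conjugate gradient (CG) sketch in plain Python.
# Illustrative only; production codes use C/Fortran and libraries like PETSc.

def matvec(A, x):
    """Dense matrix-vector product A @ x."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

def dot(u, v):
    return sum(u_i * v_i for u_i, v_i in zip(u, v))

def cg(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for a symmetric positive-definite A by CG."""
    x = [0.0] * len(b)
    r = b[:]              # residual r = b - A x (x = 0 initially)
    p = r[:]              # first search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)
        x = [x_i + alpha * p_i for x_i, p_i in zip(x, p)]
        r = [r_i - alpha * ap_i for r_i, ap_i in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new ** 0.5 < tol:  # converged: residual norm below tolerance
            break
        # new direction is the residual, A-orthogonalized against p
        p = [r_i + (rs_new / rs_old) * p_i for r_i, p_i in zip(r, p)]
        rs_old = rs_new
    return x

# Small SPD test system: solution is x = (1/11, 7/11)
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg(A, b)
```

In exact arithmetic CG converges on an n-by-n system in at most n iterations, which is why the 2x2 example above finishes in two steps.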

This course is organized by the University of Kassel, the High Performance Computing Center Stuttgart (HLRS), and IAG.

Training course "Introduction to Python" @ JSC

begin
07.Oct.2019 08:30
end
09.Oct.2019 16:30
venue
JSC, Jülich

This course gives an introduction to the programming language Python. Topics are: data types, control structures, object-oriented programming, and module usage. Additionally, Python's standard library and GUI programming with wxWidgets will be explained.

This course is given in German.
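A small sketch of the kind of material such an introduction covers: data types, control structures, a class, and a standard-library module. The example and its names are ours, not taken from the course:

```python
# Illustrative sketch of introductory Python topics: a class
# (object-oriented programming), control structures, and standard-library
# module usage. Names are hypothetical, not course material.
from collections import Counter  # standard-library module

class WordTally:
    """Counts word occurrences in texts fed to it."""
    def __init__(self):
        self.counts = Counter()

    def feed(self, text):
        # control structure (for loop) over a list of strings
        for word in text.lower().split():
            self.counts[word] += 1

    def most_common(self, n=1):
        return self.counts.most_common(n)

tally = WordTally()
tally.feed("to be or not to be")
print(tally.most_common(2))  # [('to', 2), ('be', 2)]
```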

Training course "Porting code from Matlab to Python" @ JSC

begin
07.Oct.2019 09:00
end
08.Oct.2019 16:30
venue
JSC, Jülich

Python is becoming a popular language for scientific applications and is increasingly used for high-performance computing. In this course we want to introduce Matlab programmers to Python. Matlab and Python have comparable language philosophies, but Python can offer better performance through its optimization and parallelization interfaces. Python also increases the portability and flexibility of solutions (interaction with other open-source and proprietary software packages) and can be run on supercomputing resources without high licensing costs.

The training course will be divided into three stages: First, attendees will learn how to do a direct translation of language concepts from Matlab to Python. Then, optimization of scripts using more Pythonic data structures and functions will be shown. Finally, the code will be taken to the supercomputers, where basic parallel programming (MPI) will be used to exploit parallelism in the computation.

The course will focus on numerical and statistical analysis as well as on image processing applications.

This course involves theoretical and hands-on sessions, which will be guided by experts in Python, Matlab and High Performance Computing. Attendees are highly encouraged to bring their own Matlab scripts.
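As a hypothetical illustration of the first stage, the direct translation, consider a small Matlab snippet and a line-by-line Python counterpart using only the standard library (in practice the course will also show more efficient, more Pythonic forms):

```python
# Hypothetical Matlab-to-Python direct translation, standard library only.
# Matlab original:
#   x = 0:0.5:2;
#   y = x.^2;
#   m = mean(y);
from statistics import mean

# Matlab range 0:0.5:2 -> [0.0, 0.5, 1.0, 1.5, 2.0]
x = [i * 0.5 for i in range(5)]

# elementwise square, Matlab x.^2
y = [xi ** 2 for xi in x]

# Matlab mean(y)
m = mean(y)
print(m)  # 1.5
```

Note that Matlab's implicit elementwise operations become explicit loops or comprehensions in plain Python; the "more Pythonic" stage of the course replaces such loops with array-oriented data structures.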

Training course "Introduction to Semantic Patching of C programs with Coccinelle" @ LRZ

begin
08.Oct.2019 10:00
end
08.Oct.2019 17:00
venue
LRZ Garching

"Coccinelle is a program matching and transformation engine which provides the language SmPL (Semantic Patch Language) for specifying desired matches and transformations in C code. Coccinelle was initially targeted towards performing collateral evolutions in Linux. Such evolutions comprise the changes that are needed in client code in response to evolutions in library APIs, and may include modifications such as renaming a function, adding a function argument whose value is somehow context-dependent, and reorganizing a data structure. Beyond collateral evolutions, Coccinelle is successfully used (by us and others) for finding and fixing bugs in systems code." (http://coccinelle.lip6.fr/)
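A collateral evolution such as the function renaming mentioned in the quotation is expressed as a semantic patch. A minimal, hypothetical SmPL rule that renames calls of `old_alloc` to `new_alloc` while keeping the argument intact might look like this:

```
@@
expression E;
@@
- old_alloc(E)
+ new_alloc(E)
```

Applied with the `spatch` tool to a source tree, such a rule matches every call site regardless of how the argument expression is written; the function names and the file layout here are placeholders, not part of the course material.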

This training introduces the Coccinelle semantic matching engine for C programs.

The target audience is intermediate to advanced C programmers interested in restructuring large-scale programs.

Applications in HPC will be mentioned, and examples will be given.

Tentative program:

  • Invoking Coccinelle.
  • Semantic matching.
  • Semantic patching.
  • Elements of the SmPL language.
  • Simple SmPL rules.
  • Rules with inheritance.
  • Reusing rules.
  • Scripting.
  • Review of common transformations and use cases in HPC.
  • Hands-on exercises.

Contents are subject to modifications.

Training course "Parallel Programming Workshop (MPI, OpenMP and advanced topics)" @ HLRS

begin
14.Oct.2019 08:30
end
18.Oct.2019 16:30
venue
HLRS Stuttgart

Distributed memory parallelization with the Message Passing Interface MPI (Mon, for beginners):
On clusters and distributed memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominating programming model. The course gives an introduction to MPI-1. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of the Message Passing Interface (MPI).

Shared memory parallelization with OpenMP (Tue, for beginners):
The focus is on shared memory parallelization with OpenMP, the key programming model on hyper-threading, dual-core, multi-core, shared-memory, and ccNUMA platforms. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the directives and other interfaces of OpenMP. Race-condition debugging tools are also presented.

Intermediate and advanced topics in parallel programming (Wed-Fri):
Topics are advanced usage of communicators and virtual topologies, one-sided communication, derived datatypes, MPI-2 parallel file I/O, hybrid mixed-model MPI+OpenMP parallelization, parallelization of explicit and implicit solvers and of particle-based applications, parallel numerics and libraries, and parallelization with PETSc. MPI-3.0 introduced a new shared memory programming interface, which can be combined with MPI message passing and remote memory access on the cluster interconnect. It can be used for direct neighbor accesses similar to OpenMP or for direct halo copies, and it enables new hybrid programming models. In the session on hybrid mixed-model MPI+OpenMP parallelization, these models are compared with various hybrid MPI+OpenMP approaches and with pure MPI. Further aspects are domain decomposition, load balancing, and debugging. Hands-on sessions are included on all days. This course provides scientific training in Computational Science and also fosters scientific exchange among the participants.

Training course "Scientific Visualization" @ HLRS

begin
24.Oct.2019 09:00
end
25.Oct.2019 15:30
venue
HLRS Stuttgart

This two-day course is targeted at researchers with basic knowledge in numerical simulation who would like to learn how to visualize their simulation results on the desktop, but also in Augmented Reality and Virtual Environments. It will start with a short overview of scientific visualization, followed by a hands-on introduction to 3D desktop visualization with COVISE. On the second day, we will discuss how to build interactive 3D models for Virtual Environments and how to set up an Augmented Reality visualization. This course provides scientific training in Computational Science and also fosters scientific exchange among the participants.

Training course "Introduction to GPU programming using OpenACC" @ JSC

begin
28.Oct.2019
end
29.Oct.2019
venue
JSC, Jülich

GPU-accelerated computing drives current scientific research. Writing fast numeric algorithms for GPUs offers high application performance by offloading compute-intensive portions of the code to the GPU. The course will cover basic aspects of GPU architectures and programming. Focus is on the usage of the directive-based OpenACC programming model which allows for portable application development. Examples of increasing complexity will be used to demonstrate optimization and tuning of scientific applications.

Topics covered will include:

  • Introduction to GPU/Parallel computing
  • Programming model OpenACC
  • Interoperability of OpenACC with GPU libraries like cuBLAS and cuFFT
  • Multi-GPU Programming with MPI and OpenACC
  • Tools for debugging and profiling
  • Performance optimization

Training course "From zero to hero, Part II: Understanding and fixing intra-node performance bottlenecks" @ JSC

begin
05.Nov.2019 09:00
end
06.Nov.2019 16:30
venue
JSC, Jülich

Generic algorithms like FFTs or basic linear algebra can be accelerated by using 3rd-party libraries and tools especially tuned and optimized for a multitude of different hardware configurations. But what happens if your problem does not fall into this category and 3rd-party libraries are not available?

In Part I of this course we provided insights into today's CPU microarchitecture. As example applications, we used a plain vector reduction and a simple Coulomb solver. We started from basic implementations and advanced to optimized versions that use hardware features such as vectorization, unrolling and cache tiling to increase on-core performance. Part II sheds some light on achieving portable intra-node performance.

Continuing with the example applications from Part I, we use threading with C++11 std::thread to exploit multi-core parallelism and SMT (Simultaneous Multi-Threading). In this context, we discuss the fork-join model, tasking approaches and typical synchronization mechanisms.

To understand the parallel performance of memory-bound algorithms we take a closer look at the memory hierarchy and the parallel memory bandwidth. We consider data locality in the context of shared caches and NUMA (Non-Uniform Memory Access).

In this course we present several abstraction concepts to hide the hardware-specific optimizations. This improves readability and maintainability. We also discuss the overhead costs of the introduced abstractions and show compile-time SIMD configurations as well as corresponding performance results on different platforms.

Covered topics:

  • Memory Hierarchy: From register to RAM
  • Data structures: When to use SoA, AoS and AoSoA
  • Vectorization: SIMD on JURECA, JURECA Booster and JUWELS
  • Unrolling: Loop-unrolling for out-of-order execution and instruction-level parallelism
  • Separation of concerns: Decoupling hardware details from suitable algorithms

This course is for you if you have asked yourself one of the following questions:

  • Why is my parallel performance so bad?
  • Why should I not be afraid of threads?
  • When should I use SMT (hyperthreading)?
  • What is NUMA and why does it hurt me?
  • Is my data structure optimal for this architecture?
  • Do I need to redo everything for the next machine?
  • Why is it that complicated, I thought science was the hard part?

The course consists of lectures and hands-on sessions. After each topic is presented, the participants can apply the knowledge right away in the hands-on training. The C++ code examples are generic and advance step by step.

Training course "Software Development in Science" @ JSC

begin
19.Nov.2019 09:00
end
20.Nov.2019 16:30
venue
JSC, Jülich

Scientific research increasingly relies on software. Software engineering and development play a key role in the production of software. Thus, formal education and training in software development methodologies become more important, particularly in larger software projects. Software development in teams needs formalized processes to get a reliable outcome. The aim of this course is to give an introduction to established software development methodologies and best practices. The lessons learned in this workshop can be applied to large projects but will also help individual researchers to improve the quality of their software.

Topics covered are:

  • Overview of software development methodologies
  • Scrum and agile practices
  • Version control: hands-on training, working with Git and GitHub
  • Open source and community building
  • Licenses and copyright
  • Software testing and quality
  • Documentation

Training course "Advanced C++ with Focus on Software Engineering" @ LRZ

begin
20.Nov.2019 09:00
end
22.Nov.2019 17:00
venue
LRZ Garching

This advanced C++ training is a course on object-oriented (OO) software design with the C++ programming language. The focus of the training is on the essential OO and C++ software development principles, concepts, idioms, and best practices that enable programmers to create professional, high-quality code. The course will not address special areas and applications of C++, such as Template Meta Programming (TMP), or the quirks and curiosities of the C++ language. It rather teaches guidelines for developing mature, robust, and maintainable C++ code. The following topics will be covered:

Day 1 schedule:

  • Essential Object-Oriented Design Principles

    • The core of object-oriented programming
    • The SOLID principles
  • Concepts and the STL

    • Overview of the STL
    • Proper use of the STL
  • Class Design

    • Know what your compiler does for you
    • Inside/Outside: What should (not) be inside a class?

Day 2 schedule:

  • Class Design (cont.)

    • Const Correctness
    • Interface design
    • Visibility vs. Accessibility
  • Robust Code

    • Error propagation
    • Exception Safety
    • RAII
    • Handling Legacy Code
  • Proper Use of Dynamic Inheritance

    • Non-public inheritance
    • Public inheritance

Day 3 schedule:

  • Dependency-Breaking Techniques
  • Non-Intrusive Design
  • C++11/14 Update
  • Kernel development

Contents are subject to modifications.

Training course "C++ Language for Beginners" @ LRZ

begin
25.Nov.2019 09:00
end
29.Nov.2019 17:00
venue
LRZ Garching

This four-day course gives an introduction to the C++ programming language. The following topics will be covered:

Day 1 schedule:

  • Reminder of C concepts
  • C++ Basics
  • C++ Pointers
  • Constructors and destructors
  • Classes methods and objects

Day 2 schedule:

  • Inheritance
  • Class Design
  • Namespaces

Day 3 schedule:

  • I/O operations
  • Strings
  • File management
  • Error handling and exceptions

Day 4 schedule:

  • C++ containers and iterators
  • Operators overloading
  • Modularity
  • Good coding practices

Contents are subject to modifications.

Training course "Advanced C++ with Focus on Software Engineering" @ HLRS

begin
26.Nov.2019 08:30
end
29.Nov.2019 16:30
venue
HLRS Stuttgart

This advanced C++ training is a course on object-oriented (OO) software design with the C++ programming language. The focus of the training is on the essential OO and C++ software development principles, concepts, idioms, and best practices that enable programmers to create professional, high-quality code. Additionally, the course gives insight into kernel development with C++. The course will not address special areas and applications of C++, such as Template Meta Programming (TMP), or the quirks and curiosities of the C++ language. It rather teaches guidelines for developing mature, robust, maintainable, and efficient C++ code.

After this course, participants will:

  • have a detailed understanding of the essential OO design principles
  • have gained knowledge about fundamental C++ programming concepts and idioms
  • be able to properly design classes and class interfaces
  • know about the importance of exception safe programming
  • have gained insight into kernel development with C++
  • avoid the usual pitfalls in the context of inheritance
  • comprehend the advantages of non-intrusive design
  • understand the virtue of clean code

Training course "Introduction to the programming and usage of the supercomputing resources at Jülich" @ JSC

begin
28.Nov.2019 13:00
end
29.Nov.2019 16:30
venue
JSC, Jülich

Through the John von Neumann Institute for Computing, Research Centre Jülich provides high-performance computing resources to scientific user groups from throughout Germany and Europe. The aim of this course is to give new users of the supercomputing resources an introductory overview of the systems and their usage, and to help them make efficient use of their allocated resources.

Training course "Advanced parallel programming with MPI and OpenMP" @ JSC

begin
02.Dec.2019 09:00
end
04.Dec.2019 16:30
venue
JSC, Jülich

The focus is on advanced programming with MPI and OpenMP. The course addresses participants who already have some experience with C/C++ or Fortran as well as with MPI and OpenMP, the most popular programming models in high-performance computing (HPC).

The course will teach the newest methods in MPI-3.0/3.1 and OpenMP-4.5, which were developed for the efficient use of current HPC hardware. MPI topics are the group and communicator concept, process topologies, derived data types, the new MPI-3.0 Fortran language binding, one-sided communication, and the new MPI-3.0 shared memory programming model within MPI. OpenMP topics are the OpenMP-4.0 extensions, such as the vectorization directives, thread affinity and OpenMP places. (GPU programming with OpenMP-4.0 directives is not part of this course.) The course also contains performance and best-practice considerations, e.g., with hybrid MPI+OpenMP parallelisation. The course ends with a section presenting tools for parallel programming.

Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the taught constructs of the Message Passing Interface (MPI) and the shared memory directives of OpenMP. This course provides scientific training in Computational Science and also fosters scientific exchange among the participants. It is organized by JSC in collaboration with HLRS. (Content level: 20% beginner, 50% intermediate, 30% advanced.)

PRACE training course "Fortran for Scientific Computing" @ HLRS

begin
09.Dec.2019 08:30
end
13.Dec.2019 15:30
venue
HLRS Stuttgart

This course is dedicated to scientists and students who want to learn (sequential) programming of scientific applications with Fortran. The course teaches the newest Fortran standards. Hands-on sessions will allow users to immediately test and understand the language constructs. This workshop provides scientific training in Computational Science and also fosters scientific exchange among the participants.