Training course "Introduction to the programming and usage of the supercomputing resources at Jülich" @ JSC

begin
23.Nov.2017 13:00
end
24.Nov.2017 16:30
venue
JSC, Jülich

Through the John von Neumann Institute for Computing, Research Centre Jülich provides two major high-performance computing resources to scientific user groups from throughout Germany and Europe. The aim of this course is to give new users of the supercomputing resources an introductory overview of the systems and their usage, and to help them in making efficient use of their allocated resources.

Training course "Advanced parallel programming with MPI and OpenMP" @ JSC

begin
27.Nov.2017 09:00
end
29.Nov.2017 16:30
venue
JSC, Jülich

This course is given in English.

The focus is on advanced programming with MPI and OpenMP. The course addresses participants who already have some experience with C/C++ or Fortran as well as with MPI and OpenMP, the most popular programming models in high performance computing (HPC).

The course will teach the newest methods in MPI-3.0/3.1 and OpenMP-4.5, which were developed for the efficient use of current HPC hardware. MPI topics include the group and communicator concept, process topologies, derived data types, the new MPI-3.0 Fortran language binding, one-sided communication, and the new MPI-3.0 shared memory programming model within MPI. OpenMP topics include the OpenMP-4.0 extensions, such as the vectorization directives, thread affinity, and OpenMP places. (GPU programming with OpenMP-4.0 directives is not part of this course.) The course also covers performance and best-practice considerations, e.g. hybrid MPI+OpenMP parallelisation, and ends with a section presenting tools for parallel programming.

Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the taught constructs of the Message Passing Interface (MPI) and the shared memory directives of OpenMP. This course provides scientific training in Computational Science, and in addition, the scientific exchange of the participants among themselves. It is organized by JSC in collaboration with HLRS. (Content Level: 20% for beginners, 50% intermediate, 30% advanced)

Training course "Fortran for Scientific Computing" @ HLRS

begin
27.Nov.2017 09:00
end
01.Dec.2017 15:30
venue
HLRS Stuttgart

This course is dedicated to scientists and students who want to learn (sequential) programming of scientific applications with Fortran. The course teaches the newest Fortran standards. Hands-on sessions will allow users to immediately test and understand the language constructs. This workshop provides scientific training in Computational Science, and in addition, the scientific exchange of the participants among themselves.

PATC training course "Node-level performance engineering" @ LRZ

begin
30.Nov.2017 09:00
end
01.Dec.2017 17:00
venue
LRZ Garching

This course teaches performance engineering approaches on the compute node level. "Performance engineering" as we define it is more than employing tools to identify hotspots and bottlenecks. It is about developing a thorough understanding of the interactions between software and hardware. This process must start at the core, socket, and node level, where the code gets executed that does the actual computational work. Once the architectural requirements of a code are understood and correlated with performance measurements, the potential benefit of optimizations can often be predicted. We introduce a "holistic" node-level performance engineering strategy, apply it to different algorithms from computational science, and also show how an awareness of the performance features of an application may lead to notable reductions in power consumption.
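The idea of correlating a code's architectural requirements with measurements can be illustrated with a STREAM-style triad kernel, a standard teaching example (not taken from the course material) whose low arithmetic intensity makes it memory-bound on typical compute nodes:

```cpp
#include <cstddef>
#include <vector>

// STREAM-style triad kernel: a[i] = b[i] + s * c[i].
// Two flops per iteration against three double-precision memory streams
// (roughly 24 bytes moved) give a low arithmetic intensity, so on typical
// nodes this kernel is limited by memory bandwidth, not by peak flops.
void triad(std::vector<double>& a, const std::vector<double>& b,
           const std::vector<double>& c, double s) {
    for (std::size_t i = 0; i < a.size(); ++i)
        a[i] = b[i] + s * c[i];
}
```

Once such a requirement is understood, the potential benefit of optimizations (e.g. blocking or data-layout changes) can often be predicted before they are implemented, as the course description argues.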

The course is a PRACE Advanced Training Center event.

PATC training course "Parallel and Scalable Machine Learning" @ JSC

begin
15.Jan.2018 09:00
end
17.Jan.2018 16:30
venue
JSC, Jülich

The course offers the basics of analyzing data with machine learning and data mining algorithms in order to understand the foundations of learning from large quantities of data. It is especially oriented towards beginners who have no previous knowledge of machine learning techniques. The course covers general methods for data analysis in order to understand clustering, classification, and regression. This includes a thorough discussion of the test, training, and validation datasets required to learn from data with high accuracy. Simple application examples will reinforce the theoretical course elements and illustrate problems such as overfitting, followed by mechanisms such as validation and regularization that prevent them.

The tutorial will start from a very simple application example in order to teach foundations such as the role of features in data, linear separability, and decision boundaries for machine learning models. In particular, the course will point to the key challenges in analyzing large quantities of data (aka ‘big data’) in order to motivate the parallel and scalable machine learning algorithms used in the course. It targets specific challenges in analyzing large datasets that cannot be handled with traditional serial methods provided by tools such as R, SAS, or Matlab, including challenges within the machine learning algorithms themselves, the distribution of data, and the process of performing validation. The course will introduce selected solutions to these challenges using parallel and scalable computing techniques based on the Message Passing Interface (MPI) and OpenMP that run on massively parallel High Performance Computing (HPC) platforms. The course ends with a more recent machine learning method known as deep learning, which has emerged as a promising disruptive approach that allows knowledge discovery from large datasets with unprecedented effectiveness and efficiency.
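The role of the validation dataset described above can be sketched with a hypothetical hold-out split; the `Split` struct, the `holdout` helper, and the split fraction below are illustrative, not part of the course material:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical hold-out split: the first train_frac of the samples are used
// for fitting, the remainder is reserved for validation. A growing gap
// between training accuracy and validation accuracy signals overfitting.
struct Split {
    std::vector<double> train;
    std::vector<double> validation;
};

Split holdout(const std::vector<double>& samples, double train_frac) {
    const std::size_t n_train =
        static_cast<std::size_t>(samples.size() * train_frac);
    Split s;
    s.train.assign(samples.begin(), samples.begin() + n_train);
    s.validation.assign(samples.begin() + n_train, samples.end());
    return s;
}
```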

This course is a PATC course (PRACE Advanced Training Centres).

PATC training course "Introduction to hybrid programming in HPC" @ LRZ

begin
18.Jan.2018 10:00
end
18.Jan.2018 17:00
venue
LRZ Garching

Most HPC systems are clusters of shared memory nodes. Such SMP nodes can be small multi-core CPUs up to large many-core CPUs. Parallel programming may combine the distributed memory parallelization on the node interconnect (e.g., with MPI) with the shared memory parallelization inside of each node (e.g., with OpenMP or MPI-3.0 shared memory). This course analyzes the strengths and weaknesses of several parallel programming models on clusters of SMP nodes. Multi-socket-multi-core systems in highly parallel environments are given special consideration. MPI-3.0 has introduced a new shared memory programming interface, which can be combined with inter-node MPI communication. It can be used for direct neighbor accesses similar to OpenMP or for direct halo copies, and enables new hybrid programming models. These models are compared with various hybrid MPI+OpenMP approaches and pure MPI. Numerous case studies and micro-benchmarks demonstrate the performance-related aspects of hybrid programming.

Tools for hybrid programming such as thread/process placement support and performance analysis are presented in a "how-to" section. This course provides scientific training in Computational Science, and in addition, the scientific exchange of the participants among themselves.
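As a minimal sketch of the shared-memory side of such hybrid models (illustrative code, not course material), an OpenMP reduction degrades gracefully to serial execution when compiled without OpenMP support:

```cpp
#include <vector>

// Shared-memory parallelization inside one node, in the OpenMP style:
// with OpenMP enabled, the iterations are distributed over threads and
// combined by the reduction clause; compilers without OpenMP ignore the
// pragma and run the same loop serially with the same result.
double dot(const std::vector<double>& x, const std::vector<double>& y) {
    double sum = 0.0;
    #pragma omp parallel for reduction(+ : sum)
    for (long i = 0; i < static_cast<long>(x.size()); ++i)
        sum += x[i] * y[i];
    return sum;
}
```

In a hybrid MPI+OpenMP code, each MPI process would run such a loop over its local partition and combine the partial sums across the node interconnect.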

The course is a PRACE Advanced Training Center event.

Training course "Programming the new KNL Cluster at LRZ" @ LRZ

begin
24.Jan.2018 09:00
end
25.Jan.2018 17:00
venue
LRZ Garching

The course will focus on how to program and use the new KNL cluster CoolMUC3 at LRZ.

Topics covered include:

  • The new CoolMUC3 KNL cluster at LRZ
  • Overview of the Intel MIC architecture
  • Overview of the Intel MIC programming models
  • KNL memory modes and cluster modes, MCDRAM
  • MKL on KNL
  • Vectorisation and Intel Xeon Phi performance optimisation
  • Intel tools for KNL

Training course "Introduction and training Intel KNL Many-Core - usage and profiling" @ JSC

begin
01.Feb.2018 00:00
end
01.Feb.2018 00:00
venue
JSC, Jülich

The Research Centre Jülich will extend its general-purpose supercomputing system JURECA with a so-called "Booster" based on Intel's KNL architecture. The aim of this course is to give users of the supercomputing resources an introductory overview of the KNL architecture and its usage, and to help them in making efficient use of their allocated resources.
Topics covered include

  • Overview of the KNL architecture
  • Building code for KNL
  • Analysing code correctness and performance
  • Roof-line and vectorisation analysis
  • Performance tuning
  • "Bring-your-own-code" hands-on sessions
  • Special topics depending on users' interests

Date has not yet been fixed. Probably 3 days in February.

Training course "Parallel Programming with MPI, OpenMP, and Tools" @ ZIH, Dresden

begin
12.Feb.2018 08:30
end
16.Feb.2018 12:30
venue
ZIH Dresden

The focus is on programming models MPI and OpenMP. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of the Message Passing Interface (MPI) and the shared memory directives of OpenMP. The last part is dedicated to tools. This course provides scientific training in Computational Science, and in addition, the scientific exchange of the participants among themselves. It is organized by ZIH in collaboration with HLRS. (Content Level: 70% for beginners, 30% advanced)

Training course "Programming with Fortran" @ LRZ

begin
14.Feb.2018 09:00
end
16.Feb.2018 18:00
venue
LRZ Garching

This course is targeted at scientists with little or no knowledge of the Fortran programming language who need it for participation in projects using a Fortran code base, for development of their own codes, and for getting acquainted with additional tools such as debuggers and syntax checkers, as well as the handling of compilers and libraries. The language is for the most part treated at the level of the Fortran 95 standard; features from Fortran 2003 are limited to improvements on the elementary level. Advanced Fortran features like object-oriented programming or coarrays will be covered in a follow-on course in autumn.

To consolidate the lecture material, each day's approximately 4 hours of lecture are complemented by 3 hours of hands-on sessions.

Course participants should have basic UNIX/Linux knowledge (login with secure shell, shell commands, basic programming, vi or emacs editors).

Training course "Introduction to Python" @ JSC

begin
19.Feb.2018 09:00
end
21.Feb.2018 16:30
venue
JSC, Jülich

This course gives an introduction to the programming language Python. Topics are: data types, control structures, object-oriented programming, module usage. Additionally, Python's standard library and the GUI programming with wxWidgets will be explained.

This course is given in German.

Training course "Introduction to Computational Fluid Dynamics" @ ZIMT, Uni. Siegen

begin
19.Feb.2018 09:00
end
23.Feb.2018 16:00
venue
Universität Siegen, Adolf-Reichwein-Straße 2, 57076 Siegen, Building A, Room: AR-A1007

The course deals with current numerical methods for Computational Fluid Dynamics in the context of high performance computing. An emphasis is placed on explicit methods for compressible flows, but classical numerical methods for incompressible Navier-Stokes equations are also covered. A brief introduction to turbulence modelling is also provided by the course. Additional topics are high order numerical methods for the solution of systems of partial differential equations. The last day is dedicated to parallelization.
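As a small illustration of the explicit methods mentioned above, here is a first-order upwind scheme for the 1D linear advection equation, a textbook sketch not taken from the course material:

```cpp
#include <cstddef>
#include <vector>

// First-order upwind scheme for the 1D linear advection equation
// u_t + a u_x = 0 with a > 0 and periodic boundaries -- a minimal
// instance of an explicit method; stability requires the CFL condition
// a * dt / dx <= 1.
std::vector<double> advect(std::vector<double> u, double a,
                           double dx, double dt, int steps) {
    const double c = a * dt / dx;  // CFL number
    std::vector<double> next(u.size());
    for (int s = 0; s < steps; ++s) {
        for (std::size_t i = 0; i < u.size(); ++i) {
            const std::size_t im = (i == 0) ? u.size() - 1 : i - 1;
            next[i] = u[i] - c * (u[i] - u[im]);  // upwind difference
        }
        u.swap(next);
    }
    return u;
}
```

Parallelizing such a stencil by splitting the grid among processes and exchanging boundary cells is exactly the kind of topic treated on the course's final day.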

Hands-on sessions will reinforce the contents of the lectures. In most of these sessions, the application framework APES will be used. They cover grid generation using Seeder, visualization with ParaView, and the usage of the parallel CFD solver Ateles on the local HPC system.

The course is organized by HLRS, IAG (University of Stuttgart) and STS, ZIMT (University of Siegen).

Training course "CFD with OpenFOAM®" @ HLRS

begin
05.Mar.2018 08:30
end
09.Mar.2018 15:30
venue
HLRS Stuttgart

OpenFOAM® is a widely-used open-source code and a powerful framework for solving a variety of problems, mainly in the field of CFD. The five-day workshop gives an introduction to OpenFOAM® applied to CFD phenomena and is intended for beginners as well as for people with CFD background knowledge. The user will learn about case setup and meshing tools such as snappyHexMesh and cfMesh. Available OpenFOAM® utilities and additional libraries such as swak4Foam, which can be used for pre- and postprocessing tasks, are further aspects of this course. Additionally, basic solvers and major aspects of the code structure are highlighted. Lectures and hands-on sessions with typical CFD examples will guide participants through this course, including first steps in their own coding. This course provides scientific training in Computational Science, and in addition, the scientific exchange of the participants among themselves.

PATC training course "Parallel I/O and Portable Data Formats" @ JSC

begin
12.Mar.2018 09:00
end
14.Mar.2018 16:30
venue
JSC, Jülich

Numerical simulations conducted on current high-performance computing (HPC) systems face an ever-growing need for scalability. Larger HPC platforms provide opportunities to push the limits on the size and properties of what can be accurately simulated. This entails processing larger data sets, be it reading input data or writing results. Serial approaches to handling I/O in a parallel application will dominate performance on massively parallel systems, leaving a lot of computing resources idle during those serial application phases.

In addition to the need for parallel I/O, input and output data are often processed on different platforms. This heterogeneity can impose a high maintenance burden when different data representations are needed. Portable, self-describing data formats such as HDF5 and netCDF are examples of data formats already widely used within certain communities.

This course will start with an introduction to the basics of I/O, including basic I/O-relevant terms, an overview of parallel file systems with a focus on GPFS, and the HPC hardware available at JSC. Different I/O strategies will be presented. The course will introduce the use of the HDF5, NetCDF (NetCDF4 and PnetCDF), and SIONlib library interfaces as well as MPI-I/O. Optimization potential and best practices are discussed.
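The claim that serial I/O phases dominate at scale can be made concrete with an Amdahl-style estimate; the `io_speedup` helper and the 5% serial fraction below are illustrative arithmetic, not part of the course:

```cpp
// Amdahl-style estimate of why a serial I/O phase dominates at scale:
// if a fraction f of the runtime stays serial, the speedup on p processes
// is bounded by 1 / (f + (1 - f) / p), i.e. by 1/f no matter how large p is.
double io_speedup(double serial_fraction, int p) {
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p);
}
```

Even a modest serial I/O phase therefore caps the achievable speedup, which is what motivates the parallel I/O strategies and libraries taught in the course.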

This course is a PRACE Advanced Training Centres (PATC) course.

Training course "Parallel Programming of High Performance Systems" @ RRZE Erlangen

begin
12.Mar.2018 09:00
end
16.Mar.2018 18:00
venue
RRZE Erlangen

This course, a collaboration of Erlangen Regional Computing Centre (RRZE) and LRZ, is targeted at students and scientists with interest in programming modern HPC hardware, specifically the large scale parallel computing systems available in Jülich, Stuttgart and Munich.

Each day is comprised of approximately 4 hours of lectures and 3 hours of hands-on sessions.

Day 1

  • Introduction to High Performance Computing (Weinberg)
  • Secure shell (Weinberg)
  • Source code versioning with SVN and GitLab (N.N.)
  • Handling console and GUI based interfaces (Weinberg)
  • Building programs with GNU MAKE (Weinberg)

Day 2

  • Basic parallel programming models: elements of MPI and OpenMP (Weinberg)
  • Processor architectures (Hager)

Day 3

  • Principles of code optimization: unrolling, blocking, dependencies, C++ issues, bandwidth issues, performance projections (Hager)
  • Advanced OpenMP programming (Weinberg)

Day 4

  • Parallel architectures: multi-core, multi-socket, ccNUMA, cache coherence and affinity, tools for handling memory affinity (Hager)
  • Parallel algorithms: data parallelism, domain decomposition, task parallelism, master-worker, granularity, load balancing, scalability models (Hager)
  • Advanced MPI programming (Wittmann)
  • Basics of software engineering (Navarrete)

Day 5

  • Intel tools: OpenMP performance and correctness (Wittmann)
  • Performance analysis with Score-P and Scalasca (Navarrete)
  • Intel tools: MPI tracing and Checking (Iapichino)
  • Intel VTune (Iapichino)

Training course "Introduction to ParaView for the visualization of scientific data" @ JSC

begin
15.Mar.2018 09:00
end
15.Mar.2018 16:30
venue
JSC, Jülich

This course is given in German.

ParaView is an open-source application based on the Visualization Toolkit (VTK) for analyzing and visualizing scientific and technical datasets. All common operating systems (e.g. Windows, Linux, Mac OS X) are supported. Since ParaView follows a parallel concept, it can be used not only on single PCs, laptops, and workstations, but also in a parallel mode on clusters and supercomputers, which makes it suitable for visualizing very large datasets. In addition, ParaView can render datasets that are stored on central data and visualization servers directly on those servers and display the resulting image on the end user's local workstation, without the actual dataset having to be transferred to the user (remote rendering).

The course demonstrates the use of ParaView by means of examples. The user interface is explained, typical input data formats are treated, and important visualization methods are introduced. The visualization of time-dependent data, the creation of animations, and the parallel version of ParaView are also covered.

Training course "Iterative Linear Solvers and Parallelization" @ HLRS

begin
19.Mar.2018 08:30
end
23.Mar.2018 15:30
venue
HLRS Stuttgart

The focus is on iterative and parallel solvers, the parallel programming models MPI and OpenMP, and the parallel middleware PETSc. Thereby, different modern Krylov Subspace Methods (CG, GMRES, BiCGSTAB ...) as well as highly efficient preconditioning techniques are presented in the context of real life applications. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of iterative solvers, the Message Passing Interface (MPI) and the shared memory directives of OpenMP. This course provides scientific training in Computational Science, and in addition, the scientific exchange of the participants among themselves. It is organized by HLRS, IAG, Uni. Kassel, and SFB/TRR30.
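As a serial sketch of one of the Krylov subspace methods named above, a minimal unpreconditioned conjugate gradient solver for a small dense SPD system might look like this (illustrative code, not course material):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Unpreconditioned conjugate gradient (CG) for a small dense symmetric
// positive definite system A x = b. Real applications use sparse matrices,
// preconditioning, and parallel matrix-vector products, as the course covers.
using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

Vec matvec(const Mat& A, const Vec& x) {
    Vec y(x.size(), 0.0);
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t j = 0; j < x.size(); ++j)
            y[i] += A[i][j] * x[j];
    return y;
}

double inner(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

Vec cg(const Mat& A, const Vec& b, int max_iter, double tol) {
    Vec x(b.size(), 0.0), r = b, p = r;   // start from x = 0
    double rr = inner(r, r);
    for (int k = 0; k < max_iter && std::sqrt(rr) > tol; ++k) {
        Vec Ap = matvec(A, p);
        const double alpha = rr / inner(p, Ap);   // step length
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        const double rr_new = inner(r, r);
        for (std::size_t i = 0; i < p.size(); ++i)
            p[i] = r[i] + (rr_new / rr) * p[i];   // new search direction
        rr = rr_new;
    }
    return x;
}
```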

Training course "Introduction to parallel programming with MPI and OpenMP" @ JSC

begin
19.Mar.2018 09:00
end
22.Mar.2018 16:30
venue
JSC, Jülich

An introduction to the parallel programming of supercomputers is given. The focus is on the usage of the Message Passing Interface (MPI), the most often used programming model for systems with distributed memory. Furthermore, OpenMP will be presented, which is often used on shared-memory architectures.

Knowledge of either C, C++ or Fortran, basic knowledge of UNIX/Linux and a UNIX standard editor (e.g. vi, emacs) is a prerequisite.

PATC training course "Advanced Topics in High Performance Computing" @ LRZ

begin
26.Mar.2018 09:00
end
29.Mar.2018 17:00
venue
LRZ Garching

In this add-on course to the parallel programming course, special topics are treated in more depth, in particular performance analysis, I/O, and PGAS concepts. It is provided in a collaboration between the Erlangen Regional Computing Centre (RRZE) and LRZ within KONWIHR.

The course is a PRACE Advanced Training Center event.

Each day is comprised of approximately 5 hours of lectures and 2 hours of hands-on sessions.

Day 1

  • Processor-Specific Optimization (Eitzinger)

Day 2

  • Parallel I/O with MPI IO (Wittmann)
  • SuperMUC Tour (Weinberg)
  • Tuning I/O on LRZ's HPC systems / I/O Profiling: Darshan tool (Mendez)

Day 3

  • Scientific Data Libraries: HDF5 / Scalable I/O library: SIONlib (Mendez)
  • Introduction into Intel Xeon Phi (KNL) Programming (Weinberg)

Day 4

  • PGAS programming with coarray Fortran and Unified Parallel C (Bader)

Prerequisites: Good MPI and OpenMP knowledge as presented in the course "Parallel programming of High Performance Systems".

Training course "Introduction to Parallel In-Situ Visualization" @ JSC

begin
01.Apr.2018 00:00
end
01.Apr.2018 00:00
venue
JSC, Jülich

VisIt is a distributed, parallel visualization and graphical analysis tool for data defined on two- and three-dimensional (2D and 3D) meshes. It lets you instrument your simulation code for in-situ visualization and analysis: visualization capabilities are added inside the simulation, so the data can be visualized with the same level of resources that is used to calculate it.

The course will cover an introduction to VisIt and its basic aspects as a general visualization tool. The second half will focus on the use of VisIt in massively parallel environments, especially its integration into existing simulation codes for in-situ visualization. In a hands-on session, the integration of VisIt will be demonstrated on a small parallel application.

Topics covered will include:

  • Introduction to VisIt
  • In-Situ Visualization with VisIt
  • Hands-on session

Date has not yet been fixed. Probably one day in April 2018.

Training course "From zero to hero: Understanding and fixing intra-node performance bottlenecks" @ JSC

begin
11.Apr.2018 09:00
end
12.Apr.2018 16:30
venue
JSC, Jülich

Modern HPC hardware has many advanced features that are not easily accessible but contribute significantly to overall intra-node performance. However, many compute-bound HPC applications have historically grown to just use more cores and were not designed to utilize these features.

To make things worse, modern compilers cannot generate fully vectorized code automatically, unless the data structures and dependencies are very simple. As a consequence, such applications use only a low percentage of the available peak performance. As scientists, we therefore have the added responsibility of designing generic data layouts and data access patterns that give the compiler a fighting chance to generate code utilizing most of the available hardware features. Such data layouts and access patterns are vital to exploiting the performance offered by vectorization/SIMDization. Generic algorithms like FFTs or basic linear algebra can be accelerated by using 3rd-party libraries and tools especially tuned and optimized for a multitude of different hardware configurations.

But what happens if your problem does not fall into this category and 3rd-party libraries are not available? This training course will shed some light on how the goals of utilizing on-core performance and, ultimately, performance portability can be achieved.

In the first part of the training course we want to give insights in today's CPU microarchitecture and apply this knowledge in the hands-on session. As a demonstrator we will use a simple Coulomb solver and improve the code step-by-step. We will start from a basic implementation and advance to an optimized version using hardware features like vectorization to increase performance.

The exercises will also contain training on the use of open-source tools to measure and understand the achieved performance. Such optimizations, however, depend heavily on the targeted hardware and should not be part of the algorithmic layer of the code.

In the second part we will present a detailed description of possible abstraction layers to hide such hardware-specifics and therefore maintain readability and maintainability. We will also discuss the overhead costs of our introduced abstraction and show compile-time SIMD configurations and corresponding performance results on different platforms.

Some covered topics:

  • Inside a CPU: A scientist's view of modern CPU microarchitecture
  • Data structures: When to use SoA, AoS, and AoSoA
  • Vectorization: SIMD on JURECA and JURECA Booster
  • Unrolling: Loop-unrolling for out-of-order execution and instruction-level parallelism
  • Data Reuse: Register file and cache-blocking
  • Compiler: When and how to use compiler optimization flags
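The data-layout topic above (SoA vs. AoS) can be sketched as follows; the particle structures are hypothetical examples, not taken from the course:

```cpp
#include <vector>

// Array-of-structures (AoS): fields of one element are adjacent, so a loop
// over a single field walks memory with a stride of three doubles.
struct ParticleAoS { double x, y, z; };

// Structure-of-arrays (SoA): each field is one contiguous array, giving
// unit-stride, SIMD-friendly access -- the kind of generic data layout
// the course argues scientists must design for the compiler.
struct ParticlesSoA {
    std::vector<double> x, y, z;
};

double sum_x_soa(const ParticlesSoA& p) {
    double s = 0.0;
    for (double v : p.x) s += v;       // unit-stride loads, vectorizable
    return s;
}

double sum_x_aos(const std::vector<ParticleAoS>& p) {
    double s = 0.0;
    for (const auto& q : p) s += q.x;  // strided loads, harder to vectorize
    return s;
}
```

Both functions compute the same result; only the memory access pattern, and hence the achievable SIMD performance, differs.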

If you ever asked yourself one of the following questions, this course is for you.

  • What is the performance of my code and how fast could it actually be?
  • Why is my performance so bad?
  • Does my code use SIMD?
  • Why does my code not use SIMD and why does the compiler not help me?
  • Is my data-structure optimal for this architecture?
  • Do I need to redo everything for the next machine?
  • Why is this so complicated? I thought the science was the hard part.

The course consists of lectures and hands-on sessions. After each topic is presented, the participants can apply the knowledge right-away in the hands-on training. The C++ code examples are generic and advance step-by-step. Even if you do not speak C++, it will be possible to follow along and understand the underlying concepts.

PATC training course "GPU programming with CUDA" @ JSC

begin
23.Apr.2018 09:00
end
25.Apr.2018 16:30
venue
JSC, Jülich

GPU-accelerated computing drives current scientific research. Writing fast numeric algorithms for GPUs offers high application performance by offloading compute-intensive portions of the code to an NVIDIA GPU. The course will cover basic aspects of GPU architectures and programming. Focus is on the usage of the parallel programming language CUDA-C which allows maximum control of NVIDIA GPU hardware. Examples of increasing complexity will be used to demonstrate optimization and tuning of scientific applications.

Topics covered will include:

  • Introduction to GPU/Parallel computing
  • Programming model CUDA
  • GPU libraries like CuBLAS and CuFFT
  • Tools for debugging and profiling
  • Performance optimizations

This course is a PATC course (PRACE Advanced Training Centres).

Training course "Cray XC40-Workshop on Scaling and Node-level Performance" @ HLRS

begin
23.Apr.2018 09:00
end
26.Apr.2018 16:30
venue
HLRS Stuttgart

In August 2015, the Cray XC40 supercomputer Hornet at HLRS was upgraded to a new system named "Hazel Hen", featuring 7724 compute nodes, each equipped with two 12-core Intel Haswell processors running at 2.5 GHz. Each node has 128 GB of DDR4 memory and is connected to the other nodes through the Cray Aries network. The peak performance amounts to 7.4 PFlops.

HLRS and Cray offer this workshop in order to help users running their codes on this new large system.

The course gives an overview of the XC40 system. Specialists from Cray will talk about the hardware, best practices, and the new software enhancements.

PATC training course "VI-HPS Tuning Workshop" @ LRZ

begin
23.Apr.2018 09:00
end
27.Apr.2018 18:00
venue
LRZ, Garching

This workshop organized by VI-HPS, LRZ and IT4Innovations for the PRACE Advanced Training Centre (PATC) at LRZ will:

  • give an overview of the VI-HPS programming tools suite
  • explain the functionality of individual tools, and how to use them effectively
  • offer hands-on experience and expert assistance using the tools

To foster the Czech-German collaboration in high performance computing, a contingent of places has been reserved for participants from the Czech Republic.

Programme Overview

Presentations and hands-on sessions are planned on the following topics (tbc.)

  • Setting up, welcome and introduction
  • Score-P instrumentation and measurement
  • Scalasca automated trace analysis
  • Vampir interactive trace analysis
  • Periscope/PTF automated performance analysis and optimisation
  • Extra-P automated performance modeling
  • Paraver/Extrae/Dimemas trace analysis and performance prediction
  • [k]cachegrind cache utilisation analysis
  • MAQAO performance analysis & optimisation
  • MUST runtime error detection for MPI
  • ARCHER runtime error detection for OpenMP
  • MAP+PR profiling and performance reports

A brief overview of the capabilities of these and associated tools is provided in the VI-HPS Tools Guide.

The workshop will be held in English and run from 09:00 to not later than 18:00 each day, with breaks for lunch and refreshments. The course is free of charge as the workshop is sponsored through the PRACE PATC program. All participants are responsible for their own travel and accommodation.

Participants are encouraged to prepare their own MPI, OpenMP and hybrid MPI+OpenMP parallel application codes for analysis.

Training course "Programming in C++" @ JSC

begin
14.May.2018 09:00
end
17.May.2018 16:30
venue
JSC, Jülich

C++ is a multi-paradigm programming language supporting procedural, object-oriented, generic and functional programming styles. This course will provide a practical introduction to C++, adhering to the latest official language standard at the time of the course.

The participants will study small example programs, each demonstrating a certain aspect of C++, and then do simple programming exercises using the lessons learned from the examples. The initial focus of the course will be to make the participants comfortable with modern C++, e.g. solving small problems using the STL containers and algorithms along with lambda functions. Syntax will be explained in detail when needed. Once the participants are familiar and comfortable with the easy-to-use aspects of modern C++, the powerful abstraction mechanisms of the language, such as classes, class hierarchies, and templates, will be presented in depth. It is hoped that this course will encourage fruitful application of the programming language and provide a good foundation for further learning.
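A minimal example in the spirit of the exercises described above, combining an STL container, an algorithm, and a lambda (illustrative, not an actual course exercise):

```cpp
#include <numeric>
#include <vector>

// Sum of squares via std::accumulate with a lambda as the binary operation,
// instead of a hand-written loop -- the "modern C++" style the course teaches.
int sum_of_squares(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0,
                           [](int acc, int x) { return acc + x * x; });
}
```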

It is assumed that the participants have previous programming experience in languages such as C, C++, Python, Java and Fortran. This course introduces programming in C++14 and C++17. It is not meant to be a beginners' introduction to programming.

Training course "Introduction to the programming and usage of the supercomputing resources at Jülich" @ JSC

begin
28.May.2018 13:00
end
29.May.2018 16:30
venue
JSC, Jülich

Through the John von Neumann Institute for Computing, Research Centre Jülich provides two major high-performance computing resources to scientific user groups from throughout Germany and Europe. The aim of this course is to give new users of the supercomputing resources an introductory overview of the systems and their usage, and to help them in making efficient use of their allocated resources.

PATC training course "High-performance scientific computing in C++" @ JSC

begin
11.Jun.2018 09:00
end
13.Jun.2018 16:30
venue
JSC, Jülich

Modern C++, with its support for procedural, object-oriented, generic, and functional programming styles, offers many powerful abstraction mechanisms to express complexity at a high level while remaining very efficient. It is therefore the language of choice for many scientific projects. However, achieving high performance on contemporary computer hardware, with its many levels of parallelism, requires understanding C++ code from a more performance-centric viewpoint.

In this course, the participants will learn how to write C++ programs which better utilize typical HPC hardware resources of the present day. The course is geared towards scientists and engineers, who are already familiar with C++14, and wish to develop maintainable and fast applications. They will learn to identify and avoid performance degrading characteristics, such as unnecessary memory operations, branch mispredictions, and unintentionally strong ordering assumptions. Two powerful open source libraries to help write structured parallel applications will also be introduced:

  • Intel (R) Threading Building Blocks
  • NVIDIA Thrust
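Two of the performance-degrading characteristics mentioned above, unnecessary memory operations and branch mispredictions, can be sketched in plain C++ without either library (illustrative code, not course material):

```cpp
#include <cstddef>
#include <vector>

// Avoiding unnecessary memory operations: reserving capacity up front
// replaces the repeated reallocate-and-copy cycles that growing push_back
// would otherwise trigger.
std::vector<int> squares(std::size_t n) {
    std::vector<int> out;
    out.reserve(n);  // one allocation instead of several reallocations
    for (std::size_t i = 0; i < n; ++i)
        out.push_back(static_cast<int>(i * i));
    return out;
}

// Branch-free accumulation: the data-dependent conditional becomes
// arithmetic, sidestepping branch mispredictions on unpredictable input.
long count_positive(const std::vector<int>& v) {
    long c = 0;
    for (int x : v) c += (x > 0);  // no branch on the data
    return c;
}
```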

This course is a PRACE Advanced Training Centres (PATC) course.

PATC training course "High-performance computing with Python" @ JSC

begin
18.Jun.2018 09:00
end
19.Jun.2018 16:30
venue
JSC, Jülich

Python is increasingly used in high-performance computing projects. It can be used either as a high-level interface to existing HPC applications and libraries, as an embedded interpreter, or directly.

This course combines lectures and hands-on sessions. We will show how Python can be used on parallel architectures and how to optimize critical parts of the code using various tools.

The following topics will be covered:

  • Interactive parallel programming with IPython
  • Profiling and optimization
  • High-performance NumPy
  • Just-in-time compilation with numba
  • Distributed-memory parallel programming with Python and MPI
  • Bindings to other programming languages and HPC libraries
  • Interfaces to GPUs

This course is aimed at scientists who wish to explore the productivity gains made possible by Python for HPC.
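To give a flavour of the "High-performance NumPy" topic, the sketch below contrasts a plain Python loop with the equivalent vectorized NumPy expression; the function names and data are invented for illustration and are not course material.

```python
import numpy as np

def sum_of_squares_loop(values):
    # Plain Python loop: every element is handled by the interpreter.
    total = 0.0
    for v in values:
        total += v * v
    return total

def sum_of_squares_numpy(arr):
    # Vectorized form: the whole reduction runs in optimized native code.
    return float(np.dot(arr, arr))

data = np.linspace(0.0, 1.0, 100_000)
# Both forms compute the same value; the NumPy version is typically
# orders of magnitude faster for large arrays.
print(sum_of_squares_numpy(data))
```

Profiling tools such as those covered in the course can then quantify the difference on real workloads.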

This course is a PRACE Advanced Training Centres (PATC) course.

Training course "Introduction to parallel programming with MPI and OpenMP" @ JSC

begin
14.Aug.2018 09:00
end
17.Aug.2018 16:30
venue
JSC, Jülich

An introduction to the parallel programming of supercomputers is given. The focus is on the usage of the Message Passing Interface (MPI), the most often used programming model for systems with distributed memory. Furthermore, OpenMP will be presented, which is often used on shared-memory architectures.
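The course itself teaches MPI and OpenMP in C and Fortran, but the underlying message-passing idea — separate processes with private memory that exchange explicit messages — can be sketched with Python's standard multiprocessing module alone (no MPI involved; all names here are illustrative):

```python
from multiprocessing import get_context

def worker(conn, rank):
    # Each process owns its data (distributed memory) and communicates
    # only through explicit messages, as in the MPI model.
    local_sum = sum(range(rank * 100, (rank + 1) * 100))
    conn.send(local_sum)
    conn.close()

def gather_sum(nprocs=4):
    # Spawn workers and collect their partial sums on the "root"
    # process, analogous to an MPI reduce operation.
    ctx = get_context("fork")  # explicit start method (Unix only)
    pipes = []
    for rank in range(nprocs):
        parent, child = ctx.Pipe()
        ctx.Process(target=worker, args=(child, rank)).start()
        pipes.append(parent)
    return sum(conn.recv() for conn in pipes)

if __name__ == "__main__":
    print(gather_sum())  # partial sums of 0..399 combine to 79800
```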

This course is mainly intended for guest students at JSC. Up to 15 additional participants can take part in the course after consulting Benedikt Steinbusch at JSC.

Training course "Introduction to GPU programming using OpenACC" @ JSC

begin
01.Oct.2018
end
01.Oct.2018
venue
JSC, Jülich

GPU-accelerated computing drives much of current scientific research. Offloading compute-intensive portions of the code to the GPU can yield high application performance. The course will cover basic aspects of GPU architectures and programming. The focus is on the directive-based OpenACC programming model, which allows for portable application development. Examples of increasing complexity will be used to demonstrate optimization and tuning of scientific applications.

Topics covered will include:

  • Introduction to GPU/Parallel computing
  • Programming model OpenACC
  • Interoperability of OpenACC with GPU libraries such as cuBLAS and cuFFT
  • Multi-GPU Programming with MPI and OpenACC
  • Tools for debugging and profiling
  • Performance optimization

The date has not yet been fixed; the course will probably run for two days in October 2018.

Training course "Porting code from Matlab to Python" @ JSC

begin
08.Oct.2018 09:00
end
09.Oct.2018 16:30
venue
JSC, Jülich

Python is becoming a popular language for scientific applications and is increasingly used for high performance computing. In this course we want to introduce Matlab programmers to the usage of Python. Matlab and Python have a comparable language philosophy, but Python can offer better performance using its optimizations and parallelization interfaces. Python also increases the portability and flexibility (interaction with other open source and proprietary software packages) of solutions, and can be run on supercomputing resources without high licensing costs.

The training course will be divided into three stages: first, attendees will learn how to do a direct translation of language concepts from Matlab to Python. Then, optimization of scripts using more Pythonic data structures and functions will be shown. Finally, the code will be taken to the supercomputers, where basic parallel programming with MPI will be used to exploit parallelism in the computation.
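As a hypothetical example of the first stage, the snippet below translates a few lines of Matlab-style code directly into NumPy; the variables are invented for illustration.

```python
import numpy as np

# Matlab:  x = linspace(0, 1, 5);  y = x.^2 + 1;  m = mean(y);
x = np.linspace(0.0, 1.0, 5)  # linspace exists in NumPy under the same name
y = x**2 + 1                  # Matlab's elementwise .^ becomes Python's **
m = y.mean()                  # mean(y) becomes an array method call

# One recurring difference: NumPy indexing is 0-based,
# so Matlab's y(1) corresponds to y[0] here.
print(m)
```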

The course will focus on numerical and statistical analysis as well as on image processing applications.

This course involves theoretical and hands-on sessions, which will be guided by experts in Python, Matlab and high-performance computing. Attendees are highly encouraged to bring their own Matlab scripts.

Training course "Software Development in Science" @ JSC

begin
19.Nov.2018 09:00
end
19.Nov.2018 16:30
venue
JSC, Jülich

Scientific research increasingly relies on software. Software engineering and development play a key role in the production of software. Thus, formal education and training in software development methodologies become more important, particularly in larger software projects. Software development in teams needs formalized processes to get a reliable outcome. The aim of this course is to give an introduction to established software development methodologies and best practices. The lessons learned in this workshop can be applied to large projects but will also help individual researchers to improve the quality of their software.

Topics covered are:

  • Overview of software development methodologies
  • Scrum and agile practices
  • Version control: hands-on training, working with Git and GitHub
  • Open source and community building
  • Licenses and copyright
  • Software testing and quality
  • Documentation
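As a small illustration of the "software testing and quality" topic, here is a minimal automated unit test written with Python's standard unittest module; the function under test is invented for the example.

```python
import unittest

def moving_average(values, window):
    """Simple moving average; rejects invalid window sizes."""
    if window < 1 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

class MovingAverageTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(moving_average([1, 2, 3, 4], 2), [1.5, 2.5, 3.5])

    def test_invalid_window(self):
        # Error handling is part of the tested contract.
        with self.assertRaises(ValueError):
            moving_average([1, 2], 5)

if __name__ == "__main__":
    # exit=False lets the script continue after the test run.
    unittest.main(argv=["moving_average_test"], exit=False)
```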

Training course "Introduction to the programming and usage of the supercomputing resources at Jülich" @ JSC

begin
22.Nov.2018 13:00
end
23.Nov.2018 16:30
venue
JSC, Jülich

Through the John von Neumann Institute for Computing, Research Centre Jülich provides two major high-performance computing resources to scientific user groups from throughout Germany and Europe. The aim of this course is to give new users of the supercomputing resources an introductory overview of the systems and their usage, and to help them in making efficient use of their allocated resources.

Training course "Advanced parallel programming with MPI and OpenMP" @ JSC

begin
26.Nov.2018 09:00
end
28.Nov.2018 16:30
venue
JSC, Jülich

The focus is on advanced programming with MPI and OpenMP. The course addresses participants who already have some experience with C/C++ or Fortran and with MPI and OpenMP, the most popular programming models in high-performance computing (HPC).

The course will teach the newest methods in MPI-3.0/3.1 and OpenMP-4.5, which were developed for the efficient use of current HPC hardware. MPI topics include the group and communicator concept, process topologies, derived data types, the new MPI-3.0 Fortran language binding, one-sided communication and the new MPI-3.0 shared-memory programming model within MPI. OpenMP topics include the OpenMP-4.0 extensions, such as the vectorization directives, thread affinity and OpenMP places. (GPU programming with OpenMP-4.0 directives is not part of this course.) The course also contains performance and best-practice considerations, e.g., with hybrid MPI+OpenMP parallelisation. The course ends with a section presenting tools for parallel programming.

Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the taught constructs of the Message Passing Interface (MPI) and the shared-memory directives of OpenMP. This course provides scientific training in Computational Science and also fosters scientific exchange among the participants. It is organized by JSC in collaboration with HLRS. (Content level: 20% beginners, 50% intermediate, 30% advanced.)
