
PATC training course "Parallel I/O and Portable Data Formats" @ JSC

begin
12.Mar.2018 09:00
end
14.Mar.2018 16:30
venue
JSC, Jülich

Numerical simulations conducted on current high-performance computing (HPC) systems face an ever-growing need for scalability. Larger HPC platforms make it possible to push the limits on the size and properties of what can be simulated accurately, which in turn requires processing larger data sets, whether reading input data or writing results. Serial approaches to handling I/O in a parallel application will dominate performance on massively parallel systems, leaving many computing resources idle during those serial application phases.

In addition to the need for parallel I/O, input and output data are often processed on different platforms. This heterogeneity can impose a high maintenance burden when different data representations are needed. Portable, self-describing data formats such as HDF5 and netCDF address this problem and are already widely used within certain communities.
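
The core idea behind a self-describing format can be sketched in a few lines: the file carries its own metadata (names, types, element counts), so any platform can interpret it without out-of-band knowledge. The toy container below is a hypothetical illustration only; real HDF5 and netCDF files are far richer.

```python
# Toy self-describing container: a JSON header describes the payload
# that follows it, so a reader needs no external schema.
# Hypothetical format for illustration; not HDF5 or netCDF.
import io
import json
import struct

def write_dataset(buf, name, values):
    """Write a named float64 array preceded by a JSON header describing it."""
    header = json.dumps({"name": name, "dtype": "float64",
                         "count": len(values)}).encode("utf-8")
    buf.write(struct.pack("<I", len(header)))            # header length
    buf.write(header)                                    # self-describing metadata
    buf.write(struct.pack(f"<{len(values)}d", *values))  # payload

def read_dataset(buf):
    """Recover name and data using only information stored in the stream."""
    (hlen,) = struct.unpack("<I", buf.read(4))
    meta = json.loads(buf.read(hlen))
    raw = struct.unpack(f"<{meta['count']}d", buf.read(8 * meta["count"]))
    return meta["name"], list(raw)

buf = io.BytesIO()
write_dataset(buf, "temperature", [273.15, 293.15, 310.0])
buf.seek(0)
name, data = read_dataset(buf)
```

Because byte order and element type are fixed and recorded, the same bytes round-trip on any platform; that portability is precisely what HDF5 and netCDF provide at scale.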

This course will start with an introduction to the basics of I/O, including basic I/O-relevant terms, an overview of parallel file systems with a focus on GPFS, and the HPC hardware available at JSC. Different I/O strategies will be presented. The course will introduce the HDF5, netCDF (netCDF-4 and PnetCDF) and SIONlib library interfaces as well as MPI-I/O. Optimization potential and best practices will be discussed.
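
One of the I/O strategies such courses typically cover is the shared-file approach implemented by MPI-I/O, HDF5 and PnetCDF: each process writes its block at a disjoint offset, so no process has to funnel data through a single rank. The sketch below emulates the offset arithmetic sequentially with `seek()`; it is a conceptual illustration, not MPI code.

```python
# Shared-file strategy, emulated: each "rank" owns a disjoint byte
# range of one file, computed from its rank number.
import io

NPROCS = 4
BLOCK = 3  # bytes per process

shared = io.BytesIO()
for rank in range(NPROCS):            # in MPI-I/O these writes run concurrently
    local = bytes([rank]) * BLOCK     # each rank's local data
    shared.seek(rank * BLOCK)         # disjoint offset: no overlap, no funneling
    shared.write(local)

contents = shared.getvalue()
```

The alternative strategies (task-local files, or funneling everything through rank 0) trade this offset bookkeeping for either many small files or a serial bottleneck.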

This course is a PRACE Advanced Training Centres (PATC) course.

PATC training course "OpenMP GPU Directives for Parallel Accelerated Supercomputers - an alternative to CUDA from Cray perspective" @ HLRS

begin
12.Mar.2018 09:00
end
13.Mar.2018 16:30
venue
HLRS Stuttgart

This workshop will cover the directive-based programming model of OpenMP v4, whose multi-vendor support allows users to portably develop applications for parallel accelerated supercomputers. It also includes a comparison to the predecessor interface, OpenACC v2. The workshop will also demonstrate how to use the Cray Programming Environment tools to identify application bottlenecks, facilitate porting, provide accelerated performance feedback, and tune the ported applications. The Cray scientific libraries for accelerators will be presented, and interoperability of the directives approach with these and with CUDA will be demonstrated. Through application case studies and tutorials, users will gain direct experience with using both OpenMP and OpenACC directives in realistic applications. Users may also bring their own codes to discuss with Cray specialists or begin porting. This course provides scientific training in Computational Science and, in addition, scientific exchange among the participants.

PATC training course "Advanced Topics in High Performance Computing" @ LRZ

begin
26.Mar.2018 09:00
end
29.Mar.2018 17:00
venue
LRZ Garching

In this add-on course to the parallel programming course, selected topics are treated in more depth, in particular performance analysis, I/O and PGAS concepts. It is provided in collaboration between the Erlangen Regional Computing Centre (RRZE) and LRZ within KONWIHR.

The course is a PRACE Advanced Training Center event.

Each day comprises approximately 5 hours of lectures and 2 hours of hands-on sessions.

Day 1

  • Processor-Specific Optimization (Eitzinger)

Day 2

  • Parallel I/O with MPI IO (Wittmann)
  • SuperMUC Tour (Weinberg)
  • Tuning I/O on LRZ's HPC systems / I/O Profiling: Darshan tool (Mendez)

Day 3

  • Scientific Data Libraries: HDF5 / Scalable I/O library: SIONlib (Mendez)
  • Introduction into Intel Xeon Phi (KNL) Programming (Weinberg)

Day 4

  • PGAS programming with coarray Fortran and Unified Parallel C (Bader)

Prerequisites: Good MPI and OpenMP knowledge as presented in the course "Parallel programming of High Performance Systems".

PATC training course "Fortran for Scientific Computing" @ HLRS

begin
09.Apr.2018 08:30
end
13.Apr.2018 15:30
venue
HLRS Stuttgart

This course is dedicated to scientists and students who want to learn (sequential) programming of scientific applications in Fortran. The course teaches the newest Fortran standards. Hands-on sessions will allow participants to immediately test and understand the language constructs. This workshop provides scientific training in Computational Science and, in addition, scientific exchange among the participants.

This course is a PATC course (PRACE Advanced Training Centres).

PATC training course "VI-HPS Tuning Workshop" @ LRZ

begin
23.Apr.2018 09:00
end
27.Apr.2018 18:00
venue
LRZ, Garching

This workshop organized by VI-HPS, LRZ and IT4Innovations for the PRACE Advanced Training Centre (PATC) at LRZ will:

  • give an overview of the VI-HPS programming tools suite
  • explain the functionality of individual tools, and how to use them effectively
  • offer hands-on experience and expert assistance using the tools

To foster the Czech-German collaboration in high performance computing, a contingent of places has been reserved for participants from the Czech Republic.

Programme Overview

Presentations and hands-on sessions are planned on the following topics (to be confirmed):

  • Setting up, welcome and introduction
  • Score-P instrumentation and measurement
  • Scalasca automated trace analysis
  • Vampir interactive trace analysis
  • Periscope/PTF automated performance analysis and optimisation
  • Extra-P automated performance modeling
  • Paraver/Extrae/Dimemas trace analysis and performance prediction
  • [k]cachegrind cache utilisation analysis
  • MAQAO performance analysis & optimisation
  • MUST runtime error detection for MPI
  • ARCHER runtime error detection for OpenMP
  • MAP+PR profiling and performance reports

A brief overview of the capabilities of these and associated tools is provided in the VI-HPS Tools Guide.

The workshop will be held in English and will run from 09:00 to no later than 18:00 each day, with breaks for lunch and refreshments. The course is free of charge, as the workshop is sponsored through the PRACE PATC programme. All participants are responsible for their own travel and accommodation.

Participants are encouraged to prepare their own MPI, OpenMP and hybrid MPI+OpenMP parallel application codes for analysis.

PATC training course "GPU programming with CUDA" @ JSC

begin
23.Apr.2018 09:00
end
25.Apr.2018 16:30
venue
JSC, Jülich

GPU-accelerated computing drives current scientific research. Writing fast numeric algorithms for GPUs offers high application performance by offloading compute-intensive portions of the code to an NVIDIA GPU. The course will cover basic aspects of GPU architectures and programming. The focus is on the parallel programming language CUDA C, which allows maximum control of NVIDIA GPU hardware. Examples of increasing complexity will be used to demonstrate optimization and tuning of scientific applications.

Topics covered will include:

  • Introduction to GPU/Parallel computing
  • Programming model CUDA
  • GPU libraries such as cuBLAS and cuFFT
  • Tools for debugging and profiling
  • Performance optimizations

This course is a PATC course (PRACE Advanced Training Centres).

PATC training course "High-performance scientific computing in C++" @ JSC

begin
11.Jun.2018 09:00
end
13.Jun.2018 16:30
venue
JSC, Jülich

Modern C++, with its support for procedural, object-oriented, generic and functional programming styles, offers many powerful abstraction mechanisms to express complexity at a high level while remaining very efficient. It is therefore the language of choice for many scientific projects. However, achieving high performance on contemporary computer hardware, with its many levels of parallelism, requires understanding C++ code from a more performance-centric viewpoint.

In this course, the participants will learn how to write C++ programs that better utilize typical present-day HPC hardware resources. The course is geared towards scientists and engineers who are already familiar with C++14 and wish to develop maintainable and fast applications. They will learn to identify and avoid performance-degrading characteristics, such as unnecessary memory operations, branch mispredictions, and unintentionally strong ordering assumptions. Two powerful open-source libraries that help write structured parallel applications will also be introduced:

  • Intel (R) Threading Building Blocks
  • NVIDIA Thrust
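
The structured-parallelism pattern that libraries like TBB and Thrust provide (a parallel transform followed by a reduction, as in `transform_reduce`) can be sketched with Python's standard library for illustration. The function name and worker count below are assumptions of this sketch, not part of either library's API.

```python
# Structured parallel pattern: map a transform over the data in
# parallel, then fold the results with a combining operation.
# Conceptual sketch only; TBB/Thrust do this with work-stealing
# schedulers and GPU kernels respectively.
import operator
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def parallel_transform_reduce(data, transform, combine, workers=4):
    """Apply `transform` to every element in parallel, then fold with `combine`."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        mapped = list(pool.map(transform, data))
    return reduce(combine, mapped)

# Sum of squares of 1..100, the classic transform_reduce example.
result = parallel_transform_reduce(range(1, 101), lambda x: x * x, operator.add)
```

The value of the pattern is that the user supplies only the element-wise operation and the combiner; the library owns scheduling, partitioning and load balancing.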

This course is a PRACE Advanced Training Centres (PATC) course.

PATC Training course "Node-level performance engineering" @ HLRS

begin
14.Jun.2018 09:00
end
15.Jun.2018 17:00
venue
HLRS Stuttgart

This course teaches performance engineering approaches on the compute node level. "Performance engineering" as we define it is more than employing tools to identify hotspots and bottlenecks. It is about developing a thorough understanding of the interactions between software and hardware. This process must start at the core, socket, and node level, where the code gets executed that does the actual computational work. Once the architectural requirements of a code are understood and correlated with performance measurements, the potential benefit of optimizations can often be predicted. We introduce a "holistic" node-level performance engineering strategy, apply it to different algorithms from computational science, and also show how an awareness of the performance features of an application may lead to notable reductions in power consumption. This course provides scientific training in Computational Science and, in addition, scientific exchange among the participants.
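
The "predict before you optimize" idea in the paragraph above can be made concrete with a back-of-the-envelope roofline estimate for a vector update a[i] = b[i] + s*c[i]. All machine numbers below are illustrative assumptions, not measurements of any specific system.

```python
# Roofline-style estimate: attainable performance is capped by
# whichever resource saturates first, compute or memory bandwidth.
peak_flops = 1.0e12   # assumed peak: 1 TFlop/s per node (illustrative)
mem_bw     = 1.0e11   # assumed memory bandwidth: 100 GB/s (illustrative)

flops_per_iter = 2                  # one add, one multiply per element
bytes_per_iter = 3 * 8              # load b and c, store a (float64)
intensity = flops_per_iter / bytes_per_iter   # arithmetic intensity, flops/byte

predicted = min(peak_flops, intensity * mem_bw)
memory_bound = predicted < peak_flops
```

Here the intensity of 1/12 flop/byte puts the kernel deep in the memory-bound regime, so a measurement far below peak flops is expected rather than a sign of bad code, and the productive optimization target is data traffic, not arithmetic.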

This course is a PATC course (PRACE Advanced Training Centres).

PATC training course "High-performance computing with Python" @ JSC

begin
18.Jun.2018 09:00
end
19.Jun.2018 16:30
venue
JSC, Jülich

Python is increasingly used in high-performance computing projects. It can be used either as a high-level interface to existing HPC applications and libraries, as an embedded interpreter, or directly.

This course combines lectures and hands-on sessions. We will show how Python can be used on parallel architectures and how to optimize critical parts of the kernel using various tools.

The following topics will be covered:

  • Interactive parallel programming with IPython
  • Profiling and optimization
  • High-performance NumPy
  • Just-in-time compilation with numba
  • Distributed-memory parallel programming with Python and MPI
  • Bindings to other programming languages and HPC libraries
  • Interfaces to GPUs

This course is aimed at scientists who wish to explore the productivity gains made possible by Python for HPC.
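
One of the listed topics, high-performance NumPy, comes down to a simple principle: replace interpreted Python loops with vectorized array expressions so the work runs in compiled code. The function names below are this sketch's own, assumed for illustration.

```python
# Same reduction two ways: an interpreted loop versus a single
# compiled NumPy expression.
import numpy as np

def norm_loop(xs):
    """Pure-Python loop: one interpreter round-trip per element."""
    total = 0.0
    for x in xs:
        total += x * x
    return total ** 0.5

def norm_vectorized(xs):
    """Same computation pushed into NumPy's compiled dot product."""
    return float(np.sqrt(np.dot(xs, xs)))

data = np.arange(1.0, 1001.0)
```

The vectorized form is typically orders of magnitude faster on large arrays, which is why profiling (also on the course programme) usually points straight at the remaining pure-Python loops.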

This course is a PRACE Advanced Training Centres (PATC) course.

PATC training course "Concepts of GASPI and interoperability with other communication APIs" @ HLRS

begin
02.Jul.2018 09:00
end
03.Jul.2018 15:30
venue
HLRS Stuttgart

In this tutorial we present an asynchronous data flow programming model for Partitioned Global Address Spaces (PGAS) as an alternative to the programming model of MPI.
GASPI, which stands for Global Address Space Programming Interface, is a partitioned global address space (PGAS) API. The GASPI API is designed as a C/C++/Fortran library and is focused on three key objectives: scalability, flexibility and fault tolerance. In order to achieve its much-improved scaling behaviour, GASPI aims at asynchronous dataflow with remote completion rather than bulk-synchronous message exchanges. GASPI follows a single/multiple program multiple data (SPMD/MPMD) approach and offers a small yet powerful API (see also http://www.gaspi.de and http://www.gpi-site.com).
GASPI is successfully used in academic and industrial simulation applications.
Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of GASPI.
This course provides scientific training in Computational Science and, in addition, scientific exchange among the participants.
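
The key GASPI concept of asynchronous dataflow with remote completion can be emulated in a few lines: the writer puts data directly into the target's memory segment and posts a notification, and the target waits only on that notification, never on a matching receive. This is a conceptual sketch with threads, not the real GASPI (GPI-2) API; the comments name the GASPI calls each step stands in for.

```python
# One-sided put with remote completion, emulated with a thread and an
# Event. Conceptual only: real GASPI uses RDMA between nodes.
import threading

segment = bytearray(8)        # the target's exposed memory segment
notified = threading.Event()  # stands in for a GASPI notification slot

def writer():
    segment[0:5] = b"hello"   # one-sided write into "remote" memory
    notified.set()            # like gaspi_notify: signal remote completion

t = threading.Thread(target=writer)
t.start()
notified.wait()               # like gaspi_notify_waitsome on the target
t.join()
received = bytes(segment[0:5])
```

Because the target synchronizes on the notification alone, communication overlaps with computation instead of forcing a bulk-synchronous exchange.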

PATC training course "Introduction to Unified Parallel C (UPC) and Co-Array Fortran (CAF)" @ HLRS

begin
05.Jul.2018 08:30
end
06.Jul.2018 15:30
venue
HLRS Stuttgart

Partitioned Global Address Space (PGAS) is a model for parallel programming. Unified Parallel C (UPC) and Co-array Fortran (CAF) are PGAS extensions to C and Fortran; parallelism is part of the language. PGAS languages allow any processor to directly address memory/data on any other processor. Parallelism can thus be expressed more easily than with library-based approaches such as MPI. This course gives an introduction to this approach of expressing parallelism. Hands-on sessions (in UPC and/or CAF) will allow participants to immediately test and understand the basic constructs of PGAS languages. This course provides scientific training in Computational Science and, in addition, scientific exchange among the participants.
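
The essence of PGAS addressing is that a global array is partitioned across images (CAF) or threads (UPC), and any image can locate any element by computing which partition owns it. The block distribution and names below are this sketch's own assumptions, shown in Python for illustration only.

```python
# Toy block distribution: map a global index to (image, local index),
# the arithmetic a PGAS compiler performs behind every remote access.
NIMAGES = 4
N = 10  # global array length

def owner(global_index, n=N, p=NIMAGES):
    """Return (image, local index) for a block-distributed array."""
    block = -(-n // p)  # ceil(n / p) elements per image
    return global_index // block, global_index % block
```

In UPC or CAF this mapping is implicit in the syntax (e.g. a coarray reference `a[i]` on image i), which is why remote data access reads like ordinary array indexing.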

PATC Training course "Advanced Fortran Topics" @ LRZ

begin
17.Sep.2018 09:00
end
21.Sep.2018 18:00
venue
LRZ Garching

This course, partly a PRACE Advanced Training Center (PATC) course (to be confirmed), is targeted at scientists who wish to extend their knowledge of Fortran to cover advanced features of the language.

Topics covered include:

Days 1-3:

  • Best Practices

    • global objects and interfaces
    • abstract interfaces and the IMPORT statement
    • object based programming
  • Object-Oriented Programming

    • type extension, polymorphism and inheritance
    • binding of procedures to types and objects
    • generic type-bound procedures
    • abstract types and deferred bindings
  • IEEE features and floating point exceptions
  • Interoperability with C

    • mixed language programming patterns
  • Fortran 2003 I/O extensions

Days 4-5 (PATC course, support by PRACE still has to be approved):

  • OO Design Patterns: application of object-oriented programming

    • creation and destruction of objects
    • polymorphic objects and function arguments
    • interacting objects
    • dependency inversion: submodules and plugins
  • Coarrays

    • PGAS concepts and coarray basics
    • dynamic entities
    • advanced synchronization
    • parallel programming patterns
    • recent enhancements: collectives, events, teams, atomic subroutines
    • performance aspects of coarray programming

To consolidate the lecture material, each day's approximately 4 hours of lecture are complemented by 3 hours of hands-on sessions. The last 2 days of the course are a PATC event (tbc).
