Training course "Efficient Parallel Programming with GASPI" @ HLRS

Begin: 30.Jan.2014 10:45
End: 30.Jan.2014 16:00
Venue: HLRS Stuttgart

In this tutorial we present an asynchronous dataflow programming model for Partitioned Global Address Spaces (PGAS) as an alternative to the programming model of MPI.
GASPI, which stands for Global Address Space Programming Interface, is a partitioned global address space (PGAS) API. The GASPI API is designed as a C/C++/Fortran library and focuses on three key objectives: scalability, flexibility and fault tolerance. To achieve its much improved scaling behaviour, GASPI aims at asynchronous dataflow with remote completion rather than bulk-synchronous message exchanges. GASPI follows a single/multiple program multiple data (SPMD/MPMD) approach and offers a small, yet powerful API.
GASPI is successfully used in academic and industrial simulation applications.
Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of GASPI.
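
As a taste of this programming style, the following minimal C sketch (assuming the GPI-2 implementation of GASPI; segment size, queue and notification IDs are illustrative, and error checking is omitted) posts a one-sided write whose remote completion is signalled by a notification:

    #include <GASPI.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        gaspi_rank_t rank, num;

        gaspi_proc_init(GASPI_BLOCK);              /* collective start-up */
        gaspi_proc_rank(&rank);
        gaspi_proc_num(&num);

        /* one globally accessible memory segment per process */
        gaspi_segment_create(0, 1 << 20, GASPI_GROUP_ALL,
                             GASPI_BLOCK, GASPI_MEM_INITIALIZED);

        if (rank == 0 && num > 1) {
            /* asynchronous one-sided write plus notification to rank 1 */
            gaspi_write_notify(0, 0, 1, 0, 0, 1024, 0, 42, 0, GASPI_BLOCK);
            gaspi_wait(0, GASPI_BLOCK);            /* local completion only */
        } else if (rank == 1) {
            gaspi_notification_id_t id;
            gaspi_notification_t val;
            /* remote completion: the data has arrived once the
               notification becomes visible */
            gaspi_notify_waitsome(0, 0, 1, &id, GASPI_BLOCK);
            gaspi_notify_reset(0, id, &val);
        }

        gaspi_proc_term(GASPI_BLOCK);
        return EXIT_SUCCESS;
    }

Rank 1 never posts a receive; the notification alone tells it that the transferred data is usable, which is what lets communication overlap with computation.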

PATC training course "2nd JUQUEEN porting and tuning workshop" @ JSC

Begin: 03.Feb.2014 09:00
End: 05.Feb.2014 17:00
Venue: JSC, Jülich

The commissioning of the new Blue Gene/Q petaflop supercomputer JUQUEEN marks another quantum leap in supercomputer performance at JSC. At the same time, it is recognized that users must make special efforts to get the most out of this unique research tool.

The aim of this hands-on workshop is to support current users of JUQUEEN in porting their software, in analyzing its performance, and in improving its efficiency. It is highly recommended that project PIs send at least one expert on their code to this workshop.

This course is a PATC course (PRACE Advanced Training Centres).

Training course "Programming with Fortran" @ LRZ

Begin: 03.Feb.2014 09:00
End: 07.Feb.2014 18:00
Venue: LRZ Garching

This course is targeted at scientists with little or no knowledge of the Fortran programming language who need it to participate in projects using a Fortran code base, to develop their own codes, and to get acquainted with additional tools such as debuggers and syntax checkers as well as the handling of compilers and libraries. The language is for the most part treated at the level of the Fortran 95 standard; features from Fortran 2003 are limited to improvements at the elementary level. Advanced Fortran features like object-oriented programming or coarrays will be covered in a follow-on course in autumn.

To consolidate the lecture material, each day's approximately 4 hours of lecture are complemented by 3 hours of hands-on sessions.

Course participants should have basic UNIX/Linux knowledge (login with secure shell, shell commands, basic programming, vi or emacs editors).

Training course "Industrial Services of the National HPC Centre Stuttgart" @ HLRS

Begin: 19.Feb.2014 12:30
End: 19.Feb.2014 17:00
Venue: HLRS Stuttgart

To permanently assure their competitiveness, enterprises and institutions are increasingly required to deliver the highest performance. Powerful computers, among the best in the world, can reliably support them in doing so.

This course is targeted at decision makers in companies who would like to learn more about the advantages of using high performance computers in their field of business. They will be given extensive information about the properties and capabilities of the computers as well as access methods and security aspects. In addition, we present our comprehensive service offering, ranging from individual consulting and training courses to visualization. Finally, real-world examples will give an interesting insight into our current activities.

Training course "Parallel Programming (MPI, OpenMP, PETSc) and Tools" @ ZIH, Dresden

Begin: 24.Feb.2014 08:30
End: 27.Feb.2014 16:30
Venue: ZIH Dresden

The focus is on the programming models MPI, OpenMP, and PETSc. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of the Message Passing Interface (MPI) and the shared memory directives of OpenMP. The last day is dedicated to tools. This course is organized by ZIH in collaboration with HLRS. (Content Level: 70% for beginners, 30% advanced)
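
For a flavour of the MPI part of these hands-on sessions, here is a minimal C example (an illustrative sketch, not course material) that passes a message between two processes:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, token = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0 && size > 1) {
            token = 42;                 /* rank 0 sends ... */
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {         /* ... rank 1 receives */
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received token %d\n", token);
        }

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and started with mpirun -np 2, every process runs the same program and is distinguished only by its rank.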

Training course "Parallelization with MPI and OpenMP" @ ZDV, Mainz

Begin: 04.Mar.2014 09:00
End: 06.Mar.2014 16:30
Venue: ZDV, Mainz

The focus is on the programming models MPI and OpenMP. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of the Message Passing Interface (MPI) and the shared memory directives of OpenMP. This course is organized by the University of Mainz in collaboration with HLRS. (Content Level: 70% for beginners, 30% advanced)
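
As an illustration of the OpenMP side, a minimal C sketch (illustrative, not course material) that parallelizes a loop with a work-sharing directive and a reduction:

    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    static double a[N];

    int main(void)
    {
        double sum = 0.0;

        /* iterations are distributed over the team of threads;
           the reduction clause avoids a race on sum */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            a[i] = 0.5 * i;
            sum += a[i];
        }

        printf("max threads: %d, sum = %f\n", omp_get_max_threads(), sum);
        return 0;
    }

Built with the compiler's OpenMP flag (e.g. -fopenmp), the same code still compiles and runs serially when the directive is ignored.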

PATC training course "Fortran for Scientific Computing" @ HLRS

Begin: 10.Mar.2014 08:30
End: 14.Mar.2014 15:30
Venue: HLRS Stuttgart

This course is dedicated to scientists and students who want to learn (sequential) programming of scientific applications with Fortran. The course teaches the newest Fortran standards. Hands-on sessions will allow users to immediately test and understand the language constructs. This workshop provides scientific training in Computational Science and, in addition, fosters the scientific exchange of the participants among themselves.

This course is a PATC course (PRACE Advanced Training Centres).

Training course "Parallel Programming of High Performance Systems" @ RRZE, Erlangen

Begin: 10.Mar.2014 09:00
End: 14.Mar.2014 18:00
Venue: LRZ Garching

This course, a collaboration of Erlangen Regional Computing Centre (RRZE) and LRZ, is targeted at students and scientists with interest in programming modern HPC hardware, specifically the large scale parallel computing systems available in Jülich, Stuttgart and Munich.

Each day is comprised of approximately 4 hours of lectures and 3 hours of hands-on sessions.

Day 1

  • Introduction to High Performance Computing (Weinberg)
  • Secure shell (Brietzke)
  • Source code versioning with SVN (Guillen)
  • Handling console and GUI based interfaces (Weinberg)
  • Building programs with GNU MAKE (Guillen)

Day 2

  • Basic parallel programming models: elements of MPI and OpenMP (Weinberg)
  • Processor architectures (Hager)

Day 3

  • Principles of code optimization: unrolling, blocking, dependencies, C++ issues, bandwidth issues, performance projections (Hager) (a blocking sketch follows the day 5 outline)
  • Basics of software engineering (Guillen)
  • Advanced MPI programming (Wittmann)

Day 4

  • Advanced OpenMP programming (Weinberg)
  • Performance Libraries (Weinberg)
  • Parallel architectures: multi-core, multi-socket, ccNUMA, cache coherence and affinity, tools for handling memory affinity (Treibig)
  • Parallel algorithms: data parallelism, domain decomposition, task parallelism, master-worker, granularity, load balancing, scalability models (Treibig)

Day 5

  • Processor-specific optimization strategies: compiler switches, avoiding cache thrashing, exploiting SIMD capabilities (Treibig)
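
To give a flavour of the blocking technique from day 3 (a minimal sketch with an illustrative block size, not course material), compare a naive and a cache-blocked matrix transpose in C:

    #define N 2048
    #define B 64                        /* block edge, tuned to the cache */

    static double a[N][N], t[N][N];

    /* naive transpose: the strided accesses to t miss the cache */
    void transpose_naive(void)
    {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                t[j][i] = a[i][j];
    }

    /* blocked transpose: each B x B tile stays cache-resident,
       so both arrays are streamed with far fewer misses */
    void transpose_blocked(void)
    {
        for (int ii = 0; ii < N; ii += B)
            for (int jj = 0; jj < N; jj += B)
                for (int i = ii; i < ii + B; i++)
                    for (int j = jj; j < jj + B; j++)
                        t[j][i] = a[i][j];
    }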

PATC training course "Cray XE6/XC30 Optimization Workshop" @ HLRS

Begin: 17.Mar.2014 09:00
End: 20.Mar.2014 16:30
Venue: HLRS Stuttgart

HLRS has installed HERMIT, a Cray XE6 system with AMD Interlagos processors and a performance of 1 PFlop/s. Currently, the system is being extended with a Cray XC30 system. We invite current and future users to participate in this special course on porting applications to our Cray architectures. HERMIT provides our users with a new level of performance; harvesting this potential will require all our efforts. We are looking forward to working with our users on these opportunities.
During the first three days, specialists from Cray will support you in porting and optimizing your application on our Cray XE6/XC30.
On the fourth day, Georg Hager and Jan Treibig from RRZE will present detailed information on optimizing codes for the multicore AMD Interlagos and Intel Sandy Bridge processors.

Training course "Introduction to Python" @ JSC

Begin: 17.Mar.2014 09:00
End: 19.Mar.2014 16:30
Venue: JSC, Jülich

This course gives an introduction to the programming language Python. Topics are: data types, control structures, object-oriented programming, and module usage. Additionally, Python's standard library and GUI programming with wxWidgets will be explained.

Training course "Iterative Linear Solvers and Parallelization" @ HLRS

Begin: 24.Mar.2014 08:30
End: 28.Mar.2014 15:30
Venue: HLRS Stuttgart

The focus is on iterative and parallel solvers, the parallel programming models MPI and OpenMP, and the parallel middleware PETSc. Different modern Krylov subspace methods (CG, GMRES, BiCGSTAB, ...) as well as highly efficient preconditioning techniques are presented in the context of real-life applications. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of iterative solvers, the Message Passing Interface (MPI) and the shared memory directives of OpenMP. This course is organized by HLRS, IAG, the University of Kassel, and SFB/TRR30.
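
To make the flavour of these methods concrete, here is a compact, unpreconditioned conjugate gradient (CG) iteration in C for a small SPD system (an illustrative sketch; the course treats the preconditioned and parallel variants):

    #include <stdio.h>
    #include <math.h>

    #define N 4

    /* y = A*x for the small SPD example matrix */
    static void matvec(const double A[N][N], const double *x, double *y)
    {
        for (int i = 0; i < N; i++) {
            y[i] = 0.0;
            for (int j = 0; j < N; j++)
                y[i] += A[i][j] * x[j];
        }
    }

    static double dot(const double *u, const double *v)
    {
        double s = 0.0;
        for (int i = 0; i < N; i++)
            s += u[i] * v[i];
        return s;
    }

    int main(void)
    {
        /* symmetric positive definite test matrix and right-hand side */
        double A[N][N] = { {4, 1, 0, 0}, {1, 4, 1, 0},
                           {0, 1, 4, 1}, {0, 0, 1, 4} };
        double b[N] = {1, 2, 3, 4};
        double x[N] = {0}, r[N], p[N], Ap[N];

        matvec(A, x, Ap);
        for (int i = 0; i < N; i++) p[i] = r[i] = b[i] - Ap[i];
        double rr = dot(r, r);

        for (int k = 0; k < N && sqrt(rr) > 1e-12; k++) {
            matvec(A, p, Ap);
            double alpha = rr / dot(p, Ap);      /* optimal step length */
            for (int i = 0; i < N; i++) x[i] += alpha * p[i];
            for (int i = 0; i < N; i++) r[i] -= alpha * Ap[i];
            double rr_new = dot(r, r);
            double beta = rr_new / rr;           /* A-conjugate direction */
            for (int i = 0; i < N; i++) p[i] = r[i] + beta * p[i];
            rr = rr_new;
        }

        for (int i = 0; i < N; i++)
            printf("x[%d] = %f\n", i, x[i]);
        return 0;
    }

In exact arithmetic CG converges in at most N steps; a good preconditioner, one of the course topics, is what makes the iteration count independent of problem size in practice.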

Training course "Eclipse: C/C++/Fortran programming" @ LRZ

Begin: 25.Mar.2014 09:00
End: 25.Mar.2014 15:30
Venue: LRZ Garching

This course is targeted at scientists who wish to be introduced to programming C/C++/Fortran with the Eclipse C/C++ Development Tools (CDT), or the Photran Plugin. Topics covered include:

  • Introduction to Eclipse IDE
  • Introduction to CDT
  • Hands-on with CDT
  • Short introduction and demo of Photran

PATC training course "Advanced Topics in High Performance Computing" @ LRZ

Begin: 31.Mar.2014 09:00
End: 03.Apr.2014 18:00
Venue: LRZ Garching

In this add-on course to the parallel programming course, special topics are treated in more depth, in particular performance analysis, I/O and PGAS concepts. It is provided in collaboration between the Erlangen Regional Computing Centre (RRZE) and LRZ within KONWIHR.

Each day is comprised of approximately 5 hours of lectures and 2 hours of hands-on sessions.

Day 1

  • Intel tools: MPI tracing and Checking (Weinberg)
  • Intel tools: OpenMP performance and correctness (Weinberg)

Day 2

  • Parallel I/O with MPI IO (Wittmann)
  • Performance analysis with Scalasca (Navarrete)

Day 3

  • Tuning I/O on LRZ's HPC systems (Hammer)
  • Portability of I/O: binary files, NetCDF, HDF5 (Hammer)

Day 4

  • PGAS programming with coarray Fortran and Unified Parallel C (Bader)
  • PGAS hands-on session

Prerequisites: Good MPI and OpenMP knowledge as presented in the course "Parallel programming of High Performance Systems".

Training course "Introduction to Computational Fluid Dynamics" @ HLRS

Begin: 31.Mar.2014 10:00
End: 04.Apr.2014 15:00
Venue: HLRS Stuttgart

The course deals with current numerical methods for Computational Fluid Dynamics. The emphasis is placed on explicit finite volume methods for the compressible Euler equations. Moreover, outlooks on implicit methods, the extension to the Navier-Stokes equations, and turbulence modelling are given. Additional topics are classical numerical methods for the solution of the incompressible Navier-Stokes equations, aeroacoustics, and high-order numerical methods for the solution of systems of partial differential equations. The last day is dedicated to the parallelization of explicit and implicit solvers.
Hands-on sessions will consolidate the contents of the lectures. The emphasis of these sessions is on the application of CFD codes, especially on grid generation, visualization and the interpretation of results. Furthermore, the implementation of algorithms presented in the lectures illustrates the general structure of CFD codes.
The course is organized by HLRS, the IAG and the University of Kassel. It is based on the course "Numerical Gasdynamics" held at the IAG, which was awarded the "Landeslehrpreis (prize for excellence in teaching) Baden-Württemberg 2003" (held at Uni. Stuttgart, under the auspices of the BMBF project NUSS, contract 08NM227).

PATC training course "GPU programming" @ JSC

Begin: 07.Apr.2014 09:00
End: 09.Apr.2014 16:30
Venue: JSC, Jülich

Many-core programming is a very dynamic research area. Many scientific applications have been ported to GPU architectures in recent years. We will give an introduction to CUDA, OpenACC, OpenCL, and multi-GPU programming using examples of increasing complexity. After introducing the basics the focus will be on optimization and tuning of scientific applications. Topics covered will include:

  • Programming models: CUDA, OpenACC, OpenCL
  • Using libraries as an interface for GPU programming (e.g. Thrust)
  • Partitioning and granularity of parallel applications
  • Debugging and profiling of CUDA kernels
  • Performance optimizations
  • Multi-GPU programming

This course is a PATC course (PRACE Advanced Training Centres).

Prerequisites: Knowledge of C

Training course "GPU Programming using CUDA" @ HLRS

Begin: 07.Apr.2014 12:30
End: 09.Apr.2014 16:00
Venue: HLRS Stuttgart

The course provides an introduction to the programming language CUDA, which is used to write fast numeric algorithms for NVIDIA graphics processors (GPUs). The focus is on basic usage of the language, exploitation of the most important features of the device (massively parallel computation, shared memory, texture memory), and efficient use of the hardware to maximize performance. An overview of the available development tools and the advanced features of the language is given.
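
A minimal CUDA C sketch (illustrative, not course material) showing the basic pattern the course starts from, one thread per array element:

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    /* kernel: each thread scales exactly one element */
    __global__ void scale(float *x, float s, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            x[i] *= s;
    }

    int main(void)
    {
        const int n = 1 << 20;
        float *h = (float *)malloc(n * sizeof(float));
        float *d;

        for (int i = 0; i < n; i++)
            h[i] = 1.0f;

        cudaMalloc(&d, n * sizeof(float));
        cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);

        /* enough 256-thread blocks to cover all n elements */
        scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);

        cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
        printf("h[0] = %f\n", h[0]);

        cudaFree(d);
        free(h);
        return 0;
    }

Shared and texture memory, covered in the course, come into play once the threads of a block need to reuse or cooperatively stage data.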

PATC training course "OpenACC Programming for Parallel Accelerated Supercomputers - an alternative to CUDA from Cray perspective" @ HLRS

Begin: 10.Apr.2014 09:00
End: 11.Apr.2014 16:30
Venue: HLRS Stuttgart

This workshop will cover the programming environment of the Cray XK7 hybrid supercomputer, which combines multicore CPUs with GPU accelerators (http://www.cray.com/Products/Computing/XK7.aspx). Attendees will learn about the directive-based OpenACC programming model (http://www.openacc-standard.org), whose multi-vendor support allows users to portably develop applications for parallel accelerated supercomputers.
The workshop will also demonstrate how to use the Cray Programming Environment tools to identify CPU application bottlenecks, facilitate OpenACC porting, provide accelerated performance feedback, and tune the ported applications. The Cray scientific libraries for accelerators will be presented, and the interoperability of OpenACC directives with these and with CUDA will be demonstrated. Through application case studies and tutorials, users will gain direct experience of using OpenACC directives in realistic applications.
Users may also bring their own codes to discuss with Cray specialists or begin porting.
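
A minimal OpenACC sketch in C (illustrative, not workshop material) of the directive-based style: the compiler derives the accelerator kernel and the data transfers from the pragma:

    #include <stdio.h>

    #define N 1000000

    static float x[N], y[N];

    int main(void)
    {
        const float a = 2.0f;

        for (int i = 0; i < N; i++) {
            x[i] = 1.0f;
            y[i] = 2.0f;
        }

        /* offload the loop; copyin/copy clauses state the data movement */
        #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
        for (int i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);
        return 0;
    }

Without an OpenACC compiler the pragma is simply ignored and the loop runs on the CPU, which is the portability argument made above.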

PATC training course "Unified Parallel C (UPC) and Co-Array Fortran (CAF)" @ HLRS

Begin: 14.Apr.2014 08:30
End: 15.Apr.2014 15:30
Venue: HLRS Stuttgart

Partitioned Global Address Space (PGAS) is a new model for parallel programming. Unified Parallel C (UPC) and Co-array Fortran (CAF) are PGAS extensions to C and Fortran: parallelism is part of the language. PGAS languages allow any processor to directly address memory/data on any other processor, so parallelism can be expressed more easily than with library-based approaches such as MPI. This course gives an introduction to this novel approach of expressing parallelism. Hands-on sessions (in UPC and/or CAF) will allow users to immediately test and understand the basic constructs of PGAS languages.
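
For a first impression, a minimal UPC sketch (illustrative, not course material): a shared array is distributed across all threads, and any thread may read remote elements with a plain array access:

    /* compile with a UPC compiler, e.g. Berkeley UPC or Cray UPC */
    #include <upc.h>
    #include <stdio.h>

    #define N 1024

    shared double x[N];     /* distributed cyclically over the threads */

    int main(void)
    {
        /* each thread initializes the elements it has affinity to */
        upc_forall (int i = 0; i < N; i++; &x[i])
            x[i] = (double)i;

        upc_barrier;

        /* an ordinary-looking access may touch remote memory */
        if (MYTHREAD == 0)
            printf("x[N-1] = %f (seen by thread 0 of %d)\n",
                   x[N - 1], THREADS);
        return 0;
    }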

Training course "Scientific Visualization" @ HLRS

Begin: 16.Apr.2014 09:00
End: 17.Apr.2014 15:30
Venue: HLRS Stuttgart

This two-day course is targeted at researchers with basic knowledge in numerical simulation who would like to learn how to visualize their simulation results on the desktop but also in Augmented Reality and Virtual Environments. It will start with a short overview of scientific visualization, followed by a hands-on introduction to 3D desktop visualization with COVISE. On the second day, we will discuss how to build interactive 3D models for Virtual Environments and how to set up an Augmented Reality visualization.

PATC training course "Intel MIC&GPU Programming Workshop" @ LRZ

Begin: 28.Apr.2014 09:00
End: 30.Apr.2014 18:00
Venue: LRZ Garching

With the rapidly growing demand for computing power, new accelerator-based architectures have entered the world of high performance computing over the past five years. GPGPUs in particular have recently become very popular; however, programming GPGPUs using programming languages like CUDA or OpenCL is cumbersome and error-prone. Beyond introducing the basics of GPGPU programming, we mainly present OpenACC as an easier way to program GPUs using OpenMP-like pragmas. More recently, Intel developed its own Many Integrated Core (MIC) architecture, which can be programmed using standard parallel programming techniques like OpenMP and MPI. At the beginning of 2013, the first production-level cards, named Intel Xeon Phi, came on the market. The course discusses various programming techniques for the Intel Xeon Phi and includes hands-on sessions for both MIC and GPU programming. The course is developed in collaboration with the Erlangen Regional Computing Centre (RRZE) within KONWIHR.

Each day is comprised of approximately 5 hours of lectures and 2 hours of hands-on sessions.

Training course "Advanced GPU programming" @ JSC

Begin: 05.May.2014 09:00
End: 06.May.2014 16:30
Venue: JSC, Jülich

Today's computers are commonly equipped with multicore processors and graphics processing units. To make efficient use of these massively parallel compute resources, advanced knowledge of architecture and programming models is indispensable. This course builds on the introduction to GPU programming. It focuses on finding and eliminating bottlenecks using profiling and advanced programming techniques, optimal usage of CPUs and GPUs on a single node, and multi-GPU programming across multiple nodes.
The material will be presented in the form of short lectures followed by in-depth hands-on sessions.

Training course "Introduction to OpenFOAM" @ LRZ

Begin: 12.May.2014 09:00
End: 14.May.2014 17:00
Venue: LRZ Garching

This three-day introductory course into OpenFOAM® is intended for new users who want to learn the basic concepts of its usage and want to know how to modify existing applications or add new functionalities.
Among others the course covers the topics:

  • Introduction to working with Linux
  • OpenFOAM® file structure and case setup
  • Setting up and running simulations
  • Evaluating and visualizing results with ParaView
  • Selection of numerical methods
  • Creating and converting meshes
  • Computing in parallel and acceleration on GPUs
  • Implementation of new methods
  • Advanced topics in OpenFOAM®

The course is held from 9:00 to 17:00 each day and comprises approximately 4 hours of lectures and 3 hours of hands-on sessions; the course language is English.

Training course "Introduction to the programming and usage of the supercomputing resources at Jülich" @ JSC

Begin: 19.May.2014 13:00
End: 20.May.2014 16:30
Venue: JSC, Jülich

Through the John von Neumann Institute for Computing, Research Centre Jülich provides two major high-performance computing resources to scientific user groups from throughout Germany and Europe. The aim of this course is to give new users of the supercomputing resources an introductory overview of the systems and their usage, and to help them in making efficient use of their allocated resources.

Training course "Efficient Parallel Programming with GASPI" @ JSC

Begin: 19.May.2014 13:30
End: 19.May.2014 17:00
Venue: JSC, Jülich

In this tutorial we present an asynchronous dataflow programming model for Partitioned Global Address Spaces (PGAS) as an alternative to the programming model of MPI. GASPI, which stands for Global Address Space Programming Interface, is a partitioned global address space (PGAS) API. The GASPI API is designed as a C/C++/Fortran library and focuses on three key objectives: scalability, flexibility and fault tolerance. To achieve its much improved scaling behaviour, GASPI aims at asynchronous dataflow with remote completion rather than bulk-synchronous message exchanges. GASPI follows a single/multiple program multiple data (SPMD/MPMD) approach and offers a small, yet powerful API (see also http://www.gaspi.de and http://www.gpi-site.com). GASPI is successfully used in academic and industrial simulation applications. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of GASPI. This course provides scientific training in Computational Science and, in addition, fosters the scientific exchange of the participants among themselves.

PATC training course "Parallel I/O and Portable Data Formats" @ JSC

Begin: 21.May.2014 09:00
End: 23.May.2014 16:30
Venue: JSC, Jülich

Numerical simulations conducted on current high-performance computing (HPC) systems face an ever-growing need for scalability. Larger HPC platforms provide opportunities to push the limits on the size and properties of what can be accurately simulated. This entails processing larger data sets, whether reading input data or writing results. Serial approaches to handling I/O in a parallel application will dominate performance on massively parallel systems, leaving many computing resources idle during those serial application phases.

In addition to the need for parallel I/O, input and output data are often processed on different platforms. The heterogeneity of platforms can impose a high level of maintenance when different data representations are needed. Portable, self-describing data formats such as HDF5 and netCDF are already widely used within certain communities.

This course will introduce parallel I/O (MPI I/O) as well as the HDF5, netCDF and SIONlib library interfaces. Participants should have experience in parallel programming with MPI and in either C/C++ or Fortran.
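
As a minimal illustration of the MPI I/O part (a sketch, not course material): all processes write collectively into one shared file, each at its own offset:

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, value;
        MPI_File fh;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        value = rank;

        /* every process opens the same file collectively ... */
        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);

        /* ... and writes its value at a disjoint, rank-dependent offset */
        MPI_File_write_at_all(fh, (MPI_Offset)rank * sizeof(int),
                              &value, 1, MPI_INT, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }

One shared file instead of one file per process keeps the metadata load manageable at scale, and the collective call lets the MPI library aggregate the many small writes.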

This course is a PATC course (PRACE Advanced Training Centres).

Training course "Programming in C++ for C programmers" @ JSC

Begin: 02.Jun.2014 09:00
End: 12.Jun.2014 16:30
Venue: JSC, Jülich

C++ is a multi-paradigm programming language supporting procedural, object-oriented, generic and functional programming styles. In this course, the current standard of the language, C++11, will be introduced to participants familiar with C. Minor changes expected in the revision C++14 will also be introduced.

The course will run in two parts: 2-4 June and 10-12 June 2014. The first half will introduce the C++ (C++11) syntax. Through a number of simple but instructive exercises, the participants will learn the C++ syntax and familiarise themselves with elements of object oriented, generic and functional programming. The Standard Template Library for C++11 will be introduced in sufficient detail to be useful.

The second half will be about graphics, Boost libraries and multicore performance. Brief introductions will be given to:

  • Graphical user interfaces using Qt5
  • Boost C++ libraries
  • Intel(R) Threading Building Blocks

This course is designed for participants with previous programming experience, and introduces the current standard of C++. It cannot serve as a beginners' introduction to programming.

Training course "Large Scale debugging with Allinea DDT" @ LRZ

Begin: 02.Jun.2014 09:00
End: 02.Jun.2014 16:00
Venue: LRZ Garching

This workshop is targeted at SuperMUC users and HPC code developers who need to do large-scale debugging on the system. The talks will provide guidelines and best practices for doing so, and a hands-on section will let participants either debug example codes or try out the learned debugging strategies on their own codes.

Users must have an existing account on the SuperMUC system.
Good knowledge of compilers and HPC languages and parallelization concepts (MPI, OpenMP) is required.

Training course "High-performance computing with Python" @ JSC

Begin: 26.Jun.2014 09:00
End: 27.Jun.2014 16:30
Venue: JSC, Jülich

Python is being increasingly used in high-performance computing projects such as GPAW. It can be used as a high-level interface to existing HPC applications, as an embedded interpreter, or directly.

This course combines lectures and hands-on sessions. We will show how Python can be used on parallel architectures and how performance-critical parts of the code can be optimized using various tools.

Day 1: Using Python productively for parallel computing

  • Interactive parallel programming with IPython
  • pandas
  • High-performance NumPy and SciPy
  • ‘Scalable Python’ on JUQUEEN
  • mpi4py

Day 2: Python in concert with other programming languages and accelerators

  • Cython
  • f2py
  • PyCUDA
  • PyOpenCL
  • Numba
  • Pythran

This course is aimed at scientists who wish to explore the productivity gains made possible by Python for HPC.

Training course "Introduction to SuperMUC - the new Petaflop Supercomputer at LRZ" @ LRZ

Begin: 08.Jul.2014 10:00
End: 11.Jul.2014 17:00
Venue: LRZ Garching

This four-day workshop gives an introduction to the usage of the new petaflop-class supercomputer at LRZ, SuperMUC. The first three days are dedicated to presentations by Intel on their software development stack (compilers, tools and libraries); the remaining day will comprise talks and exercises delivered by IBM and LRZ on the IBM-specific aspects of the new system (IBM MPI, LoadLeveler, HPC Toolkit) and recommendations on tuning and optimizing for the new system.

PATC Training course "Node-level performance engineering" @ HLRS

Begin: 14.Jul.2014 09:00
End: 15.Jul.2014 17:00
Venue: HLRS Stuttgart

This course teaches performance engineering approaches on the compute node level. "Performance engineering" as we define it is more than employing tools to identify hotspots and bottlenecks. It is about developing a thorough understanding of the interactions between software and hardware. This process must start at the core, socket, and node level, where the code gets executed that does the actual computational work. Once the architectural requirements of a code are understood and correlated with performance measurements, the potential benefit of optimizations can often be predicted. We introduce a "holistic" node-level performance engineering strategy, apply it to different algorithms from computational science, and also show how an awareness of the performance features of an application may lead to notable reductions in power consumption.
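
As a small example of the kind of reasoning taught here (an illustrative sketch; the numbers are approximate), consider the Schönauer vector triad in C:

    #include <stdio.h>

    #define N 2000000

    static double a[N], b[N], c[N], d[N];

    int main(void)
    {
        for (long i = 0; i < N; i++) {
            b[i] = 1.0; c[i] = 2.0; d[i] = 0.5;
        }

        /* 2 flops per iteration versus 3 loads + 1 store (32 bytes,
           more with write-allocate): a code balance of roughly
           16 bytes/flop, far above what any current CPU's memory can
           feed, so this loop is memory-bound and optimization effort
           must target data traffic rather than arithmetic */
        for (long i = 0; i < N; i++)
            a[i] = b[i] + c[i] * d[i];

        printf("a[0] = %f\n", a[0]);
        return 0;
    }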

Training course "User-Guided Optimization in High-Level Languages" @ HLRS

Begin: 16.Jul.2014 09:30
End: 16.Jul.2014 17:00
Venue: HLRS Stuttgart

When writing code in high-level languages, HPC programmers often have a notion of which optimization strategies should be applied to their code. Nevertheless, today's compilers generally rely on heuristics to decide on optimization strategies. To narrow this gap, we will present new ideas and tools that give programmers more control over compiler optimizations:

1) Noise - trigger optimizations by annotations
2) Sierra - SIMD computations with compound data types
3) AnyDSL - create DSLs with associated optimization strategies

All of these tools will also be applied in hands-on sessions. In addition, an overview of available optimization strategies will be given by compiler experts.

Noise and Sierra are limited to C/C++, even though the underlying concepts ought to be applicable also to other imperative languages, particularly Fortran. AnyDSL uses a dialect of the Rust programming language.

Training course "Industrial Services of the National HPC Centre Stuttgart" @ HLRS

Begin: 16.Jul.2014 12:30
End: 16.Jul.2014 16:00
Venue: HLRS Stuttgart

To permanently assure their competitiveness, enterprises and institutions are increasingly required to deliver the highest performance. Powerful computers, among the best in the world, can reliably support them in doing so.

This course is targeted at decision makers in companies who would like to learn more about the advantages of using high performance computers in their field of business. They will be given extensive information about the properties and capabilities of the computers as well as access methods and security aspects. In addition, we present our comprehensive service offering, ranging from individual consulting and training courses to visualization. Finally, real-world examples will give an interesting insight into our current activities.

Training course "Introduction to parallel programming with MPI and OpenMP for JSC guest students" @ JSC

Begin: 05.Aug.2014 09:00
End: 08.Aug.2014 16:30
Venue: JSC, Jülich

An introduction to the parallel programming of supercomputers is given. The focus is on the usage of the Message Passing Interface (MPI), the most widely used programming model for systems with distributed memory. Furthermore, OpenMP will be presented, which is often used on shared-memory architectures.

Knowledge of Fortran, C or C++ is a prerequisite. The course is intended for guest students at JSC. Additional participants can take part in the course after consulting Florian Janetzko at JSC.

PATC Training course "Advanced Fortran Topics" @ LRZ

Begin: 08.Sep.2014 08:30
End: 12.Sep.2014 18:00
Venue: LRZ Garching

This course is targeted at scientists who wish to extend their knowledge of Fortran beyond what is provided in the Fortran 95 standard. Some other tools relevant for software engineering are also discussed. Topics covered include

  • object oriented features
  • design patterns
  • generation and handling of shared libraries
  • mixed language programming
  • standardized IEEE arithmetic and exceptions
  • I/O extensions from Fortran 2003
  • parallel programming with coarrays
  • source code versioning system (subversion)

To consolidate the lecture material, each day's approximately 4 hours of lecture are complemented by 3 hours of hands-on sessions.

Training course "Iterative linear solvers and parallelization" @ LRZ

Begin: 15.Sep.2014 08:30
End: 19.Sep.2014 15:30
Venue: LRZ Garching

The focus of this compact course is on iterative and parallel solvers, the parallel programming models MPI and OpenMP, and the parallel middleware PETSc.

Different modern Krylov Subspace Methods (CG, GMRES, BiCGSTAB ...) as well as highly efficient preconditioning techniques are presented in the context of real life applications.

Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the

  • basic constructs of iterative solvers
  • the Message Passing Interface (MPI)
  • the shared memory directives of OpenMP.

This course is organized by the University of Kassel, the High Performance Computing Center Stuttgart (HLRS) and IAG.

CECAM tutorial: Atomistic Monte Carlo Simulations of Bio-molecular Systems @ JSC

Begin: 15.Sep.2014 12:00
End: 19.Sep.2014 13:30
Venue: Jülich Supercomputing Centre

Cellular function arises from the dynamics of biomolecules. While fast dynamics can be treated on the quantum or molecular mechanics level using molecular dynamics, many biological processes are too slow to be amenable to simulation by molecular dynamics, which is currently limited to the microsecond time scale (10⁻⁶ – 10⁻⁵ s). This is often referred to as the time-scale problem of molecular dynamics.

Atomistic Markov Chain Monte Carlo (MCMC) is an interesting and complementary approach to studying long time scale phenomena like protein folding and peptide aggregation.

The main objectives of this tutorial are to provide researchers with a solid background knowledge of the principal characteristics and capabilities of atomistic MCMC simulations and to introduce them to practical MCMC simulation using the software package ProFASi developed by the lecturers of this tutorial.

ProFASi is open-source software for MCMC simulation of biomolecules and provides a versatile toolkit for modern Monte Carlo methods such as the replica exchange or multi-canonical methods. ProFASi is fast enough to fold some small helical proteins within a minute, which makes it a good tool for this tutorial. It has been successfully applied to study long time scale processes like protein folding, peptide aggregation and the dynamics of intrinsically unstructured proteins. A recent highlight has been simulating the folding of the 92 amino acid protein Top7, a process operating at a time scale of 1 second.

The three practical afternoon sessions will introduce the ProFASi package at sufficient depth for productive use. Using HPC clusters of the Jülich Supercomputing Centre, the participants will perform increasingly sophisticated simulations and learn how to apply MCMC simulation to various areas of biomolecular research.

The tutorial will highlight commonalities and differences between atomistic MCMC simulations and other simulation techniques, and discuss both their advantages and limitations, allowing attendees to judge where MCMC simulations may be helpful in their research and where not.

This CECAM Tutorial is organized by Sandipan Mohanty, Jan Meinke and Olav Zimmermann (JSC) and will take place at Forschungszentrum Jülich, Jülich Supercomputing Centre.

PATC training course "New XC30, Parallel I/O, and Optimization (3 Courses)" @ HLRS

Begin: 23.Sep.2014 09:00
End: 26.Sep.2014 17:30
Venue: HLRS Stuttgart

During August 2014, HLRS and Cray will install “HORNET”, the follow-up to HERMIT. HORNET is a Cray XC30 with Intel “Haswell” processors and a peak performance of 3.7 PFlop/s.

The first day is targeted at HERMIT users who will continue their work on HORNET. Specialists from Cray will talk about the hardware and software enhancements from the XE6 to the XC30 in order to support your migration to this new machine.

The second day is dedicated to parallel I/O at scale.

The third and fourth days comprise a two-day introductory workshop about porting and optimizing your application for the Cray XC30. The topics of this workshop include the Cray Programming Environment, scientific libraries and profiling tools.

PATC training course "Parallel Programming Workshop: Distributed and Shared Memory Parallelization with MPI and OpenMP" @ HLRS

Begin: 13.Oct.2014 08:30
End: 17.Oct.2014 16:30
Venue: HLRS Stuttgart

Distributed memory parallelization with the Message Passing Interface MPI (Mon+Tue, for beginners):
On clusters and distributed memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominating programming model. The course gives a full introduction to MPI-1. Further aspects are domain decomposition, load balancing, and debugging. An overview of MPI-2 and its one-sided communication is also taught. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of the Message Passing Interface (MPI).

Shared memory parallelization with OpenMP (Wed, for beginners):
The focus is on shared memory parallelization with OpenMP, the key concept for hyper-threading, dual-core, multi-core, shared memory, and ccNUMA platforms. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the directives and other interfaces of OpenMP. Race-condition debugging tools are also presented.

Advanced topics in parallel programming (Thu+Fri):
Topics are MPI-2 parallel file I/O, hybrid MPI+OpenMP parallelization, MPI-3.0, parallelization of explicit and implicit solvers and of particle-based applications, parallel numerics and libraries, and parallelization with PETSc. MPI-3.0 introduced a new shared memory programming interface, which can be combined with MPI message passing and remote memory access on the cluster interconnect. It can be used for direct neighbour accesses similar to OpenMP or for direct halo copies, and it enables new hybrid programming models. The session on hybrid MPI+OpenMP parallelization compares these models with various hybrid MPI+OpenMP approaches and with pure MPI.

Hands-on sessions are included on all days.
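
A minimal C sketch of the MPI-3.0 shared memory interface mentioned above (illustrative, not course material): processes on one node allocate a shared window and read each other's data with plain loads:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        MPI_Comm shm;
        MPI_Win win;
        MPI_Aint size;
        int rank, disp, *base;

        MPI_Init(&argc, &argv);

        /* communicator containing only processes on the same node */
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &shm);
        MPI_Comm_rank(shm, &rank);

        /* one int per process in a node-wide shared window */
        MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                                shm, &base, &win);

        MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
        base[0] = rank;                 /* store into the own portion */
        MPI_Win_sync(win);
        MPI_Barrier(shm);
        MPI_Win_sync(win);

        if (rank > 0) {
            int *left;
            /* query the neighbour's portion and load it directly,
               much like an OpenMP shared variable */
            MPI_Win_shared_query(win, rank - 1, &size, &disp, &left);
            printf("rank %d sees neighbour value %d\n", rank, left[0]);
        }
        MPI_Win_unlock_all(win);

        MPI_Win_free(&win);
        MPI_Comm_free(&shm);
        MPI_Finalize();
        return 0;
    }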

Training course "Introduction to parallel computing" @ JSC

Begin: 15.Oct.2014 13:00
End: 15.Oct.2014 17:00
Venue: JSC, Jülich

This course will present the most fundamental concepts, methods, and technologies of high-performance computing systems and the parallel programming associated with them. After introducing the basic terminology and vocabulary, it gives an overview of parallel computer architectures, covering shared memory, distributed memory, and hybrid computer systems, including the latest trends such as many-core CPUs and hardware acceleration via GPUs. Next, the basics of parallel programming are explained, including an introduction to message passing and multi-threaded programming with the industry standards MPI and OpenMP. Finally, the course describes the very basics of debugging, performance analysis, and optimization of parallel programs. The presentation closes with a summary of issues and open research topics of high-performance computing for the next decade, namely heterogeneity, reliability, power consumption, and extreme concurrency.

This tutorial addresses managers, system administrators, and application programmers who are new to the field of high-performance computing (HPC) and interested in an introduction to HPC systems and parallel programming. Basic knowledge of a sequential programming language like C, C++ or Fortran is helpful to better understand the examples presented in the second half of the talk ("Introduction to parallel programming").

Training course "Scientific Visualization" @ HLRS

Begin: 20.Oct.2014 09:00
End: 21.Oct.2014 15:30
Venue: HLRS Stuttgart

This two-day course is targeted at researchers with basic knowledge in numerical simulation who would like to learn how to visualize their simulation results on the desktop but also in Augmented Reality and Virtual Environments. It will start with a short overview of scientific visualization, followed by a hands-on introduction to 3D desktop visualization with COVISE. On the second day, we will discuss how to build interactive 3D models for Virtual Environments and how to set up an Augmented Reality visualization.

PATC training course "Introduction to PGAS for HPC" @ LRZ

Begin: 21.Oct.2014 09:00
End: 22.Oct.2014 16:00
Venue: LRZ Garching

In this tutorial we present an asynchronous dataflow programming model for Partitioned Global Address Spaces (PGAS) as an alternative to the programming model of MPI.

GASPI: Global Address Space Programming Interface

GASPI, which stands for Global Address Space Programming Interface, is a partitioned global address space (PGAS) API. The GASPI API is designed as a C/C++/Fortran library and focuses on three key objectives: scalability, flexibility and fault tolerance. To achieve its much improved scaling behavior, GASPI aims at asynchronous dataflow with remote completion rather than bulk-synchronous message exchanges. GASPI follows a single/multiple program multiple data (SPMD/MPMD) approach and offers a small, yet powerful API (see also http://www.gaspi.de and http://www.gpi-site.com).

GASPI is successfully used in academic and industrial simulation applications. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of GASPI. This course provides scientific training in Computational Science and, in addition, fosters the scientific exchange of the participants among themselves.

Coarray Fortran and UPC

Both coarrays (defined in the Fortran standard) and UPC (provided as a language extension) provide parallel language features that are based on the concept of the Partitioned Global Address Space. The main difference from GASPI is the tighter integration with the regular language semantics, especially the type system, which results in improved ease of use for some programming tasks. This course provides an introduction to both language extensions and includes a hands-on session during which participants can explore the concepts.

GPI-Space: High-Performance Computing Technology for data intense parallel applications

The modern HPC programmer faces the complexity of massively parallel computers and a growing diversity of hardware architectures, and ends up programming increasingly complex communication routines to orchestrate the data flow and workload of each application, starting from scratch for every new application.

GPI-Space is a tool developed by Fraunhofer ITWM to separate the world of domain-specific knowledge from the world of computer science. The data flow is organized as a workflow using a high-level description language. A workflow represents a Petri net of states and transitions. So-called data tokens are manipulated by those transitions as they migrate from one state to the next. To define these transitions, the domain expert provides the calculation routines in a programming language of their choice. This makes GPI-Space a powerful and convenient tool for the parallelization of new applications as well as of legacy code.

GPI-Space is comprised of three building blocks: a workflow engine, a distributed runtime system and a virtual memory layer. The workflow engine provides features like dynamic load balancing, overlap of communication and computation, and rescheduling in case of faulting transitions. Arbitrary application patterns are supported, not just a single pattern such as MapReduce. Furthermore, GPI-Space is not restricted to batch processing but also supports processing on live data streams or any combination of both. The virtual memory layer forms a Partitioned Global Address Space (PGAS), allowing data to be stored in memory, and provides highly efficient inter-node communication routines based on the Global Address Space Programming Interface (GPI-2).

This course will give an introduction to GPI-Space. It is held by GPI-Space experts from Fraunhofer ITWM and is targeted at professional developers as well as students. Participants will get an introduction to the basics of GPI-Space and its different components in an interactive way, including live demos and hands-on examples. After the course they will have a good understanding of how GPI-Space can increase the performance and efficiency of their own applications.

Training course "GPU Programming using CUDA" @ HLRS

Begin: 22.Oct.2014 12:30
End: 24.Oct.2014 16:00
Venue: HLRS Stuttgart

The course provides an introduction to the programming language CUDA, which is used to write fast numeric algorithms for NVIDIA graphics processors (GPUs). The focus is on basic usage of the language, exploitation of the most important features of the device (massively parallel computation, shared memory, texture memory), and efficient use of the hardware to maximize performance. An overview of the available development tools and the advanced features of the language is given.

Training course "C/C++ Workshop" @ LRZ

Begin: 03.Nov.2014 09:30
End: 07.Nov.2014 12:30
Venue: LRZ Garching

This five-day workshop gives an introduction to the C and C++ programming languages. The first day of the course will be dedicated to the C language, understanding basic computing concepts, programming and debugging. The two following days will introduce the students to object-oriented programming in C++.

PATC training course "Industrial Services of the National HPC Centre Stuttgart" @ HLRS

Begin: 05.Nov.2014 14:00
End: 05.Nov.2014 18:30
Venue: HLRS Stuttgart

To permanently assure their competitiveness, enterprises and institutions are increasingly required to deliver the highest performance. Powerful computers, among the best in the world, can reliably support them in doing so.

This course is targeted at decision makers in companies who would like to learn more about the advantages of using high performance computers in their field of business. They will be given extensive information about the properties and capabilities of the computers as well as access methods and security aspects. In addition, we present our comprehensive service offering, ranging from individual consulting and training courses to visualization. Finally, real-world examples will give an interesting insight into our current activities.

Training course "Data analysis and data mining with Python" @ JSC

Begin: 17.Nov.2014 09:00
End: 19.Nov.2014 16:30
Venue: JSC, Jülich

Pandas, matplotlib, and scikit-learn make Python a powerful tool for data analysis, data mining, and visualization. All of these packages and many more can be combined with IPython to provide an interactive extensible environment.

In this course, we will explore matplotlib for visualization, pandas for time series analysis, and scikit-learn for data mining. We will use IPython to show how these and other tools can be used to facilitate interactive data analysis and exploration.

Day 1: Basic data analysis and visualization

  • Introduction to IPython for interactive data analysis.
  • pandas
  • NumPy
  • matplotlib

Day 2: Advanced data analysis and visualization

  • pandas
  • Statsmodels
  • Mayavi2

Day 3: Advanced topics

  • Portable data formats
  • scikit-learn
  • PyMAFIA

This course is aimed at scientists who wish to explore the productivity gains made possible by Python for data analysis.

Training course "Introduction to the programming and usage of the supercomputing resources at Jülich" @ JSC

Begin: 27.Nov.2014 13:00
End: 28.Nov.2014 16:30
Venue: JSC, Jülich

Through the John von Neumann Institute for Computing, Research Centre Jülich provides two major high-performance computing resources to scientific user groups from throughout Germany and Europe. The aim of this course is to give new users of the supercomputing resources an introductory overview of the systems and their usage, and to help them in making efficient use of their allocated resources.

Training course "Introduction to OpenFOAM" @ LRZ

Begin: 01.Dec.2014 09:00
End: 03.Dec.2014 17:00
Venue: LRZ Garching

This three-day introductory course into OpenFOAM® is intended for new users who want to learn the basic concepts of its usage and want to know how to modify existing applications or add new functionalities.
Among others the course covers the topics:

  • Introduction to working with Linux
  • OpenFOAM® file structure and case setup
  • Setting up and running simulations
  • Evaluating and visualizing results with ParaView
  • Selection of numerical methods
  • Creating and converting meshes
  • Computing in parallel and acceleration on GPUs
  • Implementation of new methods
  • Advanced topics in OpenFOAM®

The course is held from 9:00 to 17:00 each day and comprises approximately 4 hours of lectures and 3 hours of hands-on sessions; the course language is English.

Training course "Introduction to parallel programming with MPI and OpenMP" @ JSC

Begin: 01.Dec.2014 09:00
End: 03.Dec.2014 16:30
Venue: JSC, Jülich

This course is given in German.

The focus is on the programming models MPI and OpenMP. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of the Message Passing Interface (MPI) and the shared memory directives of OpenMP. To facilitate efficient use of modern hardware architectures in high performance computing, this course also includes the newest features of MPI-3.0, such as enhancements of the cluster and shared memory one-sided communication, sparse collective neighborhood communication, and a new Fortran interface.

This course provides scientific training in Computational Science and, in addition, fosters the scientific exchange of the participants among themselves. It is organized by JSC in collaboration with HLRS. (Content Level: 70% for beginners, 30% advanced)

PATC training course "Node-level performance engineering" @ LRZ

Begin: 04.Dec.2014 09:00
End: 05.Dec.2014 17:00
Venue: LRZ Garching

This course teaches performance engineering approaches on the compute node level. "Performance engineering" as we define it is more than employing tools to identify hotspots and bottlenecks. It is about developing a thorough understanding of the interactions between software and hardware. This process must start at the core, socket, and node level, where the code gets executed that does the actual computational work. Once the architectural requirements of a code are understood and correlated with performance measurements, the potential benefit of optimizations can often be predicted. We introduce a "holistic" node-level performance engineering strategy, apply it to different algorithms from computational science, and also show how an awareness of the performance features of an application may lead to notable reductions in power consumption.

Training course "Fortran for Scientific Computing" @ HLRS

Begin: 08.Dec.2014 10:00
End: 12.Dec.2014 15:00
Venue: HLRS Stuttgart

This course is dedicated to scientists and students who want to learn (sequential) programming of scientific applications with Fortran. The course teaches the newest Fortran standards. Hands-on sessions will allow users to immediately test and understand the language constructs. This workshop provides scientific training in Computational Science and, in addition, fosters the scientific exchange of the participants among themselves.