Dates and Guidelines Gauss Centre for Supercomputing e.V.


When applying for GCS computing time allocations, applicants must follow the application guidelines and be mindful of the submission deadlines.


GCS Large-Scale Calls

Large-scale projects are those that require a large number of core hours over a longer period of time. A project is classified as "large-scale" if it requires at least 35 million core-hours per year, combined across the systems available at the GCS member centres HLRS, JSC, and LRZ.
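To get a feel for the 35 million core-hour threshold, it can be converted into an average sustained core count. The small sketch below is purely illustrative and is not part of the GCS criteria:

```python
# Rough feasibility check: the average number of cores that would have to
# run continuously for a full year to consume a given allocation.
# All figures are illustrative, not GCS policy.

HOURS_PER_YEAR = 365 * 24  # 8760 hours

def sustained_cores(core_hours_per_year: float) -> float:
    """Average core count implied by an annual core-hour budget."""
    return core_hours_per_year / HOURS_PER_YEAR

print(round(sustained_cores(35_000_000)))  # ~3995 cores running year-round
```

In other words, the large-scale threshold corresponds to keeping roughly 4,000 cores busy around the clock for a year.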

Large-scale projects go through a competitive review and resource allocation process established by the GCS. A "Call for Large-Scale Projects" is published by the Gauss Centre twice a year. Calls usually close at the end of winter and at the end of summer of each year.

GCS Regular Calls

(A) Hazel Hen/Hawk and SuperMUC-NG:
Applications for GCS regular projects on the HLRS and LRZ HPC systems can be submitted at any time (so-called rolling calls).

(B) JUWELS:
Applications for GCS regular projects on the JSC HPC system can be submitted twice a year, at the same time as GCS large-scale projects (so-called cut-off calls).


The application procedures differ slightly between the three supercomputers and their sites. Therefore, please carefully read the following additional information on “How to Apply” for the individual GCS HPC systems:

Important Notice on How to Apply for Computing Time on Hazel Hen
Important Notice on How to Apply for Computing Time on JUWELS
Important Notice on How to Apply for Computing Time on SuperMUC-NG

The application form can be found here.

General Requirements for project applications:

Applications should be submitted in English.

Please structure the project application in the following way:

  • Outline of the scientific challenge and your approach towards its solution (with references)
  • Description of previous work in this field, exploratory studies including the experience and results obtained (with references)
  • Statement of the scientific goals of the research project
  • Description of the physical and mathematical methods employed in the project, including numerical algorithms
  • Detailed schedule of the project; in the case of a project extension, please give details of any changes to your plans.
  • Preliminary studies that show good scaling behaviour of the programs under production conditions (i.e. typical parameter sets and problem sizes of the planned project, including I/O).
  • A detailed description of the I/O behaviour of the application: the amount and size of files generated during typical runs, the I/O strategy used (MPI I/O, netCDF, HDF5, SIONlib, etc.), and an estimate of the storage requirements during the project (scratch disk space needed, and the amount of data to be transferred to and from the system during the project).
  • If the project is a continuation of a previous project, please primarily describe how this project differs from the previous year's application. Please upload the previous application document as supplementary material.
  • Please follow the form “Project Proposal” (link below) for a project description with respect to form, content and size.
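For the storage estimate requested in the I/O item above, a simple back-of-the-envelope calculation is usually sufficient. The sketch below is a hypothetical helper; all numbers are placeholders chosen for illustration, not GCS limits or requirements:

```python
# Hypothetical helper for estimating the scratch footprint of a
# production campaign. The run counts and file sizes are placeholders.

def scratch_estimate_tb(runs: int, files_per_run: int,
                        file_size_gb: float,
                        retained_fraction: float = 1.0) -> float:
    """Total scratch space in TB: runs x files x size, scaled by the
    fraction of output actually kept on disk."""
    total_gb = runs * files_per_run * file_size_gb * retained_fraction
    return total_gb / 1024  # GB -> TB

# Example: 200 runs, each writing 50 snapshots of 8 GB, keeping half:
print(f"{scratch_estimate_tb(200, 50, 8, 0.5):.1f} TB")
```

Stating the inputs of such an estimate (number of runs, files per run, typical file size, retention policy) in the proposal makes the resulting storage request easy for reviewers to verify.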

 Template: Project Proposal for Tier 0/Tier1 HPC Access at GCS (PDF, 300 kB)
 Template: Project Proposal for Tier 0/Tier1 HPC Access at GCS (docx, 239 kB)
 Template: Project Proposal for Tier 0/Tier1 HPC Access at GCS as a TeX file (tar, 408 kB)

Please make sure that all data is complete, and double-check it.

The project description must be uploaded via the application link as a PDF file. Please do not include any supporting material in the project description itself; if you wish to add supplemental material, please submit it as a separate PDF file.

Acknowledgement Requirements

GCS Large-Scale Projects

For all projects supported by a computing time grant through a GCS Large-Scale Call, the following acknowledgement should be added to each project report as well as to each paper submitted for external publication:

The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time on the GCS Supercomputer XXX (name of HPC system) at XXX (High-Performance Computing Center Stuttgart/HLRS, Leibniz Supercomputing Centre Garching/LRZ, or Jülich Supercomputing Centre/JSC).

GCS Regular projects

Please see the wording as listed here.