This course covers parallel profiling and the techniques that help your code achieve the best possible performance, such as domain decomposition and parallel I/O. Each session includes hands-on exercises to reinforce the different constructs. You will also gain insight into useful parallel libraries and routines for scientific code development.
In this course you will:
- Understand how to work with MPI and OpenMP with many examples from scientific applications
- Learn when and how to apply different parallelization strategies
- Experience developing and optimizing code step by step for use on a supercomputer
Prerequisites:
- Basic knowledge of Linux
- Basic knowledge of programming, particularly with C/C++ or Fortran
- Basic knowledge of parallel computing. No specific experience with supercomputing systems is necessary.
- Basic knowledge of MPI and OpenMP constructs
- Your own laptop with an up-to-date browser and a terminal emulator. Linux or macOS is preferred, but not mandatory; for Windows users we recommend downloading MobaXterm (portable version) as a terminal emulator.
Some of the materials for this course are kindly provided through the collaboration between PRACE and HLRS. The course is also sponsored by the CompBioMed Center of Excellence and benefits from the collaboration of its expert application developers.
Not familiar with MPI/OpenMP?
You can acquire the necessary background just in time in the basic course on 7-8 June 2021.