MPI

Message Passing Interface (MPI) is a standard that describes message exchange in parallel computing on distributed computer systems.

Services:
- Provision
- Provision of documentation
- Operation and maintenance of the license servers
- Technical consulting on usage
- Training courses
- Support for usage on HPC systems
Example of a Fortran MPI program:
...
include 'mpif.h'
...
call MPI_INIT( ierr )
call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
if ( myid .eq. 0 ) then
   call MPI_SEND(.....)
else
   call MPI_RECV(.....)
endif
...
call MPI_FINALIZE(ierr)
...
At the SCC, MPI can be used on all parallel computers. The OpenMPI and Intel MPI implementations are available.
MPI on the HPC systems of the SCC
Various programming concepts are used to write parallel applications and are consequently offered on the HPC systems. This includes concepts for programming distributed-memory computers as well as shared-memory computers. On distributed-memory computers, the "message passing" programming model is used most often, i.e. the programmer must insert calls to a communication library into the program to transfer data from one task to another. In recent years, the Message Passing Interface (MPI) has become the de facto standard for this kind of parallel programming. On the HPC systems, MPI is part of the parallel environment. You will find information on the following topics:
- Compiling and linking MPI programs
- Execution of parallel programs
- Courses on MPI
- Documentation about MPI
Compiling and linking MPI programs
There are special compiler scripts to compile and link MPI programs. These scripts start with the prefix mpi:
mpicc: compile and link C programs
mpiCC: compile and link C++ programs
mpif77 or mpif90: compile and link Fortran programs
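As a sketch, compiling with these wrapper scripts could look as follows; the source and program names are illustrative, and further options are passed through to the underlying compiler:

```shell
# Compile and link a C MPI program (file name hello.c is illustrative)
mpicc -O2 -o hello hello.c

# Compile and link a Fortran 90 MPI program
mpif90 -O2 -o hello_f90 hello.f90
```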
Further information about MPI can be found in the respective User Guide or in the manufacturer-specific information on the "Online Manuals" website.
Execution of parallel programs
Parallel programs can be started interactively or under the control of a batch system. When starting programs interactively, it is not possible to use a node other than the one on which you are logged in.
The syntax to start parallel applications is
mpirun [ mpirun_options ] program
or
mpirun [ mpirun_options ] -f appfile (when using OpenMPI)
or
mpirun [ mpirun_options ] exe1:exe2:... (when using Intel MPI)
both for interactive calls and for calls in shell scripts to run batch jobs. The mpirun_options are different for OpenMPI and Intel MPI.
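For illustration, an interactive start of four processes of a program ./myprog (a hypothetical name) could look like this; the exact spelling of the process-count option depends on the MPI implementation and version:

```shell
# Start 4 MPI processes of ./myprog on the node you are logged in on
mpirun -n 4 ./myprog
```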
To run a parallel application as a batch job, the shell script usually required for the sbatch command must contain the mpirun command together with the application to be started.
Important for understanding: the -n # option is required when mpirun is called interactively, but is ignored when mpirun is called in batch jobs (the number of processors used in batch jobs is controlled by an option of the sbatch command). There is no option to specify the number of nodes!
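Putting this together, a batch script passed to sbatch could look like the following sketch. All resource values and the program name are illustrative assumptions; note that mpirun is called without -n, because in batch jobs the number of processes is taken from the batch system:

```shell
#!/bin/bash
#SBATCH --ntasks=4          # number of MPI processes (illustrative value)
#SBATCH --time=00:10:00     # wall-clock time limit (illustrative value)

# mpirun without -n: the process count is controlled by sbatch, not mpirun
mpirun ./myprog
```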
Courses on MPI
- Two courses of the SCC (within a lecture) about programming with MPI and OpenMP
- Practical course on message-coupled parallel computers (with MPI), offered by the Chair of Computer Science for Engineers and Scientists (course materials)
Documentation about MPI
- MPI: A Message Passing Interface Standard
- Message Passing Interface (MPI) FAQ
- MPI documentation from LLNL