Intel® MPI Library Reference Manual for Linux* OS
mpirun <options>
where <options>:= <mpiexec.hydra options> | [ <mpdboot options> ] <mpiexec options>
Use this command to launch an MPI job. The mpirun command uses Hydra* or MPD as the underlying process managers. Hydra is the default process manager. Set the I_MPI_PROCESS_MANAGER environment variable to change the default value.
The mpirun command detects whether the MPI job is submitted from within a session allocated by a job scheduler such as Torque*, PBS Pro*, LSF*, Parallelnavi* NQS*, SLURM*, Univa* Grid Engine*, or LoadLeveler*. In this case, the mpirun command extracts the host list from the respective environment and uses these nodes automatically.
In this case, you do not need to create the mpd.hosts file. Allocate the session using a job scheduler installed on your system, and use the mpirun command inside this session to run your MPI job.
$ mpirun -n <# of processes> ./myprog
This command invokes the mpiexec.hydra command which uses the Hydra Process Manager by default.
Hydra* Specification
If Hydra* is the active process manager, the mpirun command silently ignores MPD-specific options for compatibility reasons. The following table lists the silently ignored and unsupported MPD options. Avoid the unsupported options when the Hydra* process manager is used.
| Ignored mpdboot Options | Ignored mpiexec Options | Unsupported mpdboot Options | Unsupported mpiexec Options |
|---|---|---|---|
MPD* Specification
If you select MPD* as the process manager, the mpirun command automatically starts an independent ring of the mpd daemons, launches an MPI job, and shuts down the mpd ring upon job termination.
The first non-mpdboot option (including -n or -np) delimits the mpdboot and the mpiexec options. All options up to this point, excluding the delimiting option, are passed to the mpdboot command. All options from this point on, including the delimiting option, are passed to the mpiexec command.
All configuration files and environment variables applicable to the mpdboot and mpiexec commands also apply to the mpirun command.
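As an illustration of the delimiting rule above, consider a hypothetical invocation (the host file name and program are placeholders):

```shell
# Everything before the first mpiexec option (-n here) goes to mpdboot;
# -n and everything after it go to mpiexec:
#
#   $ mpirun -r ssh -f mpd.hosts -n 4 ./myprog
#
# mpdboot receives:  -r ssh -f mpd.hosts
# mpiexec receives:  -n 4 ./myprog
```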
The set of hosts is defined by the following rules, which are checked in this order:

1. All host names from the mpdboot host file (either mpd.hosts or the file specified by the -f option).
2. All host names returned by the mpdtrace command, if there is an mpd ring running.
3. The local host (a warning is issued in this case).
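For instance, a minimal mpdboot host file simply lists one host name per line. A sketch, with placeholder node names:

```shell
# Create a minimal mpd.hosts file (node names are hypothetical):
cat > mpd.hosts <<'EOF'
node01
node02
EOF

# mpirun would then pick up these hosts, either by finding mpd.hosts
# in the current directory or via the -f option:
#   $ mpirun -f mpd.hosts -n 4 ./myprog
```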
Control the environment cleanup after the mpirun command.
I_MPI_MPIRUN_CLEANUP=<value>
<value>                  Define the option
enable | yes | on | 1    Enable the environment cleanup
disable | no | off | 0   Disable the environment cleanup. This is the default value
Use this environment variable to define whether to clean up the environment upon mpirun completion. The cleanup includes removing any stray service processes, temporary files, and so on.
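For example, to turn the cleanup on for the current shell session:

```shell
# Enable environment cleanup after mpirun completes
# (cleanup is disabled by default):
export I_MPI_MPIRUN_CLEANUP=enable
```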
Select a process manager to be used by the mpirun command.
I_MPI_PROCESS_MANAGER=<value>
<value>    String value
hydra      Use the Hydra* process manager. This is the default value
mpd        Use the MPD* process manager
Set this environment variable to select the process manager to be used by the mpirun command.
You can run each process manager directly by invoking the mpiexec command for MPD* and the mpiexec.hydra command for Hydra*.
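For example, to switch mpirun to the MPD* process manager for the current session:

```shell
# Select MPD instead of the default Hydra process manager:
export I_MPI_PROCESS_MANAGER=mpd

# Subsequent mpirun invocations now use MPD, e.g.:
#   $ mpirun -n 4 ./myprog
```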
Set this variable when running on a YARN*-managed cluster.
I_MPI_YARN=<value>
<value>                  Binary indicator
enable | yes | on | 1    Enable YARN support
disable | no | off | 0   Disable YARN support. This is the default value
Set this environment variable to make Hydra request resources from the YARN cluster manager prior to running an MPI job. Use this functionality only when you launch MPI on a YARN-managed cluster with Llama* installed (for example, on a cluster with the Cloudera* Distribution for Hadoop*).
Verify that YARN is configured to work properly with Llama (refer to the Llama documentation for the specific configuration details), and the Apache* Thrift* installation is available on the cluster.
Make sure Llama is started on the same host where YARN is running, or start it by issuing the following command as the llama user:
$ llama [--verbose &]
Make sure passwordless ssh is configured on the cluster.
Set the I_MPI_YARN environment variable:
$ export I_MPI_YARN=1
Either set I_MPI_THRIFT_PYTHON_LIB to point to Thrift's Python* modules, or add these modules explicitly to PYTHONPATH.
Set I_MPI_LLAMA_HOST/I_MPI_LLAMA_PORT to point to the Llama server host/port (by default it is localhost:15000, so you can skip this step if launching MPI from the same host where the Llama service is running).
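For example, if the Llama server runs on a different host (the host name below is a placeholder):

```shell
# Point Hydra at a remote Llama server instead of the
# default localhost:15000 (values are hypothetical):
export I_MPI_LLAMA_HOST=llama-master
export I_MPI_LLAMA_PORT=15000
```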
Launch an MPI job as usual (do not specify hosts or a machine file explicitly; the resources are allocated automatically by YARN):
$ mpirun -n 16 -ppn 2 [other IMPI options] <application>
The functionality is available with the Hydra process manager only.