Simplified Job Startup Command

mpirun

Syntax

mpirun <options>

where <options>:= <mpiexec.hydra options> | [ <mpdboot options> ] <mpiexec options>

Arguments

<mpiexec.hydra options>

mpiexec.hydra options as described in the mpiexec.hydra section. This is the default operation mode.

<mpdboot options>

mpdboot options as described in the mpdboot command description, except -n

<mpiexec options>

mpiexec options as described in the mpiexec section

Description

Use this command to launch an MPI job. The mpirun command uses Hydra* or MPD* as the underlying process manager; Hydra* is the default. Set the I_MPI_PROCESS_MANAGER environment variable to change the default.

The mpirun command detects whether the MPI job is submitted from within a session allocated by a job scheduler such as Torque*, PBS Pro*, LSF*, Parallelnavi* NQS*, SLURM*, Univa* Grid Engine*, or LoadLeveler*. The mpirun command extracts the host list from the respective scheduler environment and uses these nodes automatically.

In this case, you do not need to create the mpd.hosts file. Allocate the session using a job scheduler installed on your system, and use the mpirun command inside this session to run your MPI job.
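For example, under SLURM* a job script only needs to allocate the session and call mpirun; the following batch script is an illustrative sketch (the resource counts and program name are assumptions, not part of this document):

```shell
#!/bin/bash
# Illustrative SLURM* batch script: mpirun picks up the allocated
# node list from the scheduler environment, so no mpd.hosts file
# or explicit host list is needed.
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=2

mpirun -n 8 ./myprog
```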

Example

$ mpirun -n <# of processes> ./myprog

This command invokes the mpiexec.hydra command, since Hydra is the default process manager.

Hydra* Specification

If you select Hydra* as the active process manager, the mpirun command silently ignores the MPD-specific options for compatibility reasons. The following sections list the silently ignored and unsupported MPD* options. Avoid the unsupported options when the Hydra* process manager is used.

Ignored mpdboot Options

--loccons
--remcons
--ordered | -o
--maxbranch=<maxbranch> | -b <maxbranch>
--parallel-startup | -p

Ignored mpiexec Options

-[g]envuser
-[g]envexcl
-m
-ifhn <interface/hostname>
-ecfn <filename>
-tvsu

Unsupported mpdboot Options

--user=<user> | -u <user>
--mpd=<mpdcmd> | -m <mpdcmd>
--shell | -s
-1
--ncpus=<ncpus>

Unsupported mpiexec Options

-a

MPD* Specification

If you select MPD* as the process manager, the mpirun command automatically starts an independent ring of the mpd daemons, launches an MPI job, and shuts down the mpd ring upon job termination.

The first non-mpdboot option (including -n or -np) delimits the mpdboot and the mpiexec options. All options up to this point, excluding the delimiting option, are passed to the mpdboot command. All options from this point on, including the delimiting option, are passed to the mpiexec command.
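For example, in the following command line (the host file name and process count are illustrative), the options before the delimiting -n go to mpdboot and the rest go to mpiexec:

```shell
# "-r ssh -f mpd.hosts" precedes the first non-mpdboot option,
# so it is passed to mpdboot; "-n 4 ./myprog" (the delimiting -n
# and everything after it) is passed to mpiexec.
$ mpirun -r ssh -f mpd.hosts -n 4 ./myprog
```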

All configuration files and environment variables applicable to the mpdboot and mpiexec commands also apply to the mpirun command.

The set of hosts is defined by the following rules, which are executed in this order:

  1. All host names from the mpdboot host file (either mpd.hosts or the file specified by the -f option).

  2. All host names returned by the mpdtrace command, if there is an mpd ring running.

  3. The local host (a warning is issued in this case).

I_MPI_MPIRUN_CLEANUP

Control the environment cleanup after the mpirun command.

Syntax

I_MPI_MPIRUN_CLEANUP=<value>

Arguments

<value>

Binary indicator

enable | yes | on | 1

Enable the environment cleanup

disable | no | off | 0

Disable the environment cleanup. This is the default value

Description

Use this environment variable to define whether to clean up the environment upon mpirun completion. The cleanup includes the removal of any stray service processes, temporary files, and so on.
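For example, to enable the cleanup for subsequent runs in the current shell (the process count and program name are illustrative):

```shell
$ export I_MPI_MPIRUN_CLEANUP=enable
$ mpirun -n 4 ./myprog
```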

I_MPI_PROCESS_MANAGER

Select a process manager to be used by the mpirun command.

Syntax

I_MPI_PROCESS_MANAGER=<value>

Arguments

<value>

String value

hydra

Use the Hydra* process manager. This is the default value

mpd

Use the MPD* process manager

Description

Set this environment variable to select the process manager to be used by the mpirun command.

Note

You can run each process manager directly by invoking the mpiexec command for MPD* and the mpiexec.hydra command for Hydra*.
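For example, either of the following invocations (the process count and program name are illustrative) runs the job under the MPD* process manager:

```shell
# Select MPD for all subsequent runs in this shell:
$ export I_MPI_PROCESS_MANAGER=mpd
$ mpirun -n 4 ./myprog

# Alternatively, invoke the MPD-based launcher directly
# (this requires a running mpd ring):
$ mpiexec -n 4 ./myprog
```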

I_MPI_YARN

Set this variable when running on a YARN*-managed cluster.

Syntax

I_MPI_YARN=<value>

Arguments

<value>

Binary indicator

enable | yes | on | 1

Enable YARN support

disable | no | off | 0

Disable YARN support. This is the default value.

Description

Set this environment variable to make Hydra request resources from the YARN cluster manager prior to running an MPI job. Use this functionality only when you launch MPI on a YARN-managed cluster with Llama* installed (for example, on a cluster with the Cloudera* Distribution for Hadoop*).

Usage Example

Verify that YARN is configured to work properly with Llama (refer to the Llama documentation for the specific configuration details) and that an Apache* Thrift* installation is available on the cluster.

  1. Make sure Llama is started on the same host where YARN is running, or start it by issuing the following command as the llama user:

    $ llama [--verbose &]

  2. Make sure passwordless ssh is configured on the cluster.

  3. Set the I_MPI_YARN environment variable:

    $ export I_MPI_YARN=1

  4. Either set I_MPI_THRIFT_PYTHON_LIB to point to Thrift's Python* modules, or add these modules explicitly to PYTHONPATH.

  5. Set I_MPI_LLAMA_HOST/I_MPI_LLAMA_PORT to point to the Llama server host/port (by default, it is localhost:15000, so you can skip this step if you launch MPI from the same host where the Llama service is running).

  6. Launch an MPI job as usual (do not specify hosts or a machine file explicitly; the resources are allocated automatically by YARN):

    $ mpirun -n 16 -ppn 2 [other IMPI options] <application>
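
The steps above can be condensed into the following sketch (the Thrift module path is a hypothetical placeholder; substitute the actual location of Thrift's Python* modules on your system):

```shell
$ export I_MPI_YARN=1
# Hypothetical placeholder path to Thrift's Python modules:
$ export PYTHONPATH=/path/to/thrift/python/modules:$PYTHONPATH
# Defaults shown; only needed when Llama runs on a different host/port:
$ export I_MPI_LLAMA_HOST=localhost
$ export I_MPI_LLAMA_PORT=15000
$ mpirun -n 16 -ppn 2 <application>
```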

Note

The functionality is available with the Hydra process manager only.