Intel® MPI Library Reference Manual for Linux* OS
Use this option to display Intel® MPI Library version information.
Use this option to display the mpiexec help message.
where:
<arg> = {<dir_name>, <configuration_file>}.
Use this option to optimize the Intel® MPI Library performance using data collected by the mpitune utility.
If <arg> is not specified, the best-fit tuning options are selected for the given configuration. By default, the configuration file is located in the <installdir>/<arch>/etc directory. You can override this default location by setting <arg>=<dir_name>. The specified configuration file is used if you set <arg>=<configuration_file>.
See Automatic Tuning Utility for more details.
If <arg> does not point to a configuration file, set the I_MPI_FABRICS environment variable. If I_MPI_FABRICS is not set, the performance data is not found and warnings are printed.
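For example, the following command runs an application with the best-fit tuning options selected automatically (the executable name and process count are placeholders):
$ mpiexec -tune -n 4 ./myprog
Assuming a previously generated tuned configuration file app_tuned.conf (an illustrative name), you can pass it as <arg>:
$ mpiexec -tune ./app_tuned.conf -n 4 ./myprog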
Use this option to avoid running <executable> on the host where mpiexec is launched. This option is useful for clusters that deploy a dedicated master node for starting MPI jobs and a set of compute nodes for running the actual MPI processes.
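For example, assuming a dedicated master node, the following command (with a placeholder executable name) starts 16 processes on the compute nodes only, skipping the node where mpiexec runs:
$ mpiexec -nolocal -n 16 ./myprog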
Use this option to place the indicated number of consecutive MPI processes on every host in a group using round robin scheduling. The total number of processes to start is controlled by the -n option.
The mpiexec command controls how the ranks of the processes are allocated to the nodes in the cluster. By default, mpiexec uses round-robin assignment of ranks to nodes, executing consecutive MPI processes on all processor cores.
To change this default behavior, set the number of processes per host by using the -perhost option, and set the total number of processes by using the -n option. See Local Options for details. The first <# of processes> indicated by the -perhost option is executed on the first host; the next <# of processes> is executed on the next host, and so on.
See also the I_MPI_PERHOST environment variable.
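For example, the following command (the executable name is a placeholder) starts eight processes and places two consecutive ranks on each host: ranks 0 and 1 on the first host, ranks 2 and 3 on the second, and so on:
$ mpiexec -perhost 2 -n 8 ./myprog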
-rr
Use this option to execute consecutive MPI processes on different hosts using round robin scheduling. This option is equivalent to -perhost 1.
-grr <# of processes>
Use this option to place the indicated number of consecutive MPI processes on every host using round robin scheduling. This option is equivalent to -perhost <# of processes>.
-ppn <# of processes>
Use this option to place the indicated number of consecutive MPI processes on every host using round robin scheduling. This option is equivalent to -perhost <# of processes>.
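For example, the following command (the executable name and counts are placeholders) places four consecutive processes on every host until all 16 processes are placed:
$ mpiexec -ppn 4 -n 16 ./myprog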
-machinefile <machine file>
Use this option to control the process placement through <machine file>. The total number of processes to start is controlled by the -n option.
A machine file is a list of fully qualified or short host names, one name per line. Blank lines and lines that start with # as the first character are ignored.
By repeating a host name, you place additional processes on this host. You can also use the following format to avoid repetition of the same host name: <host name>:<number of processes>. For example, the following machine file:
host1
host1
host2
host2
host3
is equivalent to:
host1:2
host2:2
host3
You can also specify the network interface to be used for communication on each node: <host name>:<number of processes> [ifhn=<interface_host_name>].
The -machinefile, -ppn, -rr, and -perhost options are intended for process distribution. If used simultaneously, -machinefile takes precedence.
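For example, assuming the machine file above is saved as machines (an illustrative file name), the following command starts five processes: two on host1, two on host2, and one on host3:
$ mpiexec -machinefile machines -n 5 ./myprog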
Use this option to specify the file <filename> that contains the command-line options. Blank lines and lines that start with # as the first character are ignored. For example, a configuration file with the following commands runs the executable files a.out and b.out using the shm:dapl fabric over host1 and host2, respectively:
-host host1 -env I_MPI_DEBUG 2 -env I_MPI_FABRICS shm:dapl -n 2 ./a.out
-host host2 -env I_MPI_DEBUG 2 -env I_MPI_FABRICS shm:dapl -n 2 ./b.out
To launch an MPI application according to the parameters above, use:
$ mpiexec -configfile <filename>
Use this option to apply the named local option <l-option> globally. See Local Options for a list of all local options. The -genvuser option is applied by default during application startup.
Local options have higher priority than global options.
Use this option to set the <ENVVAR> environment variable to the specified <value> for all MPI processes.
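For example, the following command (the executable name is a placeholder) sets the I_MPI_DEBUG environment variable to 2 for all four MPI processes:
$ mpiexec -genv I_MPI_DEBUG 2 -n 4 ./myprog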
Use this option to propagate all user environment variables to all MPI processes, with the exception of the following system environment variables: $HOSTNAME, $HOST, $HOSTTYPE, $MACHTYPE, $OSTYPE. This is the default setting.
Use this option to enable propagation of all environment variables to all MPI processes.
Use this option to suppress propagation of any environment variables to any MPI processes.
(SDK only) -trace [<profiling_library>] or -t [<profiling_library>]
Use this option to profile your MPI application using the indicated <profiling_library>. If <profiling_library> is not specified, the default profiling library libVT.so is used.
Set the I_MPI_JOB_TRACE_LIBS environment variable to override the default profiling library.
It is not necessary to link your application against the profiling library before execution.
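For example, the following command (the executable name is a placeholder) profiles the application with the default libVT.so profiling library:
$ mpiexec -trace -n 4 ./myprog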
(SDK only) -check_mpi [<checking_library>]
Use this option to check your MPI application using the indicated <checking_library>. If <checking_library> is not specified, the default checking library libVTmc.so is used.
Set the I_MPI_JOB_CHECK_LIBS environment variable to override the default checking library.
It is not necessary to link your application against the checking library before execution.
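For example, the following command (the executable name is a placeholder) checks the application with the default libVTmc.so checking library:
$ mpiexec -check_mpi -n 4 ./myprog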
Use this option to run <executable> under the TotalView* debugger. For example:
$ mpiexec -tv -n <# of processes> <executable>
See Environment Variables for information on how to select the TotalView* executable file.
Set the environment variable TVDSVRLAUNCHCMD=ssh because TotalView* uses rsh by default.
The TotalView* debugger can display the message queue state of your MPI program. To use the state display feature, follow these steps:
Run your <executable> with the -tv option:
$ mpiexec -tv -n <# of processes> <executable>
Answer Yes to the question about stopping the Python* job.
To display the internal state of the MPI library textually, select the Tools > Message Queue command. If you select the Process Window Tools > Message Queue Graph command, TotalView* displays a window that shows a graph of the current message queue state. For more information, see TotalView*.
Use this option to attach the TotalView* debugger to existing <jobid>. For example:
$ mpiexec -tva <jobid>
Use this option to run <executable> for later attachment with the TotalView* debugger. For example:
$ mpiexec -tvsu -n <# of processes> <executable>
To debug the running Intel® MPI Library job, attach TotalView* to the Python* instance that is running the mpiexec script.
Use this option to run <executable> under the GNU* debugger. For example:
$ mpiexec -gdb -n <# of processes> <executable>
Use this option to attach the GNU* debugger to the existing <jobid>. For example:
$ mpiexec -gdba <jobid>
Use this option to assign <alias> to the job.
Use this option to avoid intermingling of data output by the MPI processes. This option affects both the standard output and standard error streams.
For this option to work, the last line output by each process must end with the end-of-line (\n) character. Otherwise the application may stop responding.
Use this option to merge output lines.
Use this option to insert the MPI process rank at the beginning of all lines written to the standard output.
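For example, with this option each output line is prefixed with the rank that produced it. The command and the output shown below are illustrative:
$ mpiexec -l -n 2 ./myprog
0: Hello from rank 0
1: Hello from rank 1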
Use this option to direct standard input to the specified MPI processes.
<spec>            Define MPI process ranks
all               Use all processes
<l>,<m>,<n>       Specify an exact list and use processes <l>, <m>, and <n> only. The default value is zero
<k>,<l>-<m>,<n>   Specify a range and use processes <k>, <l> through <m>, and <n>
Use this option to disable processing of the mpiexec configuration files described in the section Configuration Files.
Use this option to specify the network interface for communication with the local MPD daemon, where <interface/hostname> is an IP address or a hostname associated with the alternative network interface.
Use this option to output XML exit codes to the file <filename>.
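For example, the following command (the file and executable names are placeholders) writes the XML exit codes to exit_codes.xml:
$ mpiexec -ecfn exit_codes.xml -n 4 ./myprog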