Intel® MPI Library Reference Manual for Linux* OS
I_MPI_HYDRA_HOST_FILE
Set the host file to run the application.
I_MPI_HYDRA_HOST_FILE=<arg>
HYDRA_HOST_FILE=<arg>
<arg> | String parameter
<hostsfile> | The full or relative path to the host file
Set this environment variable to specify the hosts file.
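For example, assuming a host file named ./hosts that lists one node per line (the node names, process count, and application name below are placeholders):
$ cat ./hosts
node1
node2
$ export I_MPI_HYDRA_HOST_FILE=./hosts
$ mpiexec.hydra -n 4 ./a.out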
I_MPI_HYDRA_DEBUG
Print out the debug information.
I_MPI_HYDRA_DEBUG=<arg>
<arg> | Binary indicator
enable | yes | on | 1 | Turn on the debug output
disable | no | off | 0 | Turn off the debug output. This is the default value
Set this environment variable to enable the debug mode.
I_MPI_HYDRA_ENV
Control the environment propagation.
I_MPI_HYDRA_ENV=<arg>
<arg> | String parameter
all | Pass the entire environment to all MPI processes
Set this environment variable to control the propagation of the environment to the MPI processes. By default, the entire environment of the launching node is passed to the MPI processes. Setting this variable also overwrites environment variables set by the remote shell.
I_MPI_JOB_TIMEOUT, I_MPI_MPIEXEC_TIMEOUT
(MPIEXEC_TIMEOUT)
Set the timeout period for mpiexec.hydra.
I_MPI_JOB_TIMEOUT=<timeout>
I_MPI_MPIEXEC_TIMEOUT=<timeout>
MPIEXEC_TIMEOUT=<timeout>
<timeout> | Define the mpiexec.hydra timeout period in seconds
<n> >= 0 | The default timeout value is zero, which means no timeout
Set this environment variable to make mpiexec.hydra terminate the job in <timeout> seconds after its launch. The <timeout> value should be greater than zero. Otherwise the environment variable setting is ignored.
Set the I_MPI_JOB_TIMEOUT environment variable in the shell environment before executing the mpiexec.hydra command. Do not use the -genv or -env options to set the <timeout> value. Those options are used for passing environment variables to the MPI process environment.
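For example, to terminate the job automatically after ten minutes (the timeout value, process count, and application name are placeholders):
$ export I_MPI_JOB_TIMEOUT=600
$ mpiexec.hydra -n 16 ./a.out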
I_MPI_JOB_TIMEOUT_SIGNAL
(MPIEXEC_TIMEOUT_SIGNAL)
Define the signal to be sent when a job is terminated because of a timeout.
I_MPI_JOB_TIMEOUT_SIGNAL=<number>
MPIEXEC_TIMEOUT_SIGNAL=<number>
<number> | Define the signal number
<n> > 0 | The default value is 9 (SIGKILL)
Define a signal number sent to stop the MPI job if the timeout period specified by the I_MPI_JOB_TIMEOUT environment variable expires. If you set a signal number unsupported by the system, the mpiexec.hydra operation prints a warning message and continues the task termination using the default signal number 9 (SIGKILL).
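For example, to let the application shut down gracefully on timeout, you might send SIGTERM (signal 15 on Linux*) instead of the default SIGKILL (the timeout value, process count, and application name are placeholders):
$ export I_MPI_JOB_TIMEOUT=600
$ export I_MPI_JOB_TIMEOUT_SIGNAL=15
$ mpiexec.hydra -n 16 ./a.out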
I_MPI_JOB_ABORT_SIGNAL
Define a signal to be sent to all processes when a job is terminated unexpectedly.
I_MPI_JOB_ABORT_SIGNAL=<number>
<number> | Define the signal number
<n> > 0 | The default value is 9 (SIGKILL)
Set this environment variable to define a signal for task termination. If you set an unsupported signal number, mpiexec.hydra prints a warning message and uses the default signal 9 (SIGKILL).
I_MPI_JOB_SIGNAL_PROPAGATION
(MPIEXEC_SIGNAL_PROPAGATION)
Control signal propagation.
I_MPI_JOB_SIGNAL_PROPAGATION=<arg>
MPIEXEC_SIGNAL_PROPAGATION=<arg>
<arg> | Binary indicator
enable | yes | on | 1 | Turn on propagation
disable | no | off | 0 | Turn off propagation. This is the default value
Set this environment variable to control propagation of the signals (SIGINT, SIGALRM, and SIGTERM). If you enable signal propagation, the received signal is sent to all processes of the MPI job. If you disable signal propagation, all processes of the MPI job are stopped with the default signal 9 (SIGKILL).
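For example, to forward a Ctrl+C (SIGINT) pressed in the launching terminal to all MPI processes instead of stopping them with SIGKILL (the process count and application name are placeholders):
$ export I_MPI_JOB_SIGNAL_PROPAGATION=enable
$ mpiexec.hydra -n 8 ./a.out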
I_MPI_HYDRA_BOOTSTRAP
Set the bootstrap server.
I_MPI_HYDRA_BOOTSTRAP=<arg>
<arg> | String parameter
ssh | Use secure shell. This is the default value
rsh | Use remote shell
pdsh | Use parallel distributed shell
pbsdsh | Use Torque* and PBS* pbsdsh command
fork | Use fork call
slurm | Use SLURM* srun command
ll | Use LoadLeveler* llspawn.stdio command
lsf | Use LSF* blaunch command
sge | Use Univa* Grid Engine* qrsh command
jmi | Use Job Manager Interface (tighter integration)
Set this environment variable to specify the bootstrap server.
Set the I_MPI_HYDRA_BOOTSTRAP environment variable in the shell environment before executing the mpiexec.hydra command. Do not use the -env option to set the <arg> value. This option is used for passing environment variables to the MPI process environment.
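For example, on a SLURM*-managed cluster you might select the srun-based bootstrap server in the shell before launching (the process count and application name are placeholders):
$ export I_MPI_HYDRA_BOOTSTRAP=slurm
$ mpiexec.hydra -n 32 ./a.out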
I_MPI_HYDRA_BOOTSTRAP_EXEC
Set the executable file to be used as a bootstrap server.
I_MPI_HYDRA_BOOTSTRAP_EXEC=<arg>
<arg> | String parameter
<executable> | The name of the executable file
Set this environment variable to specify the executable file to be used as a bootstrap server.
I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS
Set additional arguments for the bootstrap server.
I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS=<arg>
<arg> | String parameter
<args> | Additional bootstrap server arguments
Set this environment variable to specify additional arguments for the bootstrap server.
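A minimal sketch combining the two variables above, assuming a cluster where sshd listens on the non-default port 2222 (the ssh path, port number, process count, and application name are assumptions about your setup):
$ export I_MPI_HYDRA_BOOTSTRAP_EXEC=/usr/bin/ssh
$ export I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS="-p 2222"
$ mpiexec.hydra -n 8 ./a.out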
I_MPI_HYDRA_BOOTSTRAP_AUTOFORK
Control the usage of the fork call for local processes.
I_MPI_HYDRA_BOOTSTRAP_AUTOFORK=<arg>
<arg> | String parameter
enable | yes | on | 1 | Use fork for the local processes. This is the default value for the ssh, rsh, ll, lsf, and pbsdsh bootstrap servers
disable | no | off | 0 | Do not use fork for the local processes. This is the default value for the sge bootstrap server
Set this environment variable to control the usage of the fork call for local processes.
This option is not applicable to the slurm, pdsh, persist, and jmi bootstrap servers.
I_MPI_HYDRA_RMK
Use the resource management kernel.
I_MPI_HYDRA_RMK=<arg>
<arg> | String parameter
<rmk> | Resource management kernel. The only supported value is pbs
Set this environment variable to use the pbs resource management kernel. Intel® MPI Library only supports pbs.
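For example, inside a PBS* job script you might select the pbs resource management kernel (the process count and application name are placeholders):
$ export I_MPI_HYDRA_RMK=pbs
$ mpiexec.hydra -n 16 ./a.out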
I_MPI_HYDRA_PMI_CONNECT
Define the processing method for PMI messages.
I_MPI_HYDRA_PMI_CONNECT=<value>
<value> | The algorithm to be used
nocache | Do not cache PMI messages
cache | Cache PMI messages on the local pmi_proxy management processes to minimize the number of PMI requests. Cached information is automatically propagated to the child management processes
lazy-cache | The cache mode with on-demand propagation. This is the default value
alltoall | Information is automatically exchanged between all pmi_proxy management processes before any get request can be done
Use this environment variable to select the PMI message processing method.
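For example, to exchange the PMI information between all proxies up front with the alltoall method (the process count and application name are placeholders):
$ export I_MPI_HYDRA_PMI_CONNECT=alltoall
$ mpiexec.hydra -n 64 ./a.out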
I_MPI_PERHOST
Define the default settings for the -perhost option in the mpiexec and mpiexec.hydra commands.
I_MPI_PERHOST=<value>
<value> | Define a value that is used for the -perhost option by default
integer > 0 | Exact value for the option
all | All logical CPUs on the node
allcores | All cores (physical CPUs) on the node. This is the default value
Set this environment variable to define the default setting for the -perhost option. If the I_MPI_PERHOST environment variable is defined, the -perhost option is implied with the respective value.
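For example, to place one process per node by default without specifying -perhost on the command line (the process count and application name are placeholders):
$ export I_MPI_PERHOST=1
$ mpiexec.hydra -n 4 ./a.out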
I_MPI_JOB_TRACE_LIBS
Choose the libraries to preload through the -trace option.
I_MPI_JOB_TRACE_LIBS=<arg>
MPIEXEC_TRACE_LIBS=<arg>
<arg> | String parameter
<list> | Blank separated list of the libraries to preload. The default value is vt
Set this environment variable to choose an alternative library for preloading through the -trace option.
I_MPI_JOB_CHECK_LIBS
Choose the libraries to preload through the -check_mpi option.
I_MPI_JOB_CHECK_LIBS=<arg>
<arg> | String parameter
<list> | Blank separated list of the libraries to preload. The default value is vtmc
Set this environment variable to choose an alternative library for preloading through the -check_mpi option.
I_MPI_HYDRA_BRANCH_COUNT
Set the hierarchical branch count.
I_MPI_HYDRA_BRANCH_COUNT=<num>
<num> | Number
<n> >= 0 |
Set this environment variable to restrict the number of child management processes launched by the mpiexec.hydra operation or by each pmi_proxy management process.
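For example, to allow each management process to launch at most 16 child processes (the value 16, the process count, and the application name are placeholders):
$ export I_MPI_HYDRA_BRANCH_COUNT=16
$ mpiexec.hydra -n 128 ./a.out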
I_MPI_HYDRA_PMI_AGGREGATE
Turn on/off aggregation of the PMI messages.
I_MPI_HYDRA_PMI_AGGREGATE=<arg>
<arg> | Binary indicator
enable | yes | on | 1 | Enable PMI message aggregation. This is the default value
disable | no | off | 0 | Disable PMI message aggregation
Set this environment variable to enable/disable aggregation of PMI messages.
I_MPI_HYDRA_GDB_REMOTE_SHELL
Set the remote shell command to run the GNU* debugger.
I_MPI_HYDRA_GDB_REMOTE_SHELL=<arg>
<arg> | String parameter
ssh | Secure Shell (SSH). This is the default value
rsh | Remote Shell (RSH)
Set this environment variable to specify the remote shell command to run the GNU* debugger on the remote machines. You can use this environment variable to specify any shell command that has the same syntax as SSH or RSH.
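A minimal sketch, assuming you attach the debugger with the -gdb option of mpiexec.hydra and your cluster nodes are reachable through RSH (the process count and application name are placeholders):
$ export I_MPI_HYDRA_GDB_REMOTE_SHELL=rsh
$ mpiexec.hydra -gdb -n 4 ./a.out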
I_MPI_HYDRA_JMI_LIBRARY
Define the default setting of the JMI library.
I_MPI_HYDRA_JMI_LIBRARY=<value>
<value> | Define a string value, name, or path to the JMI dynamic library
libjmi_slurm.so.1.1 | libjmi_pbs.so.1.0 | Set the library name or the full path to the library. The default value is libjmi.so
Set this environment variable to define the JMI library to be loaded by the Hydra* process manager. Set the full path to the library if the path is not mentioned in the LD_LIBRARY_PATH environment variable. If you use the mpirun command, you do not need to set this environment variable. The JMI library is automatically detected and set.
I_MPI_HYDRA_IFACE
Set the network interface.
I_MPI_HYDRA_IFACE=<arg>
<arg> | String parameter
<network interface> | The network interface configured in your system
Set this environment variable to specify the network interface to use. For example, use -iface ib0 if the IP emulation of your InfiniBand* network is configured on ib0.
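Equivalently, you can set the environment variable in the shell before launching (ib0 is an assumption about your interface name; the process count and application name are placeholders):
$ export I_MPI_HYDRA_IFACE=ib0
$ mpiexec.hydra -n 8 ./a.out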
I_MPI_HYDRA_DEMUX
Set the demultiplexer (demux) mode.
I_MPI_HYDRA_DEMUX=<arg>
<arg> | String parameter
poll | Set poll as the multiple I/O demultiplexer (demux) mode engine. This is the default value
select | Set select as the multiple I/O demultiplexer (demux) mode engine
Set this environment variable to specify the multiple I/O demux mode engine. The default value is poll.
I_MPI_HYDRA_CLEANUP
Control the creation of the default mpicleanup input file.
I_MPI_HYDRA_CLEANUP=<value>
<value> | Binary indicator
enable | yes | on | 1 | Enable the mpicleanup input file creation
disable | no | off | 0 | Disable the mpicleanup input file creation. This is the default value
Set the I_MPI_HYDRA_CLEANUP environment variable to create the input file for the mpicleanup utility.
I_MPI_TMPDIR
Set the temporary directory.
I_MPI_TMPDIR=<arg>
<arg> | String parameter
<path> | Set the temporary directory. The default value is /tmp
Set this environment variable to specify the temporary directory to store the mpicleanup input file.
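A minimal sketch combining the two variables above (the directory /scratch/tmp is an assumption about your file system layout; the process count and application name are placeholders):
$ export I_MPI_HYDRA_CLEANUP=1
$ export I_MPI_TMPDIR=/scratch/tmp
$ mpiexec.hydra -n 16 ./a.out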
I_MPI_JOB_RESPECT_PROCESS_PLACEMENT
Specify whether to use the process-per-node parameter provided by the job scheduler.
I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=<arg>
<arg> | Binary indicator
enable | yes | on | 1 | Use the process placement provided by the job scheduler. This is the default value
disable | no | off | 0 | Do not use the process placement provided by the job scheduler
If you set I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=enable, the Hydra process manager uses the process-per-node (PPN) value provided by the job scheduler.
If you set I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=disable, the Hydra process manager uses the PPN value provided with a command line option or with the I_MPI_PERHOST environment variable.
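For example, to ignore the scheduler-provided placement and use the -perhost option instead (the per-host value, process count, and application name are placeholders):
$ export I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=disable
$ mpiexec.hydra -perhost 2 -n 8 ./a.out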
I_MPI_GTOOL
Specify the tools to be launched for selected ranks.
I_MPI_GTOOL="<command line for a tool 1>:<ranks set 1>[=exclusive][@arch 1]; <command line for a tool 2>:<ranks set 2>[=exclusive][@arch 2]; … ;<command line for a tool n>:<ranks set n>[=exclusive][@arch n]"
<arg> | Parameters
<command line for a tool> | Specify a tool along with its parameters
<rank set> | Specify the range of ranks that are involved in the tool execution. Separate ranks with a comma, or use the "-" symbol for a set of contiguous ranks. Note: If you specify an incorrect rank index, the tool prints a warning and continues working for the valid ranks.
[=exclusive] | Specify this parameter to prevent launching the tool for more than one rank per host. This parameter is optional.
[@arch] | Specify the architecture on which the tool is applied. For a given <rank set>, if you specify this parameter, the tool is applied only to the ranks allocated on hosts with the specified architecture. This parameter is optional. For the values of [@arch], see the argument table of I_MPI_PLATFORM for the detailed value descriptions. If you launch the debugger on the Intel® Xeon Phi™ coprocessor, setting [@arch] is required. See the examples for details.
Use this environment variable to launch tools such as Intel® VTune™ Amplifier XE, Valgrind*, and GNU* Debugger for the specified ranks.
The following command examples demonstrate different scenarios of using the I_MPI_GTOOL environment variable:
Launch Intel® VTune™ Amplifier XE and Valgrind* by setting the I_MPI_GTOOL environment variable:
$ export I_MPI_GTOOL="amplxe-cl -collect advanced-hotspots -analyze-system -r result1:5,3,7-9=exclusive@nhm;valgrind -log-file=log_%p :0,1,10-12@wsm"
$ mpiexec.hydra -n 16 a.out
Use this command to apply amplxe-cl to a rank with a minimal index allocated on the hosts with Intel® microarchitecture code name Nehalem from the given rank set. At the same time, Valgrind* is applied to all ranks allocated on the hosts with Intel® microarchitecture code name Westmere from the specified rank set. Valgrind* results are written to files named log_<process ID>.
Launch GNU* Debugger (GDB*) by setting the I_MPI_GTOOL environment variable:
$ mpiexec.hydra -n 16 -genv I_MPI_GTOOL="gdb:3,5,7-9" a.out
Use this command to apply gdb to the given rank set.
The -gtool and -gtoolfile options and the I_MPI_GTOOL environment variable are mutually exclusive. The -gtool and -gtoolfile options have the same priority: the first option specified in a command line is effective and the second one is ignored. Both the -gtool and -gtoolfile options have higher priority than the I_MPI_GTOOL environment variable. Thus, use the I_MPI_GTOOL environment variable if you have specified neither the -gtool nor the -gtoolfile option in the mpiexec.hydra command.
I_MPI_HYDRA_USE_APP_TOPOLOGY
Set the path to the native Intel® MPI Library statistics file.
I_MPI_HYDRA_USE_APP_TOPOLOGY=<value>
<value> | The path to the native Intel MPI statistics file of level 1 or higher
If you define I_MPI_HYDRA_USE_APP_TOPOLOGY, the Hydra process manager (PM) performs rank placement based on the data transferred from the statistics file and the cluster topology.
$ mpiexec.hydra -use-app-topology ./stats.txt <…> ./my_app
The Hydra PM uses the API of libmpitune.so in the same way as mpitune_rank_placement in the static method, and uses the resulting host list for rank assignment.
See the description of -use-app-topology and Topology Awareness Application Tuning for more details.