Intel® MPI Library Reference Manual for Linux* OS
The Intel® MPI Library provides support for the MPI-2 process model that allows creation and cooperative termination of processes after an MPI application has started. It provides the following:
a mechanism to establish communication between the newly created processes and the existing MPI application
a process attachment mechanism to establish communication between two existing MPI applications even when one of them does not spawn the other
The default placement of the spawned processes uses round robin scheduling. The first spawned process is placed after the last process of the parent group. A specific network fabric combination is selected using the usual fabrics selection algorithm (see I_MPI_FABRICS and I_MPI_FABRICS_LIST for details).
For example, to run a dynamic application, use the following commands:
$ mpirun -n 1 -gwdir <path_to_executable> -genv I_MPI_FABRICS shm:tcp <spawn_app>
In this example, <spawn_app> spawns 4 dynamic processes. If the mpd.hosts file contains the following information:
host1
host2
host3
host4
The original spawning process is placed on host1, while the dynamic processes are distributed as follows: process 1 on host2, process 2 on host3, process 3 on host4, and process 4 back on host1.
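For reference, a minimal sketch of what such a <spawn_app> might look like is shown below. It is illustrative only: spawning the same binary via argv[0] and the 4-process count are assumptions for this example, not requirements of the library.

/* spawn_app.c (illustrative): the original process spawns 4 dynamic
 * processes; the spawned copies detect this via MPI_Comm_get_parent(). */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Comm parent, children;
    int errcodes[4];

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);

    if (parent == MPI_COMM_NULL) {
        /* Original process: spawn 4 dynamic processes running this same
         * binary; they are placed round robin after the parent group. */
        MPI_Comm_spawn(argv[0], MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &children, errcodes);
    } else {
        /* Spawned process: the intercommunicator returned by
         * MPI_Comm_get_parent() connects back to the spawning process. */
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("dynamic process %d started\n", rank);
    }

    MPI_Finalize();
    return 0;
}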
To run a client-server application, use the following commands on the intended server host:
$ mpirun -n 1 -genv I_MPI_FABRICS shm:dapl <server_app> > <port_name>
and use the following commands on the intended client hosts:
$ mpirun -n 1 -genv I_MPI_FABRICS shm:dapl <client_app> < <port_name>
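The redirections above carry the port name from the server to the client: the server writes the name returned by MPI_Open_port() to stdout, and the client reads it from stdin before calling MPI_Comm_connect(). A minimal sketch of such a pair, built here from one source file with an assumed SERVER compile-time macro, might look as follows:

/* server_app / client_app (illustrative): compile with -DSERVER for the
 * server side. Start the server first so the port name file exists. */
#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    char port_name[MPI_MAX_PORT_NAME];
    MPI_Comm intercomm;

    MPI_Init(&argc, &argv);

#ifdef SERVER
    /* Server side: publish the port name on stdout, then wait. */
    MPI_Open_port(MPI_INFO_NULL, port_name);
    printf("%s\n", port_name);
    fflush(stdout);
    MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm);
    MPI_Close_port(port_name);
#else
    /* Client side: read the port name from stdin and connect. */
    if (fgets(port_name, sizeof(port_name), stdin) == NULL)
        MPI_Abort(MPI_COMM_WORLD, 1);
    port_name[strcspn(port_name, "\n")] = '\0';
    MPI_Comm_connect(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm);
#endif

    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}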
To run a simple MPI_COMM_JOIN-based application, use the following commands on the intended server host (the first command starts the server, the second the client):
$ mpirun -n 1 -genv I_MPI_FABRICS shm:ofa <join_server_app> < <port_number>
$ mpirun -n 1 -genv I_MPI_FABRICS shm:ofa <join_client_app> < <port_number>
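Here the <join_server_app> and <join_client_app> each read the TCP port number from stdin, establish an ordinary socket connection, and pass the connected descriptor to MPI_Comm_join(), which turns it into an intercommunicator. A minimal sketch of the server side, with illustrative names and no error handling on the socket calls, might look as follows; the client side would connect() to the same port instead of accept()ing and then make the identical MPI_Comm_join() call.

/* join_server_app.c (illustrative): accept one TCP connection on the
 * port read from stdin, then join with the peer over that socket. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int port, listen_fd, fd;
    struct sockaddr_in addr;
    MPI_Comm intercomm;

    MPI_Init(&argc, &argv);
    if (scanf("%d", &port) != 1)
        MPI_Abort(MPI_COMM_WORLD, 1);

    /* Accept a single TCP connection on the given port. */
    listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons((unsigned short)port);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 1);
    fd = accept(listen_fd, NULL, NULL);

    /* MPI_Comm_join turns the connected socket into an
     * intercommunicator with the process on the other end. */
    MPI_Comm_join(fd, &intercomm);

    MPI_Comm_disconnect(&intercomm);
    close(fd);
    close(listen_fd);
    MPI_Finalize();
    return 0;
}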