With Intel-MPI I can pin the MPI processes started by mpirun to certain cores on a node.
For example with 24 cores and Intel-MPI:
mpirun -np 12 -genv I_MPI_PIN_PROCESSOR_LIST=0-11 ./some.exe &
mpirun -np 12 -genv I_MPI_PIN_PROCESSOR_LIST=12-23 ./other.exe &
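(As a side note, the resulting pin map can be checked by raising the Intel MPI debug level, e.g. I_MPI_DEBUG=4, which should print the pinning at startup; the exact output depends on the Intel MPI version:)
mpirun -np 12 -genv I_MPI_DEBUG=4 -genv I_MPI_PIN_PROCESSOR_LIST=0-11 ./some.exe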
With OpenMPI there is the option --bind-to with one of these arguments: none, hwthread, core, l1cache, l2cache, l3cache, socket, numa, board.
I noticed that --bind-to socket binds process 0 to socket 0, process 1 to socket 1, and so on. This is bad for my case: for best communication between the some.exe processes, all of them should be on one socket, and the other.exe processes should be on the other socket.
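A quick way to see where each rank lands is mpirun's --report-bindings option (illustrative invocation; the exact report format depends on the Open MPI version):
mpirun -np 4 --bind-to socket --report-bindings ./some.exe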
Is there no equivalent pin option in OpenMPI?
With
mpirun --cpu-set 0,1,2,3,4,5,6,7,8,9,10,11 ./some.exe
the bindings are reported as
MCW rank 0 is not bound (or bound to all available processors)
MCW rank 1 is not bound (or bound to all available processors)
and so on.

You can run
mpirun --tag-output ... grep Cpus_allowed_list /proc/self/status
to confirm how tasks are pinned. Add the --map-by core option. Use --map-by core --bind-to socket instead if you only want to pin on sockets instead of cores.

The mpirun "instances" are independent and have no knowledge of each other, so yes, both jobs will end up time sharing. Consider using a resource manager such as Slurm in order to prevent this.
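Putting those pieces together, a sketch of the original two-job example with Open MPI could look like this (untested; it assumes --cpu-set accepts the explicit core lists shown and that --map-by core then binds each rank to one core of its set):
mpirun -np 12 --cpu-set 0,1,2,3,4,5,6,7,8,9,10,11 --map-by core --report-bindings ./some.exe &
mpirun -np 12 --cpu-set 12,13,14,15,16,17,18,19,20,21,22,23 --map-by core --report-bindings ./other.exe &
Since the two mpirun instances still know nothing about each other, the core lists have to be kept disjoint by hand, or the jobs should be launched through a resource manager such as Slurm as suggested above.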