OAR script examples

1 - With Intel MPI libraries and Intel compilers

#!/bin/bash
#OAR -n oar_job_name
#OAR -l nodes=1,walltime=48:00:00
#OAR -p nbcores=16 AND visu = 'NO'
#OAR -O result.%jobid%.log
#OAR -E result.%jobid%.log
#OAR --notify mail:user@toto.com

module purge
module load intel/12.1
module load intelmpi/4.0

ulimit -s unlimited

NSLOTS=$(cat $OAR_NODEFILE | wc -l)
NNODES=$(cat $OAR_NODEFILE | sort -u | wc -l)

mpdboot -r oarsh --totalnum=${NNODES} --file=$OAR_NODEFILE

mpdtrace -l

mpiexec -np $NSLOTS ./exe

mpdallexit

exit $?

Explanations:

  • The #OAR lines are the directives for your job: name, resources and walltime, properties, output/error files, email notification (a submission sketch follows this list)
  • Environment modules: first purge, to start from a clean environment, then load the modules you need
  • To avoid stack size issues, add "ulimit -s unlimited"
  • NSLOTS: number of cores allocated to the job
  • NNODES: number of nodes allocated to the job
  • mpdboot: starts the mpd daemons on all nodes corresponding to your OAR request
  • mpiexec: launches the MPI job (with Intel MPI, it is preferable to use mpiexec rather than mpirun)
  • mpdallexit and exit: stop all mpd daemons and clean up once the computation is finished
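
To submit such a script to OAR, a minimal sketch (assuming the script above is saved as job_intelmpi.sh, a file name chosen here only as an example):

chmod +x job_intelmpi.sh
# With -S, oarsub scans the script for the embedded #OAR directives
oarsub -S ./job_intelmpi.sh
# Check the state of your jobs (the job id is printed by oarsub)
oarstat -u $USER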

2 - With Intel compilers and OpenMPI libraries built with the Intel compilers

#!/bin/bash
#OAR -n oar_job_name
#OAR -l nodes=1,walltime=48:00:00
#OAR -p nbcores=16 AND visu = 'NO'
#OAR -O result.%jobid%.log
#OAR -E result.%jobid%.log
#OAR --notify mail:user@toto.com

module purge
module load intel/12.1
module load openmpi/intel/1.6.2

ulimit -s unlimited

NSLOTS=$(cat $OAR_NODEFILE | wc -l)
PREF=$(dirname `which mpirun` | awk -F'/[^/]*$' '{print $1}')

mpirun --prefix $PREF -np $NSLOTS -machinefile $OAR_NODEFILE ./exe

exit $?

Explanations:

  • The #OAR lines are the directives for your job, as in point 1
  • NSLOTS: number of cores allocated to the job
  • PREF: the OpenMPI installation prefix, passed to mpirun via --prefix (see man mpirun)
  • -machinefile: passes the node file provided by the batch scheduler (one line per allocated core)
  • mpirun: with OpenMPI libraries, you must use mpirun and not mpiexec
  • Additional options (a usage sketch follows this list):
    • -bind-to-core: binds each process to a core
    • -bind-to-socket: binds each process to a socket (processor)
    • -bysocket / -bynode / -bycore / -npernode / -report-bindings: see man mpirun
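
As an illustration of these options, the mpirun line of the script above could become the following (a sketch only, to be adapted to the topology of your nodes):

# Bind each MPI process to one core, place ranks socket by socket,
# and print the resulting bindings in the job output
mpirun --prefix $PREF -np $NSLOTS -machinefile $OAR_NODEFILE \
       -bind-to-core -bysocket -report-bindings ./exe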

3 - With GNU compilers and OpenMPI libraries built with the GNU compilers

#!/bin/bash
#OAR -n oar_job_name
#OAR -l nodes=1,walltime=48:00:00
#OAR -p nbcores=16 AND visu = 'NO'
#OAR -O result.%jobid%.log
#OAR -E result.%jobid%.log
#OAR --notify mail:user@toto.com

module purge
module load openmpi/1.6.2

ulimit -s unlimited

NSLOTS=$(cat $OAR_NODEFILE | wc -l)
PREF=$(dirname `which mpirun` | awk -F'/[^/]*$' '{print $1}')

mpirun --prefix $PREF -np $NSLOTS -machinefile $OAR_NODEFILE ./exe

exit $?

The same options as in point 2 apply.
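
The executable ./exe must of course have been compiled with the same toolchain as the one loaded in the script. A minimal sketch for this GNU/OpenMPI case, assuming a C source file my_program.c (a hypothetical name):

module purge
module load openmpi/1.6.2
# mpicc is the OpenMPI compiler wrapper around gcc
mpicc -O2 -o exe my_program.c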

4 - With GNU compilers and MVAPICH2 1.8 libraries built with the GNU compilers

#!/bin/bash
#OAR -n oar_job_name
#OAR -l nodes=1,walltime=48:00:00
#OAR -p nbcores=16 AND visu = 'NO'
#OAR -O result.%jobid%.log
#OAR -E result.%jobid%.log
#OAR --notify mail:user@toto.com

module purge
module load mvapich2/1.8

ulimit -s unlimited

cat $OAR_NODEFILE | sort -u > hosts.tmp
THREADS=16  # Change according to the number of cores you want

# First alternative
mpirun_rsh -hostfile hosts.tmp -n $THREADS ./exe
# Second alternative (use instead of the first one)
# mpiexec -launcher ssh -launcher-exec /usr/bin/oarsh -f hosts.tmp -n $THREADS ./exe

rm -rf hosts.tmp

exit $?
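
The script first deduplicates $OAR_NODEFILE (one line per allocated core) into hosts.tmp (one line per node) before launching the job. A quick way to see both, for example from within an interactive OAR job (a sketch only):

# One line per allocated core
cat $OAR_NODEFILE
# One line per node, i.e. what the script writes to hosts.tmp
sort -u $OAR_NODEFILE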

As always, read the man pages of these commands!

 
