
Run script example for Stallo


runscript-example.sh — text/x-sh, 7Kb

File contents

#!/bin/bash
#
#    If you want to use /bin/csh, you have to change things below!
#
#  - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
#
#  April 22nd, 2013
#
#  - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
#
#   ____________________________________________________________
#  |                                                            | 
#  | Set initial information for the Queuing system             | 
#  | ==============================================             | 
#  |                                                            | 
#  | All PBS directives (the lines starting with #PBS) below    |
#  | are optional and can be omitted, resulting in the use of   | 
#  | the system defaults.                                       | 
#  |                                                            | 
#  | Please send comments and questions to support-uit@notur.no | 
#  |                                                            | 
#  |____________________________________________________________|
#
#  - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
#
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
#
#PBS -lnodes=2:ppn=16
#
#    Number of nodes and CPUs per node (ppn) requested; here we ask for
#    a total of 32 CPUs.  This is only necessary for parallel jobs.  It
#    also creates a file, accessible through the environment variable
#    $PBS_NODEFILE in this script, that can be used by mpirun etc., see
#    below.
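#
#    If you wanted a single-node job instead, the request would, for
#    example, be (kept indented inside this comment so the queuing
#    system does not read it as a live directive):
#
#       #PBS -lnodes=1:ppn=16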
#
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
#
#PBS -lwalltime=12:00:00
#
#    Expecting to run for up to 12 hours.
#
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
#
#PBS -lpmem=2000MB
#
#    Expecting to use 2000 megabytes of memory per process.  This is
#    different from what we do on SMPs, where we supply total memory for
#    the whole job.  Remark: According to the queuing system, most of
#    the nodes have 32000MB of memory, which is slightly less than 32GB.
#    Using 2000MB therefore lets the queuing system pack 16 processes on
#    one node (16 x 2000MB = 32000MB fits exactly), while asking for 2GB
#    (2048MB, so 16 x 2048MB = 32768MB) will get you only 15 processes
#    per node.
#
#    The pmem parameter is not enforced; it is only a hint that helps
#    the scheduler select nodes with enough free memory.  If you want a
#    hard limit, use pvmem=XMB instead; then your application will be
#    killed if it exceeds the limit.
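#
#    For example, to make 2000MB per process a hard limit (again
#    indented so it stays inactive):
#
#       #PBS -lpvmem=2000MB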
#
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
#
#PBS -m abe
#
#    The queuing system will send an email on job Abort, Begin, End
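#
#    If you want the mail sent to a specific address rather than your
#    default one, PBS also has a -M option (the address below is only a
#    placeholder):
#
#       #PBS -M your.name@example.com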
#
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
#
#PBS -A nn9999k
#
#    To specify which account, nn9999k, you want to use.
#    Only necessary if you have multiple accounts.
#
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
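#
#    With the directives in place, the script is typically submitted and
#    monitored from the login node like this:
#
#       qsub runscript-example.sh
#       qstat -u $USER
#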





#   ______________________________________________________________
#  |                                                              | 
#  |                                                              |
#  |  Setting up and running your job on Stallo                   |
#  |  ========================================                    |
#  |                                                              |
#  |  We are now ready to begin running commands.                 |
#  |                                                              |
#  |  This job script is run as a regular shell script on the     |
#  |  first node assigned to this job, hereafter called Mother    |
#  |  Superior.                                                   |
#  |______________________________________________________________|
#
#
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

#    Setting up the work area.
#
#    On Stallo most users will, and should, use the global work area,
#    /global/work.  However, if you for some reason are using the local
#    work area, /local/work, you must comment out all lines in this
#    script referring to the global area and uncomment the lines
#    referring to the local work area (usually the "pbsdsh -u ..."
#    lines).
#
#    The two lines below create a unique work area for this job in your
#    private folder on the global work area.

workdir=/global/work/$USER/$PBS_JOBID
mkdir -p $workdir

#workdir=/local/work/$USER/$PBS_JOBID
#pbsdsh -u mkdir -p $workdir

#    The two commented lines above create a unique work area on local
#    disk.  Never assume that /local/work/username exists: if a compute
#    node crashes, it is likely to reinstall itself and scrap the whole
#    local work area.
#
#    The "pbsdsh" command get the hostlist for this job directly from the 
#    queuing system and using "-u" lets you execute the same command in 
#    parallel on all the nodes belonging to this job. It is much faster 
#    than looping over the host list and ssh-ing to every node.
#
#    If you for some reason needs the host list it is in the file pointed
#    to by $PBS_NODEFILE.
#
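#    As a small illustration (the variable name NPROCS is our own), you
#    could count the CPUs allocated to this job and list the node names:
#
#NPROCS=$(wc -l < $PBS_NODEFILE)
#echo "Got $NPROCS CPUs on nodes: $(sort -u $PBS_NODEFILE | tr '\n' ' ')"
#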
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

cd $PBS_O_WORKDIR               

#    cd to the directory the job was submitted from.
#
#    Don't confuse this with the work directory mentioned above.
#    It isn't really necessary here, but can be convenient if you want
#    to copy many input files to the /local/work area on the compute
#    nodes.
#
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
#

executable=$PBS_O_WORKDIR/mpi_test.x
cp $executable $workdir

#pbsdsh -u cp $executable $workdir

#    Copy the executable to the work area.  If you use the local work
#    area, use the commented "pbsdsh" line instead; after it every
#    compute node has its own copy of the executable, in this case
#    mpi_test.x.
#
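#    If the job needs input files as well, they are copied the same way,
#    for instance (input.d is just a made-up file name):
#
#cp $PBS_O_WORKDIR/input.d $workdir
#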
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
#
#    Insert your job commands here, for instance:

cd $workdir 

mpirun ./mpi_test.x

#    Run your MPI job by launching it with "mpirun".  This works when
#    using OpenMPI.  If you are using some other MPI distribution you
#    may have to use a different launcher.
#
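#    On a different MPI stack, a rough sketch (assuming its mpirun
#    understands the classic -np and -machinefile options) would be:
#
#NPROCS=$(wc -l < $PBS_NODEFILE)
#mpirun -np $NPROCS -machinefile $PBS_NODEFILE ./mpi_test.x
#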
#    Don't put the job commands in the background, e.g. by adding & at
#    the end.  Doing so will make the job escape the queuing system, 
#    create havoc with the scheduling and result in you getting angry
#    email from the sysadmins.
#
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
#
#    After the job has finished it's time to clean up.
#    But be careful: if you are in the wrong directory you might shoot
#    yourself in the foot.  Be sure to copy any important results to
#    your home directory before doing this.  For instance:
#

cp data.d $PBS_O_WORKDIR

#
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
#
#    Then remember to cd out of the directory before you remove it.

cd /tmp
rm -rf $workdir
#pbsdsh -u rm -rf $workdir

#    NEVER use "rm -rf *" !!!
#
#    If you for some reason are in the wrong place, say, in your home
#    directory, you will lose data.  90% of all restores we do from
#    backup are because of people having this in their job scripts.
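#
#    A defensive pattern (our own suggestion, not something the queuing
#    system requires): refuse to delete unless $workdir actually points
#    into one of the work areas.
#
#case "$workdir" in
#    /global/work/*|/local/work/*) rm -rf "$workdir" ;;
#    *) echo "Refusing to delete '$workdir'" >&2 ;;
#esac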
#
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
#
#    The job ends when the script exits.
#
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
