How to submit jobs (Linux)
qsub <arguments> <command/script to run>
Arguments
Argument | Use | Description | Default |
---|---|---|---|
-N <job name> | optional | Sets the job name, which determines how the job is listed in the queue (see Monitoring) and the folder name where output logfiles are saved. | Job<Job ID> |
-q <queue> | strongly recommended | Selects which queue to use. | first queue in the list with free slots |
-l <resource>=<quantity>[,<resource>=<quantity>] | optional | Requests particular resources; the queue is selected accordingly. E.g. h_cpu=<hh:mm:ss> for walltime; h_rss=<size> for RAM (e.g. 100M, 1G). | queue defaults; see queue list |
-o <path to file> | recommended | Redirects standard output to the given file. | <home>/<jobname>.o[<job array number>] |
-e <path to file> | recommended | Redirects standard error to the given file. | <home>/<jobname>.e[<job array number>] |
-V | depends on the job | Passes the current environment (e.g. FSL configuration) to the worker, so that the submitted script does not have to set it up for itself. | environment is not passed |
-v <variable>=<value>[,<variable>=<value>…] | depends on the job | Makes the listed variables, with the specified values, available in the submitted script. | no variables are passed |
-t <start>-<end>:<incr> | depends on the job | Submits a job array and sets SGE_TASK_ID in each task (see the sketch after this table). | not in use |
-b y[es]/n[o] | depends on the job | Allows submitting a binary rather than a script. | not allowed |
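For example, -t and -v can be combined to process many subjects with a single submission. The following is a minimal sketch, not one of the workshop scripts; the subject list subjects.txt (one subject ID per line) and the batch script process_subject.sh are hypothetical names:

# Submit a job array with 10 tasks, numbered 1 to 10
qsub -q longq -t 1-10 -v SUBJLIST=/path/to/subjects.txt process_subject.sh

where process_subject.sh uses SGE_TASK_ID to select its subject:

#!/bin/bash
# SGE sets SGE_TASK_ID to this task's index; use it to pick one line of the list
SUBJID=$(sed -n "${SGE_TASK_ID}p" "${SUBJLIST}")
echo "Processing ${SUBJID}"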
It is strongly recommended to specify required resources, either by selecting a queue with the appropriate resource set (-q) or by specifying the requirements manually (-l). For example, if you know that your job runs for 10 hours and requires 4GB of RAM (typical for FreeSurfer recon-all), then you can either
- select the queue longq, which allocates runtime up to 24 hours and RAM up to 4GB:
qsub -q longq ...
or
- request 10 hours of runtime and 4GB of RAM:
qsub -l h_cpu=10:00:00,h_rss=4G ...
Either way, the job will be submitted to longq. Selecting the queue directly may be easier, but it requires remembering how each queue is configured; specifying the requirements explicitly lets the scheduler manage resources more efficiently.
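If you do not remember how the queues are configured, Grid Engine can display their limits; both commands below are standard SGE utilities (shown here for longq):

qconf -sql                                # list the names of all queues
qconf -sq longq | grep -E 'h_cpu|h_rss'   # show the walltime and RAM limits of longq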
Example
Run FreeSurfer segmentation from DICOM source (based on /MRIWork/Workshop/Material/1_Cluster/freesurfer_main.sh)
# Configuration
source /usr/local/apps/psycapps/config/freesurfer_bash
OUTDIR=/MRIWork/MRIWork09/cluster/FS; # Change this according to your account and make sure the folder exists

# Submit job
RAWDIR=/MRIWork/Workshop/Material/data/201706131237_19810218EIJO/Series_002_MPRAGE
SUBJID=EGO
qsub -q longq \
    -o ${OUTDIR}/FS_${SUBJID}.out -e ${OUTDIR}/FS_${SUBJID}.err \
    -V -v OUTDIR=${OUTDIR},RAWDIR=${RAWDIR},SUBJID=${SUBJID} \
    /MRIWork/Workshop/Material/1_Cluster/freesurfer_script.sh;
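The submitted batch script itself is provided in /MRIWork/Workshop/Material/1_Cluster. For orientation only, a minimal script of this kind might look like the sketch below; it assumes the FreeSurfer environment is inherited via -V and that recon-all is seeded with the first DICOM file of the series:

#!/bin/bash
# OUTDIR, RAWDIR and SUBJID are supplied by qsub -v (see the command above)
export SUBJECTS_DIR=${OUTDIR}
# Seed recon-all with the first DICOM file of the series and run all stages
FIRSTDCM=$(ls "${RAWDIR}" | head -n 1)
recon-all -i "${RAWDIR}/${FIRSTDCM}" -s "${SUBJID}" -all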
See more examples in /MRIWork/Workshop/Material/1_Cluster:
- Run FreeSurfer segmentation from DICOM source
  - freesurfer_main.sh — example of how to submit a single job (to run)
  - freesurfer_script.sh — batch script containing the actual commands (to be submitted)
- Run FSL's BET from DICOM source
  - bet_main.sh — example of how to submit multiple jobs, in a loop or in a job array (to run; see the sketch after this list)
  - bet_script.sh — batch script containing the actual commands (to be submitted in a loop)
  - bet_script_array.sh — batch script containing the actual commands (to be submitted in a job array)
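As a rough illustration of the loop pattern in bet_main.sh (the real script may differ), one job per series could be submitted as follows; the data path pattern and the queue choice are assumptions based on the example above:

#!/bin/bash
# Hypothetical loop: submit one BET job per MPRAGE series found
for RAWDIR in /MRIWork/Workshop/Material/data/*/Series_002_MPRAGE; do
    SUBJID=$(basename "$(dirname "${RAWDIR}")")
    qsub -q longq -V -v RAWDIR=${RAWDIR},SUBJID=${SUBJID} \
        /MRIWork/Workshop/Material/1_Cluster/bet_script.sh
done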