Submitting Jobs at the HPC Facility (Slurm)

  • Jobs on the HPC cluster must be submitted using Slurm job scripts.
  • Under no circumstances should you run your job directly on the head node or on any compute node.
  • All jobs must be submitted through the scheduler; never run them directly.




Job Scripts

Model scripts for submitting jobs (a minimal serial example is sketched after this list):

Serial job: Serial_script.sh
Parallel job: OpenMPI_script.sh, OpenMP_script.sh
GPU job: Cuda_script.sh
Gaussian: gaus_script.sh
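
These model scripts are available on the cluster itself. As a rough sketch of what a minimal serial script might contain (the partition name, time limit, and program name below are illustrative assumptions, not the actual contents of Serial_script.sh):

#!/bin/bash
#SBATCH --job-name=serial_job        # name shown in squeue
#SBATCH --output=serial_%j.out       # %j expands to the job ID
#SBATCH --ntasks=1                   # a single serial task
#SBATCH --time=01:00:00              # illustrative wall-time limit
#SBATCH --partition=day              # check sinfo -s for actual partition names

# run the serial program from the submission directory
./my_serial_program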





Submitting Jobs

sbatch <your_script.sh>: Submit a job to the default partition
sbatch -p <partition_name> <your_script.sh>: Submit a job to a specific partition
srun --pty bash: Submit an interactive job (you will get a shell)
scancel <jobid>: Cancel a job
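
A typical batch workflow, as a sketch (the script name, partition, and job ID below are illustrative):

sbatch -p day my_job.sh        # scheduler replies: Submitted batch job 123456
squeue -u $USER                # confirm the job is pending or running
scancel 123456                 # cancel it if something went wrong

For interactive work, the same partition flag applies, e.g. srun -p day --pty bash.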





Checking Job Status

squeue -u <username>: List jobs for a particular user
squeue -p <partition_name>: List jobs in a specific partition
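
The default squeue output looks roughly like the following (job ID, names, and node list are illustrative); the ST column is the job state, e.g. R for running and PD for pending:

JOBID   PARTITION  NAME          USER      ST  TIME   NODES  NODELIST(REASON)
123456  day        gaussian_job  username  R   12:34  1      node01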





Other Job-Related Commands

scancel <jobid>: Stop a running job
scontrol show nodes: List status of all compute nodes
sinfo -s: List available partitions and node info
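
sinfo -s prints one summary line per partition; the NODES(A/I/O/T) column counts allocated/idle/other/total nodes. The names and numbers below are illustrative only:

PARTITION  AVAIL  TIMELIMIT   NODES(A/I/O/T)  NODELIST
day*       up     1-00:00:00  10/4/0/14       node[01-14]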





Instructions for Gaussian Runs

Gaussian users need access to the Gaussian package (request at hpc@iitmandi.ac.in).
Login with X forwarding:
ssh -Y username@10.8.1.17
Add to your ~/.bashrc:

export PATH=$PATH:/opt/soft/share/g09
export PATH=$PATH:/opt/soft/share/g09/gv
export GAUSS_EXEDIR=/opt/soft/share/g09
export g09root="/opt/soft/share/g09"
export GV_DIR="${g09root}/gv"
To launch the GaussView graphical interface:
gv
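
After editing ~/.bashrc, reload it and confirm that the Gaussian environment is picked up (the expected paths are simply those exported above):

source ~/.bashrc
which g09            # should resolve to the g09 executable under /opt/soft/share/g09
echo $GAUSS_EXEDIR   # should print /opt/soft/share/g09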





Gaussian Submit Script Example

The number of threads requested in the Gaussian input file (%nprocshared) must match the number of CPUs requested from Slurm. Example for 4 threads:

#!/bin/bash
#SBATCH --job-name=gaussian_job
#SBATCH --output=gaussian_%j.out
#SBATCH --partition=day
#SBATCH --ntasks=1                   # g09 is shared-memory, so one task
#SBATCH --cpus-per-task=4            # must match %nprocshared in input.com

srun g09 input.com
Submit using:
sbatch gaus.sh
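
For reference, the matching Link 0 lines at the top of input.com would request the same four threads (the memory and checkpoint values here are illustrative):

%nprocshared=4
%mem=4GB
%chk=gaussian_job.chk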





Module Load Instructions

Before running any job, load the environment modules it requires. For example:

module avail          # List all available modules
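
The usual load/list cycle looks like the following and can also be placed at the top of a job script before the srun line (the module name gcc is illustrative; use module avail to see what is actually installed):

module load gcc       # load a module by name
module list           # show currently loaded modules
module purge          # unload all loaded modules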





Mail your suggestions/comments to it_helpdesk@iitmandi.ac.in