Model scripts for submitting jobs:
Serial job: Serial_script.sh
Parallel job: OpenMPI_script.sh, OpenMP_script.sh
GPU job: Cuda_script.sh
Gaussian: gaus_script.sh
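These model scripts follow the standard Slurm batch-script layout. As a rough sketch of what a serial submission script contains (the program name is a placeholder and the partition should be chosen from sinfo -s; consult Serial_script.sh on the cluster for the exact template):

#!/bin/bash
#SBATCH --job-name=serial_job        # name shown in squeue
#SBATCH --output=serial_%j.out       # output file (%j expands to the job ID)
#SBATCH --ntasks=1                   # a serial job needs only one task
#SBATCH --partition=day              # placeholder partition name

srun ./my_program                    # run the executable on the allocated core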
sbatch <your_script.sh>                      : Submit a job to the default partition
sbatch -p <partition_name> <your_script.sh>  : Submit a job to a specific partition
srun --pty bash                              : Start an interactive job (you will get a shell)
scancel <jobid>                              : Cancel a job
squeue -u <username>                         : List jobs for a particular user
squeue -p <partition_name>                   : List jobs in a specific partition
scontrol show nodes                          : Show the status of all compute nodes
sinfo -s                                     : List available partitions and node information
Gaussian users need access to the Gaussian package (request at hpc@iitmandi.ac.in).
Login with X forwarding:
ssh -Y username@10.8.1.17
Add to your ~/.bashrc:
export PATH=$PATH:/opt/soft/share/g09
export PATH=$PATH:/opt/soft/share/g09/gv
export GAUSS_EXEDIR=/opt/soft/share/g09
export g09root="/opt/soft/share/g09"
export GV_DIR="${g09root}/gv"

To access GaussView:
gv
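After editing ~/.bashrc, a quick sanity check (not part of the original instructions) is to reload the file and confirm the executables are on your PATH:

source ~/.bashrc    # pick up the new environment variables in the current shell
which g09           # should resolve to the Gaussian executable under /opt/soft/share/g09
which gv            # should resolve to the GaussView launcher under /opt/soft/share/g09/gv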
The number of threads requested in the Gaussian input file (%nprocshared) must match the number of CPUs requested from Slurm.
Example for 4 threads:
#SBATCH --cpus-per-task=4
#SBATCH --job-name=gaussian_job
#SBATCH --output=gaussian_%j.out
#SBATCH --partition=day

srun g09 input.com

Submit using:
sbatch gaus.sh
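For completeness, a minimal input.com header whose %nprocshared matches the four CPUs requested above could look like this (method, basis set, memory, and geometry are placeholders, not recommendations):

%nprocshared=4
%mem=4GB
#p B3LYP/6-31G(d) opt

Water geometry optimization (placeholder example)

0 1
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200
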
Before running any job, load the required environment modules. Examples:
module avail # List all available modules
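Beyond listing what is installed, a job script (or interactive session) normally loads the modules it needs first; the module name below is a placeholder:

module load openmpi     # load a module required by the job (pick a name from module avail)
module list             # show the modules currently loaded
module unload openmpi   # unload it when no longer needed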
Mail your suggestions/comments to it_helpdesk@iitmandi.ac.in