Overview
This cluster uses the SLURM job scheduler. Each partition corresponds to a set of compute nodes and a specific policy, such as maximum walltime, GPU and memory availability, and access restrictions.
| Partition | Nodes (Total) | CPUs/Node | GPU Support | Max Walltime | Access |
|---|---|---|---|---|---|
| day | 90 | 8–64 | Some nodes | 1 day | All users |
| week | 90 | 8–64 | Some nodes | 7 days | All users |
| infinite | 90 | 8–64 | Some nodes | 30 days | All users |
| serial | 33 | 8 | None | 60 days | All users |
| gpuq | 12 | 8–12 | Yes | 30 days | All users |
| lowpr | 65 | 4–72 | Some nodes | 15 days | All users |
| privarko | 1 | 72 | None | Unlimited | arko group only |
| privharsh | 1 | 64 | None | Unlimited | harsh group only |
| privbm | 4 | 32–64 | None | Unlimited | bhaskar group only |
| privse | 7 | 16–24 | None | Unlimited | se group only |
| privtps | 11 | 12–24 | None | Unlimited | tulika group only |
| privsbs | 9 | 4–20 | None | Unlimited | sbs group only |
| privpr | 2 | 12–20 | Some nodes | Unlimited | pr group only |
| privscee | 8 | 8–16 | Some nodes | Unlimited | scee group only |
| privlqd | 9 | 12–24 | None | Unlimited | lqd group only |
| privamp | 3 | 12–20 | None | Unlimited | hv group only |
| privmt | 2 | 12 | None | Unlimited | mt group only |
| privpku | 4 | 24 | None | Unlimited | pku group only |
| privsp | 3 | 20 | None | Unlimited | sp group only |
| privpkr | 1 | 20 | None | Unlimited | pkr group only |
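The same information can be queried directly from SLURM. For example, the standard sinfo and scontrol commands below show each partition's time limit, node count, and generic resources (GPUs):
sinfo -o "%P %l %D %G"          # partition, time limit, node count, generic resources (GPUs)
scontrol show partition gpuq    # full configuration of a single partition, e.g. gpuq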
How to Select a Partition
Add the following option to your job submission script, replacing <partition_name> with one of the partitions listed above:
#SBATCH --partition=<partition_name>
For example, to submit myjob.sh to the week partition (the partition can also be given on the command line):
sbatch --partition=week myjob.sh
For GPU jobs on a GPU-capable partition, also request GPUs, replacing <count> with the number of GPUs needed:
#SBATCH --gres=gpu:<count>
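Putting these options together, a minimal single-GPU submission script for the gpuq partition might look like the sketch below; the job name, the cuda module, and ./gpu_program are placeholders for your own setup:
#!/bin/bash
#SBATCH --job-name=gpu-test          # placeholder job name
#SBATCH --partition=gpuq             # GPU partition from the table above
#SBATCH --gres=gpu:1                 # request one GPU
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=2-00:00:00            # 2 days, within the 30-day gpuq limit

module load cuda                     # placeholder: load whatever environment your program needs
./gpu_program
Save it as, for example, gpujob.sh and submit it with sbatch gpujob.sh.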
Recommended Practices
For long jobs, make sure your application checkpoints its progress so that work is not lost if the job is killed when its walltime expires or is interrupted on the lowpr partition (see the sketch at the end of this section).
Always check node and GPU availability with sinfo before job submission.
Verify access rights for private partitions before use.
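A common checkpointing pattern is to ask SLURM to send a warning signal shortly before the walltime expires and to forward that signal to the application. The sketch below is only an illustration: ./my_simulation is a hypothetical program assumed to write a checkpoint when it receives SIGUSR1.
#!/bin/bash
#SBATCH --partition=week
#SBATCH --time=7-00:00:00
#SBATCH --signal=B:USR1@600          # ask SLURM to signal this batch script 10 minutes before the limit

./my_simulation &                    # hypothetical long-running program, started in the background
APP_PID=$!
trap 'kill -USR1 "$APP_PID"' USR1    # forward the warning so the program can write its checkpoint
wait "$APP_PID"
wait "$APP_PID"                      # wait again in case the first wait was interrupted by the signal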