The following sections offer Slurm job script templates and descriptions for various use cases on CARC high-performance computing (HPC) clusters. If you're not familiar with the Slurm job scheduler or submitting jobs, please see the guide for Running Jobs.

Note: The Slurm option --cpus-per-task refers to logical CPUs. On CARC clusters, compute nodes have two sockets with one physical multi-core processor per socket, such that 1 logical CPU = 1 core = 1 thread.

Single-threaded jobs

A single-threaded (or single-core or serial) job uses 1 CPU (core/thread) on 1 compute node. There is 1 thread per CPU, so only 1 CPU is needed for a single-threaded job. This is the most basic job that can be submitted, but it does not fully utilize the compute resources available on CARC HPC clusters.

An example job script:

```bash
#!/bin/bash

#SBATCH --account=<account_id>
#SBATCH --partition=main
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=8G
#SBATCH --time=1:00:00

module purge
module load gcc/11.3.0
module load python/3.9.12

python script.py
```

Multi-threaded jobs

A multi-threaded (or multi-core or multi-process) job uses multiple CPUs (cores/threads) with shared memory on 1 compute node. The number of CPUs and amount of memory that can be used varies across compute nodes. There is 1 thread per CPU, so multi-threaded jobs require more than 1 CPU. The --cpus-per-task option requests the specified number of CPUs. This is a common use case as it enables basic parallel computing.

An example job script:

```bash
#!/bin/bash

#SBATCH --account=<account_id>
#SBATCH --partition=main
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
#SBATCH --mem=32G
#SBATCH --time=1:00:00

module purge
module load gcc/11.3.0
module load python/3.9.12

python script.py
```