services:computing:hpc — last modified 2024/03/06 13:30 by calucci
  * **''long1''** and **''long2''**: max 8 nodes, max 48h, max 6 concurrently running jobs per user
  * **''gpu1''** and **''gpu2''**: max 4 nodes, max 12h
  * **''power9''**: max 4 nodes, max 24h
<note tip>Please note that hyperthreading is enabled on all nodes (it was disabled on old Ulysses). If you **do not** want to use hyperthreading, the ''%%--hint=nomultithread%%'' option to srun/sbatch will help.</note>
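For instance, a sketch of a batch script that requests one ''long1'' node without hyperthreading (the program name is a placeholder, not from this page):

```shell
#!/bin/bash
# Hypothetical example: one long1 node, one task per physical core.
#SBATCH --partition=long1
#SBATCH --nodes=1
#SBATCH --time=48:00:00
#SBATCH --hint=nomultithread   # skip hardware threads, use physical cores only

srun ./my_program   # placeholder executable
```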
===== Simplest possible job =====
This is a single-core job with default time and memory limits (1 hour and 0.5GB).
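A minimal sketch of such a script (the partition name ''regular1'' is an assumption, not stated in this excerpt):

```shell
#!/bin/bash
# Single-core job relying on the defaults: 1 hour walltime, 0.5GB memory.
#SBATCH --partition=regular1   # assumed partition name
#SBATCH --ntasks=1

echo "Hello from $(hostname)"
```

Submit it with ''sbatch job.sh'' and check its state with ''squeue -u $USER''.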
<note warning>Please note that MPI jobs are only supported if they allocate all available cores/threads on each node (i.e. 20c/40t on *1 partitions and 32c/64t on *2 partitions). In this context, //not supported// means that jobs using fewer cores/threads than available may or may not work, depending on how the cores //not// allocated to your job are used.</note>
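As a sketch of a supported configuration (node count, walltime, partition name, and binary are illustrative), an MPI job on a *1 partition should request all 40 hardware threads on each node:

```shell
#!/bin/bash
# MPI job allocating every hardware thread on each node (20c/40t per *1 node).
#SBATCH --partition=regular1   # assumed *1 partition name
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40   # all cores/threads on each node, as required
#SBATCH --time=02:00:00

srun ./mpi_app   # placeholder MPI executable
```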
+ | |||
+ | ==== Access to hardware-based performance counters ==== | ||
+ | |||
+ | Access to hardware-based performance counters is disabled by default for security reasons. It can be enabled on request, only for node-exclusive jobs (i.e. for allocations where a single job is allowed to run on each node), use ''sbatch -C hwperf --exclusive ...'' | ||
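A sketch of such a node-exclusive job (only the ''-C hwperf --exclusive'' flags come from this page; the ''perf'' invocation and program name are illustrative):

```shell
#!/bin/bash
# Node-exclusive job with hardware performance counters enabled on request.
#SBATCH --constraint=hwperf    # long form of -C hwperf
#SBATCH --exclusive            # a single job per node, as required for hwperf
#SBATCH --nodes=1

srun perf stat ./my_program    # illustrative: read the counters via perf
```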
+ | |||
===== Filesystem Usage and Backup Policy =====
Daily backups are taken of ''/home'', while no backup is available for ''/scratch''. If you need to recover some deleted or damaged file from a backup set, please write to [[helpdesk-hpc@sissa.it]]. Daily backups are kept for one week, a weekly backup is kept for one month, and monthly backups are kept for one year.
Due to their inherent volatility, some directories are excluded from the backup set. At this time, the list of excluded directories includes only ''/home/$USER/.cache'' and ''/home/$USER/.singularity/cache''.
===== Job E-Mail =====