High Performance Computing: Ulysses v2

The Ulysses cluster v2 is available for scientific computation to all SISSA users. If you have an active SISSA account, please write to helpdesk-hpc@sissa.it in order to have it enabled on Ulysses.

SSH access to Ulysses v2 is provided via the login nodes at frontend1.hpc.sissa.it or frontend2.hpc.sissa.it, from the SISSA network or from the SISSA VPN. More access options might be made available in due time.
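
For example, from a machine on the SISSA network (or connected to the SISSA VPN) you can log in with any standard SSH client; your_username below is a placeholder for your SISSA account name:

ssh your_username@frontend1.hpc.sissa.it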

Available compute nodes include:

  • (old) IBM nodes: Xeon E5-2680 v2 (2 sockets, 10 cores per socket, 2 threads per core), most of them with 40GB RAM, a subset with 160GB, a much smaller subset with 320GB
  • (old) IBM GPU nodes: Xeon E5-2680 v2 (same as above), 64GB, 2 Tesla K20m
  • (new) HP nodes: Xeon E5-2683 v4 (2 sockets, 16 cores per socket, 2 threads per core), 64GB RAM
  • (new) HP GPU nodes: same as above, with 2 Tesla P100

All nodes are connected to an Infiniband QDR fabric.

The software tree is the same as on the SISSA Linux workstations, with the same Lmod modules system (the only exception being desktop-oriented software packages).
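
For example, the software tree can be browsed and loaded with the usual Lmod commands (the package name below is only a placeholder, check module avail for what is actually installed):

module avail          # list the software available on Ulysses
module load gcc       # load a package (placeholder name)
module list           # show the currently loaded modules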

A small number of POWER9-based nodes are also available (2 sockets, 16 cores, 4 threads per core; 256GB RAM) with 2 or 4 Tesla V100. Please note that you cannot run x86 code on POWER9. For an interactive shell on a P9 machine, please type p9login on frontend[12].

The queue system is now SLURM (https://slurm.schedmd.com/documentation.html), so if you were used to TORQUE on the old Ulysses you will need to modify your job scripts somewhat.

Available partitions (or “queues” in TORQUE old-speak) include:

  • regular1 (old nodes) and regular2 (new nodes): max 16 nodes, max 12h
  • wide1 and wide2: max 32 nodes, max 8h
  • long1 and long2: max 8 nodes, max 48h
  • gpu1 and gpu2: max 4 nodes, max 12h
  • power9: max 2 nodes, max 24h

Please note that hyperthreading is enabled on all nodes (it was disabled on old Ulysses). If you do not want to use hyperthreading, the --hint=nomultithread option to srun/sbatch will help.
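
For example, you can check the partition limits and submit to one of the new partitions without hyperthreading as follows (a sketch; adapt the partition and script name to your case):

sinfo -p regular1,regular2                            # node counts and time limits
sbatch -p regular2 --hint=nomultithread myscript.sh   # use only one thread per physical core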

Job scheduling is fair-share based, so the scheduling priority of your jobs depends on the waiting time in the queue AND on the amount of resources consumed by your other jobs. If you have an urgent need to start a single job ASAP (e.g. for debugging), you can use the fastlane QoS, which will give your job a substantial priority boost (to prevent abuse, only one job per user can use fastlane at a time, and you will “pay” for the priority boost with a lower priority for your subsequent jobs).
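
For example, to push a single debugging job through fastlane (remember that only one job per user can use it at a time):

sbatch -p regular1 --qos=fastlane myscript.sh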

You should always use the --mem or --mem-per-cpu Slurm options to specify the amount of memory needed by your job. This is especially important if your job doesn't use all available CPUs on a node (40 threads on IBM nodes, 64 on HP nodes); failing to do so will negatively impact scheduling performance.
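
A minimal sketch of a job that uses only part of a node and declares its memory needs explicitly (the 8 tasks and 8GB are example values; ./my_program is a placeholder for your executable):

#!/bin/bash
#
#SBATCH -N1
#SBATCH -n8
#SBATCH --mem=8G               # total memory for the job on the node
# or, equivalently: #SBATCH --mem-per-cpu=1G

./my_program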

While you can submit a job using only #SBATCH --ntasks=..., it is recommended that you explicitly request a number of nodes and tasks per node (usually, all tasks that can fit in a given node) for best performance. Otherwise, your job can end up “spread” over more nodes than necessary, while sharing resources with other unrelated jobs on each node. E.g. on regular1, -N2 -n80 will allocate all threads on 2 nodes, while -n80 alone can spread them over as many as 40 different nodes.
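
For example, the -N2 -n80 case above can be written as #SBATCH directives in the job script (./my_program is a placeholder):

#!/bin/bash
#
#SBATCH -p regular1
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40   # 2 x 40 = 80 tasks, filling both nodes

srun ./my_program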

Simplest possible job

This is a single-core job with the default time and memory limits (1 hour and 0.5GB):

$ cat myscript.sh
#!/bin/bash
#
#SBATCH -N1
#SBATCH -n1

echo "Hello, World!"

$ sbatch -p regular1 myscript.sh
Submitted batch job 730384

$ cat slurm-730384.out
Hello, World!

Please note that MPI jobs are only supported if they allocate all available cores/threads on each node (i.e. 20c/40t on *1 partitions and 32c/64t on *2 partitions). In this context, “not supported” means that jobs using fewer cores/threads than available may or may not work, depending on how the cores not allocated to your job are used.
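
A minimal MPI job sketch for the *2 partitions, allocating all threads on each of two nodes (./my_mpi_program is a placeholder for your own binary):

#!/bin/bash
#
#SBATCH -p regular2
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=64   # whole nodes: 32 cores / 64 threads each

srun ./my_mpi_program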

/home and /scratch are both general-purpose filesystems; they are based on the same hardware and provide the same performance level. When you first log in on Ulysses, /home/$USER comes pre-populated with a small number of files that provide some reasonable configuration defaults. At the same time, /scratch/$USER is created for you, where you have write permission.

Default quotas are 200GB on /home and 5TB on /scratch; short-term users (e.g. accounts created for workshops, summer schools and other events, that usually expire in a matter of weeks) are usually given smaller quotas in agreement with workshop organizers. On special and motivated request a larger quota can be granted: please write to helpdesk-hpc@sissa.it, Cc: your supervisor (if applicable) with your request; please note that the storage resource is limited, and not every request can be granted.

Frequently Asked Question: what does “quota” mean, exactly?
  • Quota is the traditional Unix word for an administrative limit imposed upon the usage of a certain resource. In this context, a “5TB quota” means that you will not be allowed to write more than that in a given filesystem, even if free space is available. It does not mean that those 5TB are reserved for you and are always available. At this time, filesystem quotas are enforced only on the total used space; it is possible that in the future quotas will be enforced on the number of files too.
  • A quota command is available and will give you a summary of your filesystem usage.
  • Please note that filesystem space is allocated in blocks, so the actual space usage can differ from the apparent file size. Currently the smallest possible block allocation is 4kB, so when computing quota usage file sizes are rounded up to the next 4kB multiple. “ls -s …” will report the actual allocated space for your files instead of (or alongside, depending on the command line) their apparent size, as shown in the quick check below.
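
A quick way to see the difference between apparent size and allocated space (a small demonstration you can run in your scratch directory):

echo -n x > tiny.txt     # a file with an apparent size of 1 byte
ls -l tiny.txt           # reports the apparent size
ls -s tiny.txt           # reports the allocated blocks (rounded up to 4kB here)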

Daily backups are taken of /home, while no backup is available for /scratch. If you need to recover some deleted or damaged file from a backup set, please write to helpdesk-hpc@sissa.it. Daily backups are kept for one week, a weekly backup is kept for one month, and monthly backups are kept for one year.

Due to their inherent volatility, some directories can be excluded from the backup set. At this time, the list of excluded directories includes only one item, namely /home/$USER/.cache

You can enable e-mail notifications at various stages of each job's life with the --mail-type=TYPE option, where TYPE can be a comma-separated list such as BEGIN,END,FAIL (more details are available in man sbatch). The notification recipient is by default your SISSA e-mail address, but you can select a different address with --mail-user. The end-of-job notification includes a summary of consumed resources (CPU time and memory) as absolute values and as a percentage of requested resources. Please note that memory usage is sampled at 30-second intervals, so if your job is terminated by an out-of-memory condition arising from a very large failed allocation, the reported value can be grossly underestimated.
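
For example, to be notified when a job starts, ends or fails, and to send the messages to an explicit address (the address below is a placeholder):

sbatch -p regular1 --mail-type=BEGIN,END,FAIL --mail-user=your_username@sissa.it myscript.sh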

Energy Accounting

An experimental energy accounting system has been enabled on Ulysses, and energy usage estimates are reported in end-job notification. This is intended as a very rough estimate of the energy impact your job has, but is not accurate enough to be used for proper cost/energy/environmental accounting. Known limits of the energy accounting system in use include:

  • very small values are completely unreliable (and are not included at all in the end-job notification, so in the case of a very short or “mostly idle” job you will find no value at all)
  • only CPU and memory energy usage are considered, while energy consumed by other devices (network cards, disk controllers, service processors, power supplies) is not accounted for; energy used “outside” the compute nodes is not considered either (this includes network devices, external storage, UPS, HVAC), so even for a CPU-intensive job the “real” energy consumption can easily be twice as much as reported
  • on the other hand, if your job doesn't use all available cores on each allocated node, the energy consumption can be overestimated

You can enable the generation of periodic reports on your cluster usage that will be delivered to your email address on a daily, weekly and/or monthly basis.

Each summary report includes the number of jobs that completed their lifecycle during the selected interval, along with the total amount of CPU*hours consumed and an estimate of the total energy consumption; the number of jobs in each partition; and the final states of completed jobs (usually one of COMPLETED, TIMEOUT, CANCELLED, FAILED or OUT_OF_MEMORY). Optionally, a detailed listing of all jobs can be included as an attachment (this will be a Zip-compressed CSV file that can be further processed with your software of choice, but it is also human-readable).

To enable the reports with the default options (no daily report; weekly report with jobs detail and monthly report delivered to your_username@sissa.it) just create an empty .slurm_report file in your home directory on Ulysses:

touch $HOME/.slurm_report

If you need to tune some parameters (e.g. enable daily reports, enable/disable job details, change the mail delivery address), please copy the default configuration file to your home directory:

cp /usr/local/etc/slurm_report.ini $HOME/.slurm_report

and edit the local copy. If your account has no “@sissa.it” email, it is recommended that you edit the mailto= line.
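
For example, the delivery address can be changed directly from the shell (this assumes the key appears as a mailto= line, as mentioned above; the address is a placeholder):

sed -i 's/^mailto.*/mailto=my.name@example.org/' $HOME/.slurm_report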

Since 2022-05-12, Slurm reports are enabled for all new accounts; if you want to disable the reports, just delete the config file $HOME/.slurm_report.

How to read the detailed report

The detailed report, if requested, is attached as a Zip-compressed CSV file. You should be able to open / decompress it on any modern computing platform, and the CSV file is both human- and machine-readable. Timestamps are in ISO 8601 format with an implicit local time zone, YYYY-MM-DDThh:mm:ss, e.g. 2022-03-04T09:30:00 is “half past nine in the morning of March 4th, 2022”. Four timestamps are provided for each job: submit (when the job was created with sbatch or similar commands), eligible (when the job becomes runnable, i.e. there are no conflicting conditions, like dependency on other jobs or exceeded user limits), start and end (when the job actually begins and ends execution).
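
As a small example, the documented ISO 8601 timestamps can be handled with standard tools; here the queue waiting time of a job is computed from its submit and start timestamps (the values are illustrative):

submit="2022-03-04T09:30:00"
start="2022-03-04T09:42:10"    # illustrative value
echo $(( $(date -d "$start" +%s) - $(date -d "$submit" +%s) )) seconds in queue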

When reporting issues with Ulysses, please keep to the following guidelines:

  • write to helpdesk-hpc@sissa.it, not to personal email addresses: this way your request will be seen by more than one person
  • please use a clear and descriptive subject for your message: “missing library libwhatever.so.12 from package whatever-libs” is OK, “missing software” is less useful, “Ulysses issues” is definitely not useful
  • please open one ticket for each issue; do not reply to old, closed tickets for unrelated issues
  • if you have some issues with submitting and running jobs, please:
    • include the job script and the exact submission command line you are using
    • include the job IDs of all jobs involved
    • if any modules are loaded before job submission, or if your .bashrc or .bash_profile includes anything but the system defaults, please state so clearly
    • please state clearly if you encountered the issue only once, or sporadically, or if it can be reproduced (and how)
    • if files in your home or scratch directories are involved and you want us to look at them, please include the complete path and an explicit permission for us to look into them
  • if you are asking for the installation of new software packages, please explain why a system-wide installation is needed
    • if you “only” want to make available to others a piece of software you already use, we can make room for you in /opt/contrib and help in the creation of a suitable module
    • we cannot install proprietary software unless a suitable license is provided