

High Performance Computing: Ulysses v2

The Ulysses cluster v2 is available for scientific computation to all SISSA users. If you have an active SISSA account, please write to helpdesk-hpc@sissa.it in order to have it enabled on Ulysses.

Ulysses v2 can be accessed via the login nodes frontend1.hpc.sissa.it or frontend2.hpc.sissa.it from the SISSA network or the SISSA VPN. More access options might be made available in due time.
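
For example, assuming standard SSH access from within the SISSA network or VPN (yourusername is a placeholder for your SISSA username):

  ssh yourusername@frontend1.hpc.sissa.it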

Available compute nodes include:

  • (old) IBM nodes: Xeon E5-2680 v2 (2 sockets, 10 cores per socket, 2 threads per core), most of them with 40GB RAM, a subset with 160GB, and a much smaller subset with 320GB
  • (old) IBM GPU nodes: Xeon E5-2680 v2 (same as above), 64GB, 2 Tesla K20m
  • (new) HP nodes: Xeon E5-2683 v4 (2 sockets, 16 cores per socket, 2 threads per core), 64GB RAM
  • (new) HP GPU nodes: same as above, with 2 Tesla P100

All nodes are connected to an Infiniband QDR fabric.

The software tree is the same as on SISSA Linux workstations, with the same Lmod modules system (the only exception being desktop-oriented software packages).
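
For example, a sketch of typical Lmod usage (the package name below is only a placeholder; use module avail to see what is actually installed):

  module avail                 # list the software modules available on Ulysses
  module load somepackage      # placeholder name: load a module into your environment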

The queue system is now SLURM (https://slurm.schedmd.com/documentation.html), so if you were used to TORQUE on the old Ulysses you will need to modify your job scripts somewhat.

Available partitions (or “queues” in TORQUE old-speak) include

  • regular1 (old nodes) and regular2 (new nodes): max 16 nodes, max 12h
  • wide1 and wide2: max 32 nodes, max 8h
  • long1 and long2: max 8 nodes, max 48h
  • gpu1 and gpu2: max 4 nodes, max 12h

Please note that hyperthreading is enabled on all nodes (it was disabled on old Ulysses). If you do not want to use hyperthreading, the --hint=nomultithread option to srun/sbatch will help.
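
As an illustration, a minimal SLURM job script could look like the sketch below (partition, resources, time limit and program name are examples only; adjust them to your needs and to the partition limits above):

  #!/bin/bash
  #SBATCH --partition=regular1       # one of the partitions listed above
  #SBATCH --nodes=1
  #SBATCH --ntasks-per-node=20       # one task per physical core on an IBM node
  #SBATCH --hint=nomultithread       # do not place tasks on hyperthreads
  #SBATCH --time=02:00:00            # within the 12h limit of regular1
  #SBATCH --mem=30G                  # see the note on memory below

  srun ./my_program                  # placeholder for your actual executable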

Job scheduling is fair share-based, so the scheduling priority of your jobs depends on the waiting time in the queue AND on the amount of resources consumed by your other jobs. If you have urgent need to start a single job ASAP (e.g. for debugging), you can use the fastlane QoS that will give your job a substantial priority boost (to prevent abuse, only one job per user can use fastlane at a time, and you will “pay” for the priority boost with a lower priority for your subsequent jobs).
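
For example, assuming the QoS name is exactly fastlane as above, you would add it to your submission like this:

  sbatch --qos=fastlane myjob.sh

or, equivalently, inside the job script:

  #SBATCH --qos=fastlane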

You should always use the --mem or --mem-per-cpu SLURM options to specify the amount of memory needed by your job. This is especially important if your job does not use all the available CPUs on a node (40 threads on IBM nodes, 64 on HP nodes); failing to do so will negatively impact scheduling performance.
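
For example (the amounts below are purely illustrative):

  #SBATCH --mem=8G              # total memory per node for this job
  # or, alternatively:
  #SBATCH --mem-per-cpu=2G      # memory per allocated CPU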

While you can submit a job using only #SBATCH --ntasks=..., it is recommended that you explicitly request a number of nodes and tasks per node (usually, as many tasks as fit on a node) for best performance. Otherwise, your job can end up “spread” across more nodes than necessary, sharing resources with other unrelated jobs on each node. E.g. on regular1, -N2 -n80 will allocate all threads on 2 nodes, while -n80 alone can spread the tasks across as many as 40 different nodes.
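
In job-script form, the recommended layout for the regular1 example above would be (illustrative only):

  #SBATCH --nodes=2              # same as -N2
  #SBATCH --ntasks-per-node=40   # 2 x 40 = 80 tasks, filling both IBM nodes

rather than just

  #SBATCH --ntasks=80            # may end up spread across up to 40 nodes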

When reporting issues with Ulysses, please keep to the following guidelines:

  • write to helpdesk-hpc@sissa.it, not to personal email addresses: this way your enquiry will be seen by more than one person
  • please use a clear and descriptive subject for your message: “missing library libwhatever.so.12 from package whatever-libs” is OK, “missing software” is less useful, “Ulysses issues” is definitely not useful
  • please open one ticket for each issue; do not reply to old, closed tickets for unrelated issues
  • if you have some issues with submitting and running jobs, please:
    • include the job script and the exact submission command line you are using
    • include the job IDs of all jobs involved
    • if any modules are loaded before job submission, or if your .bashrc or .bash_profile contains anything other than the system defaults, please say so clearly
    • please state clearly if you encountered the issue only once, or sporadically, or if it can be reproduced (and how)
    • if files in your home or scratch directories are involved and you want us to look at them, please include the complete path and an explicit permission for us to look into them
  • if you are asking for the installation of new software packages, please explain why a system-wide installation is needed
    • if you “only” want to make available to others a piece of software you already use, we can make room for you in /opt/contrib and help you create a suitable module
    • we cannot install proprietary software unless a suitable license is provided