At the heart of Advanced Research Computing is a range of state-of-the-art high-performance computers, each doing slightly different things. You may find that the application you plan to use has already been installed on some systems but not others. Alternatively, you may have a new application but be unsure of the best system to run it on. If in doubt, please contact the ARC staff.

The host name you require for connecting to each facility is shown in the table below. If you are unsure which service to use, please contact the ARC staff.
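Once you know the host name, you connect over SSH. A minimal sketch — the username and host name below are placeholders, not real ARC values; substitute the host name listed for the service you intend to use:

```shell
# Placeholders only: replace with your ARC username and the
# host name given for your chosen service.
user="jbloggs"
host="arcus.example.org"

# Build and display the connection command.
cmd="ssh ${user}@${host}"
echo "$cmd"
```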

ARCUS (Phase B)
OCF / Lenovo Haswell

Host name:

5792 (362*16) cores of Intel E5-2640v3 Haswell
64GiB * 246 nodes, 128GiB * 92 nodes, 256GiB * 9 nodes
64GiB and 2 * NVIDIA K40 GPUs * 10 nodes
64GiB and 2 * NVIDIA K80 dual-GPUs * 5 nodes

CentOS Linux version 6

SLURM job scheduling
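Jobs on this system are submitted through SLURM. A minimal batch script sketch — the job name, core count, and time limit are illustrative values, and any partition or account settings are site-specific and omitted here:

```shell
#!/bin/bash
# Minimal SLURM batch script (illustrative values only).
#SBATCH --job-name=example
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16   # Haswell nodes have 16 cores each
#SBATCH --time=00:10:00        # walltime limit, hh:mm:ss

# The #SBATCH lines above are comments to the shell, so the
# body below also runs standalone.
msg="Running on $(hostname)"
echo "$msg"
```

Submit with `sbatch script.sh` and monitor the queue with `squeue`.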

ARCUS (Phase A)
Dell Sandy Bridge

Host name:

1728 (108*16) cores of 2.0GHz Xeon Sandy Bridge/Ivy Bridge
64GiB RAM * 80 nodes, 128GiB RAM * 4 nodes
20TB scratch disk
Red Hat Enterprise Linux 6

Torque/Maui job scheduling
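Job submission here uses Torque with the Maui scheduler, so batch scripts carry PBS directives instead of SLURM ones. A minimal sketch — values are illustrative, and queue names are site-specific and omitted:

```shell
#!/bin/bash
# Minimal Torque/PBS batch script (illustrative values only).
#PBS -N example
#PBS -l nodes=1:ppn=16         # one node, 16 processors per node
#PBS -l walltime=00:10:00

# The #PBS lines are shell comments, so the body also runs standalone.
# Torque starts jobs in the home directory; change to the directory
# the job was submitted from.
cd "${PBS_O_WORKDIR:-$PWD}"
msg="Running in $PWD"
echo "$msg"
```

Submit with `qsub script.sh` and check status with `qstat`.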

SGI UV 100 Shared Memory System

Host name:

64 (8*8) cores of 2.67GHz Xeon Westmere-EX
5TB scratch disk
Commissioned March 2012
SuSE Linux 11

Torque/Maui job scheduling

18 nodes, each with 64GiB RAM, FDR InfiniBand and 2 GPU cards:

 - 2 NVIDIA Tesla K40 GPUs on 4 nodes (2 Intel Xeon CPUs)
 - 2 NVIDIA Tesla K20m GPUs on 8 nodes (1 Intel Xeon CPU)
 - 2 NVIDIA Tesla K10 GPUs on 6 nodes (1 Intel Xeon CPU)

Host name:

Red Hat Enterprise Linux 6

SLURM job scheduling
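Since the GPU nodes are also managed by SLURM, GPUs are typically requested through the generic resources (GRES) mechanism. A sketch, assuming the site defines a `gpu` GRES type — the exact GRES name and any partition are site-specific, so check the local documentation:

```shell
#!/bin/bash
# Sketch of a SLURM GPU job (the "gpu" GRES name is an assumption;
# the site may configure a different resource name).
#SBATCH --job-name=gpu-example
#SBATCH --nodes=1
#SBATCH --gres=gpu:2           # both GPU cards on one node
#SBATCH --time=00:10:00

# Inside a real job SLURM sets CUDA_VISIBLE_DEVICES; fall back to a
# placeholder so the script also runs standalone.
gpus="${CUDA_VISIBLE_DEVICES:-unset}"
echo "Visible GPUs: ${gpus}"
```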