2. Resource description

The Stallo cluster. Photo and copyright: Thilo Bubek
| | Aggregated | Per node |
|---|---|---|
| Peak performance | 104 Teraflop/s | |
| # Nodes (1) | 304 x HP BL460 gen8 blade servers; 216 x HP SL230 gen8 servers | HP BL460 gen8 blade server; HP SL230 gen8 server |
| # CPUs / # Cores | 608 / 4864; 432 / 4320 | 2 / 16; 2 / 20 |
| Processors | 608 x 2.60 GHz Intel Xeon E5 2670; 432 x 2.80 GHz Intel Xeon E5 2680v2 | 2 x 2.60 GHz Intel Xeon E5 2670; 2 x 2.80 GHz Intel Xeon E5 2680v2 |
| Total memory | 19.7 TB | 32 GB (32 nodes with 128 GB) |
| Internal storage | 155.2 TB | 500 GB (32 nodes with 600 GB RAID) |
| Interconnect | Gigabit Ethernet + Infiniband (2) | Gigabit Ethernet + Infiniband (2) |
| Dimensions | 15 m x 1.3 m x 2 m | |
1) Stallo was expanded with 216 new nodes on Jan. 1st 2014.
2) All nodes in the cluster are connected with Gigabit Ethernet and QDR Infiniband.
Stallo - a Linux cluster
A brief introduction to the general features of Linux clusters, and of Stallo in particular, is available on the Metacenter page about Linux clusters.
Since 2003, the HPC group has been one of five international development sites for the Linux operating system Rocks. Together with people in Singapore, Thailand, Korea and the USA, we have developed a tool that has won international recognition, including HPCWire's prize for "Most important software innovation" in both 2004 and 2005. Rocks is now a de facto standard for cluster management in Norwegian supercomputing.
Stallo has a "three folded" file system:
- a globally accessible home area: /home (64 TB)
- a globally accessible work / scratch area: /global/work (1000 TB)
- a locally accessible work / scratch area on the highmem nodes: /local/work, approximately 600 GB in a fast RAID configuration (use the highmem queue to access these nodes; contact support for access to the highmem queue)
The home area is for "permanent" storage only, so please do not use it for temporary storage during production runs.
Jobs using the home area for scratch files while running may be killed without any warning.
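As an illustration of the intended division of labour between /home and /global/work, here is a minimal Python sketch that creates a per-job scratch directory under /global/work and keeps only permanent data under /home. The PBS_JOBID variable and the directory layout are assumptions made for the example, not a fixed part of the Stallo setup.

```python
import os
from pathlib import Path

# Assumed locations; adjust to your own username and job environment.
user = os.environ.get("USER", "myuser")
job_id = os.environ.get("PBS_JOBID", "manual-run")   # hypothetical job-id variable

home_area = Path("/home") / user                     # permanent storage only
work_area = Path("/global/work") / user / job_id     # per-job scratch directory (assumed layout)

# Create the scratch directory for this run and work there,
# so temporary files never land in the home area.
work_area.mkdir(parents=True, exist_ok=True)
os.chdir(work_area)

print(f"Running in scratch directory: {work_area}")
print(f"Keep only permanent data under: {home_area}")
```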
Work / scratch areas
There are two different work / scratch areas available on Stallo:
- There is a 1000 TB globally accessible work area on the cluster, mounted as /global/work on both the login nodes and all the compute nodes. This is the recommended work area, both because of its size and its performance!
- In addition, each compute node has a small work area of approximately 450 GB that is only locally accessible on that node, mounted as /local/work. In general we do not recommend using /local/work, both because of its (lack of) size and its performance; for some users it may nevertheless be the best alternative.
These work areas should be used for all jobs running on Stallo.
After a job has finished, old files should be deleted to ensure that there is plenty of space available for new jobs. Files left on the work areas after a job has finished may be removed without any warning.
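Because files left on the work areas may be removed without warning, it is a good habit to copy results back to /home and delete the scratch files as the last step of a job. The Python sketch below illustrates this; the per-job directory layout, the PBS_JOBID variable and the *.out result-file pattern are assumptions for the example only.

```python
import os
import shutil
from pathlib import Path

user = os.environ.get("USER", "myuser")
job_id = os.environ.get("PBS_JOBID", "manual-run")       # hypothetical job-id variable

scratch_dir = Path("/global/work") / user / job_id       # per-job scratch area (assumed layout)
result_dir = Path("/home") / user / "results" / job_id   # permanent storage for final output

# Copy the files worth keeping back to the home area ...
result_dir.mkdir(parents=True, exist_ok=True)
for output_file in scratch_dir.glob("*.out"):             # assumed result-file pattern
    shutil.copy2(output_file, result_dir)

# ... and remove the scratch directory so the work area stays clean.
shutil.rmtree(scratch_dir, ignore_errors=True)
print(f"Results stored in {result_dir}, scratch {scratch_dir} removed")
```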
All files stored on /home are backed up: a full dump backup is made every week and kept for 3 months.