Description of 'edward' HPC cluster
The ITS Research Services HPC cluster consists of:
- Head node: this is the user login node and supports a full development environment.
- Management node: runs the batch manager and scheduler, and is used for node installation, NAT, etc.
- File-server: hosts the user file-systems and applications.
- 48 compute nodes: these do the actual work.
All the nodes have 16 cores and 32GB of memory and are connected with 10Gb Ethernet.
The discs on the scratch file-server are configured as RAID6 with journalled file-systems, allowing recovery from limited hardware failure, but the space is designed purely to support the computational workload. This space is not backed up.
Home space will be hosted on the ITS Research data storage service and backed up daily. HPC users are granted a 10 GB allocation here (more can be purchased), and primary data and research results should be stored here. The home space is not intended for live computational work; data that is heavily accessed should be moved onto the scratch space before being used.
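As a sketch of that workflow (the `/scratch` path and project directory names here are assumptions for illustration, not documented cluster paths):

```shell
# Hypothetical paths -- check the actual mount points on edward.
# Stage heavily-accessed input data from the backed-up home space
# onto the (un-backed-up) scratch space before computing on it.
mkdir -p /scratch/$USER/myproject
cp -r ~/myproject/input-data /scratch/$USER/myproject/

# ... run the computation against the scratch copy ...

# Copy results worth keeping back to home space, which is backed up daily.
cp -r /scratch/$USER/myproject/results ~/myproject/
```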
Software is installed in a common filesystem shared across the cluster. Multiple versions of the software can be installed, and the different versions can be loaded by using the Modules system.
The list below contains most of the installed software. If you need something else installed, you can request it by e-mailing Edward Support staff.
| Software | Versions |
| --- | --- |
| GCC Compiler | 4.4.6, 4.6.2*, 4.7.0 |
| GIT Version Control | 188.8.131.52 |
| GSL (GNU Scientific Library) | 1.15 |
| Latent Gold | 4.5-20120626, 4.5-20120912 |
| Matlab | R2010a, R2011b, R2012a* |
| OpenMPI | 1.4.4, 1.4.5*, 1.5.5 |
| SPM | 2, 5, 8 |
Where multiple versions of a package are installed, the version marked with an asterisk is the default loaded when no version is specified.
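Work on the compute nodes is submitted through the batch manager. As a minimal sketch only, assuming a PBS-style scheduler (the document does not name the batch system, so the directives, queue behaviour, and program names below are assumptions; consult Edward Support for the actual submission syntax):

```shell
#!/bin/bash
# Hypothetical PBS-style job script -- directives and names are
# assumptions, not documented edward syntax.
#PBS -N myjob
#PBS -l nodes=1:ppn=16     # one full 16-core node
#PBS -l walltime=01:00:00  # one hour of run time

# Load the software versions the job needs via Modules.
module load gcc
module load openmpi

cd $PBS_O_WORKDIR          # start where the job was submitted from
mpirun ./my_program        # run across the allocated cores
```

A script like this would typically be submitted with `qsub`, after which the scheduler places the job on a free compute node.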