High performance computing cluster

• 178 x HP SL230 modules in HP SL6500 chassis
• 2 x login/management nodes: DL380
• 45.9 TFlops
• 180 nodes, 2,880 cores (each node has 2 x 8-core Xeon E5-2670 CPUs)
• 168 x standard-memory nodes (64 GB RAM each) and 6 x large-memory nodes (256 GB RAM each)
• All nodes are capable of taking GPU accelerators; 4 nodes are populated with NVIDIA Kepler GPUs
• Mellanox InfiniBand FDR switches provide the interconnect; all Ethernet connections are 10 Gbit
• About 160 TB of fast DDN disk for scratch, supplied by VPAC
• Managed by Moab, running the CentOS operating system, with dozens of separately sourced research applications covering many fields (a sample job submission is sketched below)
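
As a minimal sketch of how a job is submitted under Moab, the Python script below writes a Torque/PBS-style batch script requesting one full 16-core standard node and submits it with qsub. This assumes the cluster accepts Torque/PBS directives and qsub submissions (Moab is commonly paired with Torque, but that is not confirmed in this note); the job name, walltime, memory request, queue, module name and executable are hypothetical placeholders, so check the VPAC documentation for the actual values.

import subprocess
from pathlib import Path

# Torque/PBS-style directives requesting one full node (2 x 8-core Xeon E5-2670
# = 16 cores) on a standard 64 GB node. The job name, walltime, memory request,
# module name and executable are illustrative assumptions, not site defaults.
JOB_SCRIPT = """#!/bin/bash
#PBS -N example_job
#PBS -l nodes=1:ppn=16
#PBS -l walltime=02:00:00
#PBS -l mem=60gb

cd $PBS_O_WORKDIR
module load example_app            # hypothetical application module
mpiexec -np 16 example_app input.dat
"""

def submit(script_text: str) -> str:
    """Write the job script to disk and submit it with qsub; return the job ID."""
    path = Path("example_job.pbs")
    path.write_text(script_text)
    result = subprocess.run(["qsub", str(path)],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print("Submitted job:", submit(JOB_SCRIPT))

Job status can then be checked with the usual Torque/Moab commands (for example qstat or showq), again assuming those tools are available on the login nodes.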

There is also a dedicated 1 Gbit/sec link between VPAC and La Trobe University, which will be upgraded to 10 Gbit/sec soon.

A number of specialist nodes were also supplied for testing and developing GPGPU computing, and for other purposes.

To access the cluster, please refer to Tech Note 2 and visit https://www.vpac.org/accounts/applications/

Note that the lead researcher must register first; then the project is created, and then participants (staff or students) can join the project. The username and password are specific to the cluster and are not the same as your La Trobe username and password.

For training (which may be an external expense), please see http://www.vpac.org/training. There are many tutorials online; please check them out first: http://www.vpac.org/training/tutorials

For a list of applications see https://www.vpac.org/HPCUsers/Applications/.

For any issues with using the service, please contact the ICT Service Desk (x1500) or ict.servicedesk@latrobe.edu.au.

If you have any specific application problems, you may contact VPAC directly at help@vpac.org.