High performance computing cluster
What is High Performance Computing (HPC)?
High Performance Computing (HPC) refers to the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer or workstation. This allows analysis of very large data sets and the solving of complex problems in areas such as science, engineering, health and medicine, and business and marketing. The terms High Performance Computing and supercomputing are often used interchangeably.
What areas of research would HPC benefit?
High Performance Computing (HPC) can be utilised in a wide range of research applications, including:
- Computational chemistry
- Environmental science & management
- Materials science
- Mechanical and structural engineering
- Molecular biology
How do I get access to HPC?
La Trobe researchers are able to access the following Intersect HPC facilities at no cost.
- Raijin: The largest supercomputer facility in Australia, managed by the National Computational Infrastructure (NCI). Raijin comprises 3,592 compute nodes with 2.6 GHz Intel® Xeon® E5-2650 series (Sandy Bridge) processors, for a total of 57,472 cores delivering approximately 1.2 PFlops of peak performance, and includes 40 PB of usable disk space. The NCI also maintains an extensive library of pre-installed software packages, many of which can be accessed at no cost. Users can request computing time on Raijin through Intersect, which holds a 4% partner share of this facility.
There are also a number of other specialised external supercomputing facilities that can be accessed by researchers. Access to these systems often relies on applying for computing time under a competitive merit allocation scheme that is open to all Australian researchers. Some examples include:
(La Trobe staff intranet login required)
What HPC facility is right for me?
Several factors go into working out which HPC facility you should use. Often this comes down to whether or not the software you need is already available on one of the machines. Other considerations include how much memory and how many cores you will need.
| Question | Guidance | Availability |
| --- | --- | --- |
| Do you need to use proprietary software? | If you need to use proprietary software, it may be easier and more cost-effective to check whether it is already installed on one of the HPC facilities. Open source software can generally be installed on any HPC facility. | Click here and use the dropdown to switch between 'NCI National Facility @ ANU' for Raijin and 'INTERSECT' for Orange. Be sure to check licensing conditions and ask if you are not sure. |
| How many cores do you need to access simultaneously? | If you need to use many cores at the same time, you may find it easier to get a large allocation on a larger system. | More than 1,023 cores per job available |
| How much memory do you need for your analysis? | If you need a lot of RAM, it may be best to run your analysis on a high-memory node. | 32, 64 and 128 GB nodes available |
| How long do you need to run your jobs for? | Some HPC facilities impose a maximum time limit that a job can run for, known as 'wall time', in order to ensure equitable use of the system. If your jobs need to run for a long time, wall time limits may be important. | Wall time depends on the number of cores: 1–255 cores: 48 hours; 256–511 cores: 24 hours; 512–1,023 cores: 10 hours; more than 1,023 cores: 5 hours |
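To illustrate how core counts, memory and wall time come together in practice, a job submitted to a PBS-scheduled facility such as Raijin is described by a short batch script. The sketch below is illustrative only: the project code, queue name, module name and executable name are placeholders, and the exact directives and limits accepted depend on the facility's scheduler configuration, so check the facility's own documentation before submitting.

```shell
#!/bin/bash
# Sketch of a PBS job script for a PBS-scheduled HPC facility.
# Project code, queue, module and executable names are placeholders.
#PBS -P ab12                # project/allocation code (placeholder)
#PBS -q normal              # queue name (placeholder)
#PBS -l ncpus=256           # 256 cores falls in the 24-hour wall time bracket
#PBS -l walltime=24:00:00   # must not exceed the limit for this core count
#PBS -l mem=512GB           # total memory requested for the job
#PBS -l wd                  # start the job in the submission directory

module load openmpi         # load required software (placeholder module name)
mpirun -np 256 ./my_simulation   # my_simulation is a placeholder executable
```

A script like this would typically be submitted with `qsub`. Note that requesting more cores moves a job into a bracket with a shorter maximum wall time, so a long-running analysis may be better run at a lower core count, or split into a series of checkpointed jobs.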
Is training available?
Please refer to the Digital Research training program for upcoming HPC courses.
What applications are available on Orange?
Click here to view the list of available applications on the Intersect HPC cluster.
Note that you will need to select Intersect in the drop down menu of HPC facilities.
What if I have problems with access?
For any issues with using the service, please contact the ICT Service Desk (x1500).
If you have HPC issues that cannot be dealt with through the ICT Service Desk, please utilise the following escalation process: