Getting Started

Once you have access to the HPC, you will be emailed an "IP address" for the login node of the cluster. You will log in to this IP address with your staff or student ID and your UTS Email password.

What software will I need to log in?

You will need an "SSH client" to open a terminal session on the cluster and to copy files over SSH (Secure Shell).

Setting up MobaXterm

Click [New Session]
Choose a session type [SSH]
Host: xxx.xx.xx.xx    [tick]  <--- this is the IP address 
Specify username [your staff/student number]
[OK]

Logging in

Log in by using "ssh" to connect to the IP address with your username. Your username is your staff or student number. For example, if your staff/student number is 999777, then from MobaXterm simply click the session that you set up for the cluster above.

If using macOS or Linux:

$ ssh 999777@xxx.xx.xx.xx

Your home directory will be /shared/homes/username/
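
You can check this once you have logged in; the shell starts in your home directory (using the example staff number from above):

$ pwd
/shared/homes/999777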

Once you can log in, read the section Running your HPC Job on this website.

How do I transfer files?

  • We support file copies over SSH – e.g. sftp, rsync and scp. For Windows users MobaXterm has this functionality built in. You can also use a file transfer client such as WinSCP from https://winscp.net/eng/index.php.
  • For Linux or macOS users, rsync and scp will already be available on your system; see the example commands after this list.
  • For users with research provided storage, we may be able to mount this directly onto the HPC environment.
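
For example, from a Linux or macOS terminal you could copy data to your home directory on the HPC like this (using the placeholder staff number and IP address from above; the file names are only examples):

$ scp mydata.csv 999777@xxx.xx.xx.xx:/shared/homes/999777/
$ rsync -av results/ 999777@xxx.xx.xx.xx:/shared/homes/999777/results/

scp copies files or directories as-is, while rsync only transfers what has changed, which is convenient for large or repeated transfers.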

What directories are available?

  • Your home directory is /shared/homes/xxxxxx, where xxxxxx is your UTS ID. Use this to set up your jobs and for data input/output over the medium term. This is not the fastest category of disk, so working files should not be located here, but work here will be retained for the medium to long term depending on space availability.
  • Faster, local disk scratch space is available on each node under /scratch/work/. This is for temporary storage and is not shared between nodes, which makes it useful as a working space for data while a job runs. Simply get your job script to create directories under here when it runs, and don’t forget to clean up afterwards; see the sketch after this list.
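
A minimal sketch of a job script using this scratch space is below (the resource requests, my_program and the data file names are placeholders; the exact PBS directives for this cluster are described in Running your HPC Job):

#!/bin/bash
#PBS -N scratch_example
#PBS -l ncpus=1
#PBS -l mem=4gb
#PBS -l walltime=01:00:00

# Make a per-job working directory on this node's local scratch disk.
WORKDIR=/scratch/work/${USER}_${PBS_JOBID}
mkdir -p "$WORKDIR"

# Stage input data in from your home directory and work in scratch.
cp "$HOME/jobs/input.dat" "$WORKDIR/"
cd "$WORKDIR"
my_program input.dat > output.dat    # my_program and the data files are placeholders

# Copy the results back to your home directory, then clean up the scratch space.
cp output.dat "$HOME/jobs/"
cd "$HOME"
rm -rf "$WORKDIR"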

There are example scripts in /shared/eresearch/ that you can use to practice submitting some short test jobs.

$ mkdir jobs
$ cd jobs
$ cp -r /shared/eresearch/primes .
$ cd primes
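
The copied directory should contain a PBS job script that you can submit with qsub (the file name primes.pbs here is only an assumption; see Running your HPC Job for how qsub works on this cluster):

$ qsub primes.pbs
$ qstat -u 999777    # check the status of your queued and running jobs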

Understanding the HPC Hardware Layout

Understanding the hardware layout of the HPC helps you know where your files are, which disks to use for your data, and how your programs will run. Below is a schematic of the HPC layout.

Everyone logs into the "login node". From there you submit your job using a PBS job submission script. The "head node" manages all the jobs distributed over the compute nodes: hpcnode1, hpcnode2, etc.

The /shared/homes/ directory is on our Isilon storage system and is shared between all the nodes via the network. In the diagram below it is labelled "net /shared/homes" on each node. A /scratch directory exists on the local disk of each node; this is labelled "local /scratch".

         +----------------+       +-----------+       +----------------+
         | login node     |       | head node |       | Isilon storage |  
         | /shared/homes  |       |           |       |                |
         | $              |       |           |       |                |
         +----------------+       +-----------+       +----------------+
                  |                     |                    |
                  | Network Connections |                    |
         +--------+------+------------+----------------+-----------+
         |               |            |                |           |
         |               |            |                |           |
         |               |            |                |           |
+-------------------+    |    +-------------------+    |    +-------------------+
| hpcnode1          |    |    | hpcnode2          |    |    | hpcnode3          |
| local /scratch    |    |    | local /scratch    |    |    | local /scratch    |
| net /shared/homes |    |    | net /shared/homes |    |    | net /shared/homes |  
+-------------------+    |    +-------------------+    |    +-------------------+
                         |                             |
               +-------------------+         +-------------------+   
               | hpcnode4          |         | hpcnode5          |  
               | local /scratch    |         | local /scratch    |  
               | net /shared/homes |         | net /shared/homes |  
               +-------------------+         +-------------------+
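
Since all scheduling goes through the head node, you can check the compute nodes and the job queue from the login node with the standard PBS commands (assuming the PBS client tools are available on the login node, which is the usual arrangement):

$ pbsnodes -a    # list the compute nodes and their state
$ qstat          # list the jobs currently queued or running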