How do I access the HPC interactive node?
- Do you have a cluster account?
If not, first create one through the Cluster Account Request. Once your cluster account is ready (or if you already have one), move on to the next action item.
- Fill in the required fields and click Submit. You will receive an email confirmation with the ticket and its reference number, and ticket status updates will also be sent to you by email.
- When you receive a resolved-status email for the HPC interactive node access request, try to access the HPC interactive node with your UNC Charlotte (NinerNET ID) account credentials.
- Please make sure you are connected to UNC Charlotte's VPN - https://services.help.charlotte.edu/TDClient/33/Portal/KB/ArticleDet?ID=677
For Windows
For PuTTY:
- Open PuTTY, enter the HPC hostname, set the port to 22, select SSH as the connection type, and click Open

- Log in with your UNC Charlotte (NinerNET ID) username and password

- If your access has been granted correctly, you will be able to log in to the HPC node successfully.
- The steps are similar if you are using MobaXterm: click Session in the top left, add the HPC node as a new SSH session, and connect using your UNC Charlotte (NinerNET ID) credentials.

- If you cannot log in, check that you entered your password correctly and that your UNC Charlotte (NinerNET ID) account is working. If everything looks correct and you still cannot log in, create a new ticket to URC explaining your access problem.
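As an alternative to PuTTY and MobaXterm, recent versions of Windows 10 and 11 include a built-in OpenSSH client, so you can also connect from PowerShell or Command Prompt (the same VPN requirement applies). The hostname below is a placeholder; substitute the HPC interactive node hostname provided by URC and your own NinerNET ID:
ssh your-ninernet-id@<hpc-interactive-node-hostname>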
For Mac
Please access the HPC interactive node from your terminal prompt using ssh. Make sure you are connected to the UNC Charlotte VPN: https://services.help.charlotte.edu/TDClient/33/Portal/KB/ArticleDet?ID=677
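A minimal connection command looks like the following; the hostname below is a placeholder, so substitute the actual HPC interactive node hostname and your own NinerNET ID:
$ ssh your-ninernet-id@<hpc-interactive-node-hostname>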
ENABLE DUO AUTHENTICATION
URC uses Duo to provide an additional layer of security. Please follow these steps to enable Duo authentication: https://services.help.charlotte.edu/TDClient/33/Portal/KB/ArticleDet?ID=770
CREATE & RUN YOUR FIRST JOB
To run a job on ORION, the primary SLURM partition, you must create what is known as a submit file.
The submit file lets the ORION scheduler know what resources your job requires (number of processors, amount of memory, walltime, etc.). The following sample submit file will be used to run your first job.
#!/bin/bash
#SBATCH --partition=Orion
#SBATCH --job-name=basic_slurm_job
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=1:00:00
echo "======================================================"
echo "Start Time : $(date)"
echo "Submit Dir : $SLURM_SUBMIT_DIR"
echo "Job ID/Name : $SLURM_JOBID / $SLURM_JOB_NAME"
echo "Num Tasks : $SLURM_NTASKS total [$SLURM_NNODES nodes @ $SLURM_CPUS_ON_NODE CPUs/node]"
echo "======================================================"
echo ""
cd $SLURM_SUBMIT_DIR
echo "Hello World! I ran on compute node $(/bin/hostname -s)"
echo ""
echo "======================================================"
echo "End Time : $(date)"
echo "======================================================"
Submit File Explanation
Let's take a quick look at each section of the submit file to understand the structure.
The first section contains the scheduler directives. The directives below run the job in the BASH shell, in the Orion SLURM partition, set the name of the job to basic_slurm_job, request a single core on a single node, and ask for these resources for up to 1 hour:
#!/bin/bash
#SBATCH --partition=Orion
#SBATCH --job-name=basic_slurm_job
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=1:00:00
The second section prints out information such as the start time, the directory the job was submitted from, the job's ID and name, and the total number of tasks along with the node and CPUs-per-node counts:
echo "======================================================"
echo "Start Time : $(date)"
echo "Submit Dir : $SLURM_SUBMIT_DIR"
echo "Job ID/Name : $SLURM_JOBID / $SLURM_JOB_NAME"
echo "Num Tasks : $SLURM_NTASKS total [$SLURM_NNODES nodes @ $SLURM_CPUS_ON_NODE CPUs/node]"
echo "======================================================"
echo ""
Actual Program for the Job
The third section is the portion of the submit file where your actual program will be specified. In this example, the job simply changes into the directory that contains the SLURM submit script and prints a message, including the compute node name, to the output file:
cd $SLURM_SUBMIT_DIR
echo "Hello World! I ran on compute node $(/bin/hostname -s)"
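In a real job, this section is where you would load any required software modules and launch your own program. As a purely illustrative sketch (the module, script, and input file names are hypothetical and depend on what is installed on the cluster), that part of the submit file might look like:
# Load the software your program needs (module name is hypothetical)
module load python
# Run your program; the script and input file names are placeholders
python my_analysis.py input.dat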
The final section appends the job's completion time to the output file:
echo ""
echo "======================================================"
echo "End Time : $(date)"
echo "======================================================"
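The directives shown above are the minimum needed for this example. Two other commonly used SLURM directives, shown here only as an illustration with placeholder values, request a specific amount of memory and set a custom output file name (%j expands to the job ID):
#SBATCH --mem=4G
#SBATCH --output=basic_slurm_job-%j.out
If --output is not set, SLURM writes to the default slurm-<jobid>.out file, which is what the rest of this walkthrough assumes.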
Now that you understand the sections of the submit file, let's submit it to the scheduler so you can see it run. The above submit file already exists on the system, so all you need to do is copy it:
$ mkdir ~/slurm_submit
$ cp /apps/usr/slurm_scripts/basic-submit.slurm ~/slurm_submit/
$ cd ~/slurm_submit/
(In Linux, the tilde ~ is a shortcut to your home directory)
Now, submit the basic-submit.slurm file to the scheduler:
$ sbatch basic-submit.slurm
Submitted batch job 242130
When you submit your job to the scheduler, the syntax of your submit file is checked. If there are no issues and your submit file is valid, the scheduler will assign the job an ID (in this case 242130) and will place the job in pending status (PD) while it works to reserve the resources requested for your job.
The more resources your job requests, the longer it may take for your job to move from pending (PD) to running (R). The resources requested for this job are light, so the job should begin running relatively quickly.
Check Job Status
You can check the status of your job by using the squeue command:
$ squeue -j 242130
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
242130 Orion basic_sl joeuser R 0:00 1 str-c28
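Two other standard SLURM commands are useful at this point: squeue -u lists all of your own jobs, and scancel cancels a job you no longer need (the job ID below is just the one from this example):
$ squeue -u $USER
$ scancel 242130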
Check Job Output File
Once your job is complete, you will have an output file that contains the output from your job's execution. In this example, given the job ID of 242130, the output file name will be slurm-242130.out. Looking in this file, you should see the following:
$ cat slurm-242130.out
======================================================
Start Time : Wed Dec 16 13:10:38 EST 2020
Submit Dir : /users/joeuser/slurm_submit
Job ID/Name : 242130 / basic_slurm_job
Num Tasks : 1 total [1 nodes @ 1 CPUs/node]
======================================================
Hello World! I ran on compute node str-c28
======================================================
End Time : Wed Dec 16 13:10:38 EST 2020
======================================================
KEEP LEARNING
Now that you have run your first job, you are ready to learn more about SLURM and the Orion SLURM partition by looking at the ORION & GPU (SLURM) User Notes.