High Performance Computing Services

Providing researchers, faculty, and staff with the power to manipulate, analyze, and store massive amounts of data at high speed


HPCS Systems

GPU Processing Cores

Modern NVIDIA GPUs come with three different types of processing cores:

NVIDIA CUDA Cores

Compute Unified Device Architecture (CUDA) is NVIDIA's parallel computing platform, built on specialized hardware and an application programming interface (API) for the NVIDIA instruction set. CUDA cores are discrete processors, usually numbering in the thousands on a single GPU chip, that allow data to be processed in parallel across those cores.

CUDA cores are the workhorse component of general purpose GPU computing. They improve both the performance and cost effectiveness of parallel processing for myriad scientific workloads. When they are complemented by the other specialty GPU core types, workload performance is accelerated further.
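To make this concrete, the sketch below compiles and runs a minimal CUDA vector-add from the shell of a GPU node, spreading one addition per thread across the available CUDA cores. This is an illustrative sketch, not AUHPCS-specific: the "module load cuda" module name is an assumption, and nvcc simply needs to be on the PATH.

    module load cuda                                  # illustrative module name
    cat > vector_add.cu <<'EOF'
    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread adds one element; the grid of threads is spread
    // across the GPU's CUDA cores and executed in parallel.
    __global__ void add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;                        // ~1 million elements
        float *a, *b, *c;
        cudaMallocManaged(&a, n * sizeof(float));     // unified memory keeps this short
        cudaMallocManaged(&b, n * sizeof(float));
        cudaMallocManaged(&c, n * sizeof(float));
        for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }
        add<<<(n + 255) / 256, 256>>>(a, b, c, n);    // 256 threads per block
        cudaDeviceSynchronize();                      // wait for the GPU to finish
        printf("c[0] = %f\n", c[0]);                  // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }
    EOF
    nvcc -o vector_add vector_add.cu && ./vector_add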

NVIDIA Tensor Cores

A tensor is a data type that can represent nearly any kind of ordered or unordered data; it can be thought of as a container in which multi-dimensional data sets are stored. In the simplest terms, a tensor is an extension of a matrix: a matrix is a two-dimensional structure containing numbers, while a tensor is a set of numbers with any number of dimensions.

Tensor cores enable mixed-precision computing, dynamically adapting calculations to accelerate throughput while preserving accuracy. The latest generation extends these speedups to a full range of workloads: speedups of up to 10X are common in artificial intelligence (AI), machine learning (ML), and deep learning (DL) workloads, along with boosts of up to 2.5X for general HPC workloads.

Tensor cores can compute faster than CUDA cores, primarily because a CUDA core performs one operation per clock cycle while a tensor core can perform multiple operations per clock cycle. For ML and DL models, CUDA cores are not as effective as tensor cores in either cost or computation speed, but they still augment overall productivity.
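That "multiple operations per clock cycle" follows from the tensor core's native instruction: a small fused matrix multiply-accumulate rather than a single scalar operation. As a sketch of the Volta/Turing-class operation (exact matrix shapes and supported precisions vary by GPU generation):

\[ D = A \times B + C, \qquad A, B \in \mathrm{FP16}^{4 \times 4}, \qquad C, D \in \mathrm{FP16}^{4 \times 4} \text{ or } \mathrm{FP32}^{4 \times 4} \]

Each core therefore completes the equivalent of 64 multiply-add operations per clock, with half-precision inputs accumulated at higher precision, which is the mixed-precision behavior described above.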

NVIDIA Ray Tracing Cores

Ray tracing cores are exclusive to NVIDIA RTX graphics cards. RTX technology enables detailed and accurate 3D design and rendering and photorealistic simulation of the physical world, including visual effects. The resulting simulation and visualization capabilities are not limited to how something looks, but extend to how it behaves. The marriage of CUDA cores and APIs with RTX cores enables accurate modeling of the behavior of real-world objects and granular data visualization.

Cluster Storage

The HOME file system (/home) is used for storage of job submission scripts, small applications, databases, and other user files. All users will be presented with a home directory when logging into any cluster system. Contents of user home directories will be the same across cluster systems.

All user home directories have a 20GB quota.

The SCRATCH file system (/scratch) is a capacious and performant parallel file system intended to be used for compute jobs. The SCRATCH file system is considered ephemeral storage.

The LOCAL SCRATCH file system (/lscratch or /tmp) is a reasonably performant but less capacious shared file system local to each compute node, also intended to be used for compute jobs. LOCAL SCRATCH file systems are also considered ephemeral storage.

The WORK file system (/work) is used to store shared data common across users or compute jobs. Compute nodes in the cluster can read from the WORK file system but cannot write to it; as a result it is referred to as "near-line storage".

All group work directories have a 500GB quota.

The PROJECT file system (/project) is for longer-term user or shared group storage of active job data. Compute project data is considered active if it is needed for current or ongoing compute jobs. The PROJECT file system is not an archive for legacy job data; it is referred to as "offline storage" because it is not accessible by compute nodes.

All group project directories have a 1TB quota.
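Taken together, these file systems imply a staging pattern: inputs live on WORK or PROJECT, jobs run against SCRATCH, and results are copied off before the ephemeral space is cleared. A minimal shell sketch follows; the group directory names are placeholders, and the exact quota-reporting command may differ on AUHPCS.

    # Check consumption against the 20GB home quota
    du -sh "$HOME"

    # Stage inputs from near-line WORK storage into ephemeral SCRATCH
    mkdir -p "/scratch/$USER/myjob"
    cp -r /work/mygroup/inputs "/scratch/$USER/myjob/"

    # ... compute jobs read and write /scratch/$USER/myjob ...

    # Preserve results on PROJECT storage before SCRATCH is cleared
    cp -r "/scratch/$USER/myjob/results" /project/mygroup/myjob-results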

Cluster Nodes

Job submission nodes

Job submission nodes allow users to authenticate to the cluster and are sometimes referred to as "login nodes". They also provide the applications required for scripting, submitting, and managing batch compute jobs. Batch compute jobs are submitted to the cluster work queue; the user then waits for the job to be scheduled and run when the requested compute resources become available.

Users require an AU NetID that has been provisioned for AUHPCS, and a device configured for Duo two-factor authentication to access job submission nodes.
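A batch job is described in a script and handed to the scheduler from a job submission node. The sketch below assumes the SLURM workload manager listed under Documentation later on this page; the resource values and module name are illustrative rather than AUHPCS defaults.

    #!/bin/bash
    #SBATCH --job-name=example        # name shown in the work queue
    #SBATCH --nodes=1                 # one compute node
    #SBATCH --ntasks=1                # one process
    #SBATCH --cpus-per-task=4         # four cores for that process
    #SBATCH --mem=8G                  # memory for the whole job
    #SBATCH --time=01:00:00           # wall-clock limit (HH:MM:SS)
    #SBATCH --output=%x-%j.out        # log file named <jobname>-<jobid>

    module load python                # illustrative module name
    cd "/scratch/$USER/myjob"         # run from ephemeral scratch space
    python analyze.py

The script is submitted with "sbatch job.sh", and its place in the work queue can be checked with "squeue -u $USER".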

Data transfer nodes

Data transfer nodes provide access to user file systems in the cluster. Their role is to facilitate high speed transfer of data across those file systems within the cluster. These nodes can also be used for the transfer of data in and out of the cluster.

Users require an AU NetID that has been provisioned for AUHPCS, and a device configured for Duo two-factor authentication to access data transfer nodes.
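A minimal transfer sketch using rsync over SSH is shown below; the data transfer node hostname is a placeholder, not the real AUHPCS node name.

    # Push a local data set into cluster scratch through a data transfer node
    rsync -avP ./dataset/ netid@dtn.example:/scratch/netid/dataset/

    # Pull finished results back out of the cluster
    rsync -avP netid@dtn.example:/project/mygroup/results/ ./results/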

Compute node profiles

General Intel compute nodes:

  • Model: Dell PowerEdge R440 Server
  • Processors: (2) Intel Xeon Silver 4210R (20 Cascade Lake cores) 2.4GHz, 13.75MB cache
  • Memory: 96GB DDR4-2400
  • Local scratch space: 960GB SSD
  • Number of nodes: 18

These nodes are candidates for bioinformatics, genomics, population science, mathematics, chemistry, and physics workloads with the most modest resource needs.

Middle memory Intel compute nodes:

  • Model: Dell PowerEdge R640 Server
  • Processors: (2) Intel Xeon Gold 5218R (40 Cascade Lake cores) 2.1GHz, 27.5MB cache
  • Memory: 768GB DDR4-2666
  • Local scratch space: 960GB SSD
  • Number of nodes: 8

These nodes are candidates for bioinformatics, genomics, population science, mathematics, chemistry, physics, and some modeling workloads with additional resource needs. These nodes are also likely suitable for pharmaceutical, molecular biology, and simulation workloads.

High memory Intel compute nodes:

  • Model: Dell PowerEdge R640 Server
  • Processors: (2) Intel Xeon Gold 5220R (48 Cascade Lake cores) 2.2GHz, 35.75MB cache
  • Memory: 1.53TB DDR4-2666
  • Local scratch space: 1.92TB SSD
  • Number of nodes: 2

These nodes are candidates for bioinformatics, genomics, population science, mathematics, chemistry, physics, modeling, pharmaceutical, molecular biology, and simulation workloads with the largest resource needs.

NVIDIA Quadro RTX - Intel compute nodes:

  • Model: Dell PowerEdge R740XD Server
  • Processors: (2) Intel Xeon Gold 6246R (32 Cascade Lake cores) 3.4GHz, 35.75MB cache
  • GPUs: (3) NVIDIA Quadro RTX 6000 (Turing, CUDA/Tensor 13,824/1,728 cores)
  • Memory: 768GB DDR4-2933 (CPU), 72GB GDDR6 (GPU)
  • Local scratch space: 1.92TB SSD
  • Number of nodes: 2

These nodes are candidates for data sciences, physics and life science modeling, artificial intelligence, inference, and simulation workloads with modest resource needs. These systems also include hardware features that can be used to accelerate complex simulations of the physical world such as particle or fluid dynamics for scientific and data visualization. They could also be used for film, video, and graphic rendering, or even special effects workloads.

NVIDIA Tesla T4 - Intel compute nodes:

  • Model: Dell PowerEdge R740XD Server
  • Processors: (2) Intel Xeon Gold 6246R (32 Cascade Lake cores) 3.4GHz, 35.75MB cache
  • GPUs: (2) NVIDIA Tesla T4 (Turing, CUDA/Tensor 5,120/640 cores)
  • Memory: 768GB DDR4-2933 (CPU), 32GB GDDR6 (GPU)
  • Local scratch space: 1.92TB SSD
  • Number of nodes: 2

These nodes are candidates for mathematics, data sciences, artificial intelligence, inference, machine learning, deep learning, and simulation workloads with modest resource needs.

NVIDIA A100 - AMD compute node:

  • Model: NVIDIA DGX A100 P3687
  • Processors: (2) AMD EPYC 7742 (128 Rome cores) 2.25GHz, 256MB cache
  • GPUs: (8) NVIDIA A100 Tensor Core (Ampere, CUDA/Tensor 55,296/3,456 cores)
  • Memory: 1TB DDR4-3200 (CPU), 320GB HBM2e (GPU)
  • Local scratch space: 15TB NVMe
  • Number of nodes: 1

This node provides the greatest end-to-end HPC platform performance in the cluster. It offers many enhancements that deliver significant speedups for large-scale artificial intelligence, inference, deep learning, data analytics, and digital forensic workloads.
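GPU resources on any of the profiles above are requested through the batch scheduler rather than by selecting a node directly. Below is a minimal sketch using SLURM's generic-resource syntax; the gres label, counts, and module name are assumptions, and AUHPCS may use different partition or resource names.

    #!/bin/bash
    #SBATCH --job-name=gpu-example
    #SBATCH --nodes=1
    #SBATCH --gres=gpu:1              # request one GPU on the allocated node
    #SBATCH --mem=32G
    #SBATCH --time=02:00:00

    nvidia-smi                        # record which GPU the scheduler assigned
    module load cuda                  # illustrative module name
    ./my_gpu_application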

Software

Red Hat system utility and application support

Research Technology systems administration staff can assist with most Red Hat Linux operating system, application, and system utility support. Enterprise support for Red Hat Linux can be extended through professional services if needed. Because there is often commonality across Linux distributions, Research Technology may also be able to help support applications and utilities on other distributions.

Job submission script support

Research Technology HPC systems engineering and application staff can assist with many job submission script composition and troubleshooting tasks. However, job submission script issues and debugging can be complex, and support will often be a cooperative effort between users and staff.

Software delivery

Software will be made available on the cluster using the following methodologies; a short command-line sketch follows the list.

  • Installation by users in their home directories
  • Installation into the cluster as an LMOD environment module
  • Installation in an end-user-managed Singularity container
  • Installation on an HPCS-specific network application server
  • External hosting for some applications requiring GPU resources
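Two of these paths, LMOD environment modules and Singularity containers, look like the following from the command line. The module name and container image are illustrative, not actual AUHPCS catalog entries.

    # Software published as LMOD environment modules
    module avail                      # list modules available on the cluster
    module load samtools              # illustrative module name
    module list                       # confirm what is loaded

    # Software packaged in a user-managed Singularity container
    singularity pull docker://ubuntu:22.04                  # fetch an image
    singularity exec ubuntu_22.04.sif cat /etc/os-release   # run a command inside it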

Documentation

  • BASH scripting 
  • Environment modules (LMOD)  
  • Singularity containers  
  • Building apps from source  
  • SLURM workload manager  

Access

To access AUHPCS, researchers, faculty, or staff must meet the following requirements:

  • Have an active AU NetID and a device configured for Duo two-factor authentication.
  • Register as a Principal Investigator (PI) using the iLAB “High Performance Computing and Parallel Computing Services Core” page, or be added to an existing project by a registered PI.
  • Complete the training course(s) covering basic Linux competency (if required) and AUHPCS cluster concepts, use, and workflow.

Once access is granted for approved compute projects, adherence to AUHPCS governance policies is a continuing requirement.

Get Help

Support

Consultation and assistance with HPC Services from the ITSS Research Technology group can be requested through the standard AU enterprise support services. Requests should be directed to the "Research HPC Services" assignment group.

Software Support

HPCS will make every effort to provide some level of support for the scientific software installed in the cluster. However, because many of these packages are open source, standard vendor support is generally not available; in those cases support will be a collaborative effort leveraging HPCS staff experience, HPCS staff research, and end-user self-support. For purchased software that includes vendor support, HPCS staff will liaise with end users and vendors. Scientific software that requires funding for licenses or support, and for which the university is not already licensed, must be purchased by the requesting department.
