University of Oregon Academic HPC Cluster

Audience: Faculty/Staff, Researcher, Student, GTF

Hardware Description

The Academic High-Performance Computing (HPC) Cluster comprises 20 64-bit nodes running CentOS 5.5:

  • 1 Frontend node is reserved for administrative tasks.
  • 1 Login node is available for direct user login and execution of interactive processes.
  • 2 PVFS2-IO nodes provide 700 GB of HPC file storage via PVFS2 (presented at /scratch-pvfs).
  • 16 Compute nodes are available for batch job execution.

Each Login and Compute node has two quad-core, hyperthreaded Intel Xeon E5520 (Nehalem/Gainestown) processors running at 2.27 GHz, 24 GB of RAM, and 72 GB of local disk. Nodes are connected by three gigabit copper Ethernet networks: one is dedicated to storage traffic, while the remaining two carry inter-process communication and PVFS2 I/O.

Note that due to hyperthreading, Login and Compute nodes appear to have 16 cores. For CPU-intensive work, you may wish to run no more than 8 threads or processes per node, i.e. one per physical core.
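
As an illustration, if your program is parallelized with OpenMP you can cap its thread count with the standard OMP_NUM_THREADS environment variable before launching it (the program name below is a placeholder; other applications may provide their own thread-count options):

    # Limit an OpenMP program to one thread per physical core
    export OMP_NUM_THREADS=8
    ./my_simulation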

Logging In

The cluster is open to all UO students, faculty, and staff with valid Duck ID credentials. Make sure that your account is provisioned for shell access before attempting to log in.

Users may SSH to hpc.uoregon.edu or login.hpc.uoregon.edu. Either one will grant you access to the login node, from which you can run interactive tasks or submit jobs to be executed on the compute nodes.
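
For example, from any SSH client (replace duckid with your own Duck ID):

    ssh duckid@hpc.uoregon.edu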

User home directories are shared with other IS Unix systems such as shell.uoregon.edu and sftp.uoregon.edu. Standard home directory quotas apply.
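
To check your current usage against that quota, the standard quota command can typically be run from the login node (the exact output depends on how quotas are enforced for the shared home directories):

    quota -s    # -s reports usage in human-readable units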

Using the Cluster

The login node is intended to be used for execution of interactive applications such as Matlab or SAS, and for preparation and review of jobs executed on the compute nodes.
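
For example, Matlab can be started interactively over SSH without its graphical desktop (assuming Matlab is already on your PATH on the login node):

    matlab -nodisplay -nosplash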

Any computationally intensive or long-running process should be submitted to the resource scheduling system for execution on a dedicated compute node. Resources on the login node are not managed by the scheduler and may be heavily contended. Use the resource scheduler whenever possible; jobs submitted through it are granted exclusive access to their compute nodes.
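
The exact job-script syntax depends on the scheduler installed on the cluster, which is not named on this page. As a sketch only, a PBS/Torque-style submission script might look like the following; consult the scheduler documentation for the directives actually supported here:

    #!/bin/bash
    # Hypothetical PBS/Torque-style job script; directives are illustrative only
    #PBS -N example_job           # job name
    #PBS -l nodes=1:ppn=8         # one node, 8 processors (one per physical core)
    #PBS -l walltime=01:00:00     # one-hour run-time limit

    cd $PBS_O_WORKDIR             # start in the directory the job was submitted from
    ./my_simulation               # placeholder for your own program

With a PBS-style scheduler, such a script would typically be submitted with qsub and monitored with qstat.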

Please see the related documentation pages for detailed information on using the available software packages and submitting jobs to the resource scheduler.

Monitoring Resources

A status page showing system state, resource utilization levels, and queued jobs is available at http://hpc.uoregon.edu/
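
From the login node you can also query the scheduler directly. On PBS/Torque- or Grid Engine-style schedulers, for example, qstat lists queued and running jobs:

    qstat -u $USER    # show only your own jobs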