Wilson HPC Computing Facility


Cluster Status

New Users

User Authentication

Kerberos & SSH Troubleshooting

CPU Cluster: Building code - The Runtime Environment

CPU Cluster: Submitting jobs to the batch system

CPU Cluster: Hardware Details

PHI & GPU Cluster

KNL Cluster

All Clusters: Filesystem Details

All Clusters: Allocated Projects

The Wilson cluster for accelerator simulations is a joint acquisition by the Accelerator Physics Center, Computing Sector, and Technical Division. The cluster is used for development and testing of accelerator and radio-frequency simulation codes, calculations that can only be done with tightly coupled parallel processing techniques. The nodes (described below) are all connected by a high-speed (double-data-rate) InfiniBand network fabric. For maximum flexibility, codes on the cluster use the Open MPI package to control parallel calculations, which can take advantage of any parallel network hardware.

In 2005 the cluster started as 20 dual-socket, single-core (2 cores/node, 40 cores total) Intel Xeon CPU-based systems, which delivered 0.13 TFlop/s Linpack performance. In 2010 the cluster was upgraded with the addition of 26 dual-socket, six-core (12 cores/node, 312 cores total) Intel Westmere CPU-based systems, which delivered 2.37 TFlop/s Linpack performance. In 2011 the cluster (pictured below) was upgraded with the addition of 34 quad-socket, eight-core (32 cores/node, 1088 cores total) AMD Opteron CPU-based systems, which delivered 6.2 TFlop/s Linpack performance. With that upgrade, the original 20 dual-socket, single-core systems were decommissioned. In early 2014 the cluster was upgraded with the addition of four Intel-based hosts with four Intel Xeon Phi 5110P accelerators per host and two Intel-based hosts with two NVIDIA Kepler GPUs per host. More information about the Phi and GPU hosts is available here.

[Photo: the cluster after the five-rack upgrade]

The URL for this page is http://wilsonweb.fnal.gov/index.shtml

Contact: Ken Schumacher

Last modified: Thu June 2 15:20 CDT 2016