STAMPEDE

Dell PowerEdge C8220 Cluster with Intel Xeon Phi coprocessors


Stampede is one of the largest computing systems in the world for open science research. As an NSF petascale HPC acquisition, this system provides unprecedented computational capabilities to the national research community, enabling breakthrough science that has never before been possible. The scale of Stampede delivers opportunities in computational science and technology research, from highly parallel algorithms to high-throughput computing, and from scalable visualization to next-generation programming languages.

Stampede system components are connected via a fat-tree FDR InfiniBand interconnect. One hundred sixty racks house the compute nodes, each with two eight-core sockets and one of the new Intel Xeon Phi coprocessors. Additional racks house login, I/O, large-memory, and general hardware-management nodes. Each compute node is provisioned with local storage, and a high-speed Lustre file system is backed by 76 I/O servers. Stampede also contains 16 large-memory nodes, each with 1 TB of RAM and 32 cores, and 128 standard compute nodes, each with an NVIDIA Kepler K20 GPU, giving users access to large shared-memory computing and remote visualization capabilities, respectively. Users interact with the system via multiple dedicated login servers and a suite of high-speed data servers. The cluster resource manager for job submission and scheduling is SLURM (Simple Linux Utility for Resource Management).
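To make the coprocessor model above concrete, here is a minimal sketch of offloading work from a host CPU to a Xeon Phi, assuming Intel's compiler with its Language Extensions for Offload (the #pragma offload directive); the file name, array names, and sizes are illustrative, not taken from Stampede documentation.

    /* vecadd_mic.c -- hypothetical example: offload a vector addition
     * to an Intel Xeon Phi coprocessor via Intel's Language Extensions
     * for Offload. Build with the Intel compiler, e.g.:
     *   icc -openmp vecadd_mic.c -o vecadd_mic
     * If no coprocessor is present, the offload runtime falls back to
     * running the loop on the host. */
    #include <stdio.h>

    #define N 1000000

    /* Statically sized global arrays, so the offload runtime knows how
     * much data to transfer without explicit length clauses. */
    static float a[N], b[N], c[N];

    int main(void)
    {
        int i;

        for (i = 0; i < N; i++) {
            a[i] = (float)i;
            b[i] = 2.0f * (float)i;
        }

        /* Copy a and b to the first coprocessor (mic:0), run the loop
         * across its threads, then copy c back to the host. */
        #pragma offload target(mic:0) in(a, b) out(c)
        #pragma omp parallel for
        for (i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        printf("c[%d] = %f\n", N - 1, c[N - 1]);
        return 0;
    }

On a SLURM-managed system such as this one, a binary like this would typically be submitted to the compute nodes with sbatch rather than run on a login server.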


Any researcher at a U.S. institution can submit a proposal to request an allocation of cycles on the system. The request must describe the research, justify the need for such a powerful system to achieve new scientific discoveries, and demonstrate that the proposer's team has the expertise to utilize the resource effectively.

  • 90% of the system is dedicated to XSEDE
  • 10% of the system is allocated at the discretion of the TACC Director in support of open science projects, including:
    • Researchers at UT Austin, UT System, and Texas higher education institutions; and
    • Members of TACC's Science & Technology Affiliates for Research (STAR) Program.

To submit a proposal to request an allocation, please visit the XSEDE website.

Researchers at UT Austin, UT System, and Texas higher education institutions, please contact Chris Hempel.

Thank you to everyone who has made Stampede possible: National Science Foundation, The University of Texas at Austin, Dell Inc., Intel Corporation, Mellanox Technologies, Clemson University, Cornell University, The Ohio State University, The University of Texas at El Paso, Indiana University, The University of Colorado at Boulder, and the Institute for Computational Engineering and Sciences (also at The University of Texas at Austin).

For more information about using Stampede, see the Stampede User Guide.

System Name: Stampede
Host Name: stampede.tacc.utexas.edu
Operating System: Linux (CentOS distribution)
Number of Nodes: 6,400
Number of Processing Cores: 102,400
Total Memory: 205 TB
Peak Performance: 9.5 PF (2.2 PF from Xeon E5 processors, 7.3 PF from Xeon Phi coprocessors)
Total Disk: 14 PB (shared), 1.6 PB (local)
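As a rough consistency check on these figures (the 32 GB of RAM per compute node used below is an assumption implied by the totals above, not a number stated on this page):

    \begin{align*}
      6{,}400 \text{ nodes} \times 2 \text{ sockets} \times 8 \text{ cores} &= 102{,}400 \text{ cores} \\
      6{,}400 \text{ nodes} \times 32\,\text{GB} &\approx 205\,\text{TB total memory} \\
      2.2\,\text{PF (Xeon E5)} + 7.3\,\text{PF (Xeon Phi)} &= 9.5\,\text{PF peak}
    \end{align*}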