Dense Memory Cluster (DMC)

The DMC at the Alabama Supercomputer Center has 2360 CPU cores and 14 terabytes of distributed memory. Each compute node has a local disk, up to 3.5 terabytes of which is accessible as /tmp. Also attached to the DMC is a high performance GPFS storage cluster, which provides 45 terabytes of storage accessible as /scratch from every node. Home directories and third party applications share 137 terabytes on a separate GPFS volume.
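
Jobs that stage large files can check free space on the node-local /tmp disk and the shared /scratch volume at run time. The following is a minimal host-side C++ sketch, not DMC-specific code; only the two paths come from the description above:

    #include <filesystem>
    #include <iostream>

    int main() {
        // Report available space on the node-local disk and the
        // shared GPFS scratch volume before staging job data.
        for (const char *path : {"/tmp", "/scratch"}) {
            std::error_code ec;
            auto info = std::filesystem::space(path, ec);
            if (ec) {
                std::cerr << path << ": " << ec.message() << "\n";
                continue;
            }
            std::cout << path << ": "
                      << info.available / (1024.0 * 1024 * 1024 * 1024)
                      << " TB available\n";
        }
        return 0;
    }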

The machine is physically configured as a cluster of SMP boards with 20, 24, 36, or 192 CPU cores each. Thirty-nine nodes have 2.5 GHz Intel 10-core Xeon Ivy Bridge processors and 128 gigabytes of memory. Twelve nodes have 2.1 GHz 18-core Broadwell processors and 128 gigabytes of memory. One node has 2.1 GHz Skylake-SP processors and 6 terabytes of memory. Twenty-four nodes have 2.7 GHz 18-core Skylake-SP processors and 48 gigabytes of memory. One node has 2.3 GHz Intel 12-core Haswell processors and 12 gigabytes of memory. One node has 2.2 GHz 12-core Broadwell processors and 12 gigabytes of memory. One node has a 1.3 GHz Intel 64-core Knights Landing preproduction processor and 94 gigabytes of memory. The login node is an 8-core virtual machine that emulates Ivy Bridge but runs on Haswell hardware.
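
Because core counts differ from node type to node type, software that sizes its thread pool at run time should query the node it lands on rather than hard-code a count. A generic C++ sketch (nothing here is DMC-specific):

    #include <iostream>
    #include <thread>

    int main() {
        // Report how many hardware threads the current node exposes,
        // so a job can size its worker pool to the node it was given.
        unsigned int n = std::thread::hardware_concurrency();
        std::cout << "Hardware threads on this node: " << n << "\n";
        return 0;
    }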

The DMC has 18 NVIDIA GPU (Graphics Processing Unit) chips: three nodes are configured with four Tesla K20m cards each, and two nodes hold the remaining six Tesla P100 cards, each with 16 gigabytes of memory. These multicore GPU chips are similar to those in video cards, but are installed as math coprocessors. This can give significant performance advantages for software that has been adapted to use them (see the CUDA sketch after the capacity figures below). Thus the processing capacity of the DMC cluster is:

Conventional processing capacity - 135 TFLOPs
Single precision GPU capacity - 63 TFLOPs
Double precision GPU capacity - 25 TFLOPs

Total DMC capacity - 198 TFLOPs

The total combines the conventional capacity with the single precision GPU capacity (135 + 63); the double precision figure is an alternate rating of the same GPU hardware, not additional capacity.
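
As a concrete illustration of what "adapted to use these processors" means, the sketch below offloads a vector sum to a GPU with CUDA. It is a minimal, self-contained example rather than code from the DMC application stack; the array size and kernel name are arbitrary:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each GPU thread adds one pair of elements; the card acts as a
    // math coprocessor while the CPU orchestrates the work.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;            // 1M elements (arbitrary)
        size_t bytes = n * sizeof(float);

        float *a, *b, *c;                 // unified memory, visible to CPU and GPU
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(a, b, c, n);
        cudaDeviceSynchronize();          // wait for the coprocessor to finish

        printf("c[0] = %f (expect 3.0)\n", c[0]);
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

Compiled with nvcc, the same program runs on either the K20m or the P100 nodes; double precision work would use double in place of float, at the lower double precision rating listed above.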