The following is a more technical description of the previous and current computing systems at the Alabama Supercomputer Center. In reading this discussion, please note the units behind the numbers. Over the years, measurements of data capacity have gone from megabytes to gigabytes to terabytes, and measurements of processing ability have shifted from MFLOPs to GFLOPs to TFLOPs.

The first supercomputer at ASC was the Cray X-MP (Figure 10.2). This computer still sits on the floor, purely as a museum piece. It has one plexiglass panel, so that students touring the center can see the mass of hand-wired connections inside.

The Cray X-MP/24 (circa 1987) was a 64 bit computer with two central processing units (CPUs), eight vector processing units, and 32 megabytes of memory. This gave a maximum result rate of 117 MFLOPs (million floating point operations per second). It had a 256 megabyte solid state disk for temporary working files and 15 gigabytes of hard disk space. The operating system on the X-MP was UNICOS, Cray's implementation of the Unix operating system.

Users would access the Cray X-MP from dumb terminals (a keyboard and screen with no memory or computing ability of their own) that were either directly connected to a local server or dialed into one via a phone modem. The local servers on each campus were minicomputers, either VAX 8250 systems or, at some of the smaller sites, IBM 9370 computers. Initially ASA provided four software packages and four libraries of math and graphics functions for people who wrote their own software. Most of the "graphics" consisted of creating files that could be sent to a pen plotter or printer.

In the summer of 1992, ASA added a UniTree Mass Storage Subsystem (MSS). This was an early version of a network file system (NFS). It was connected to both the Cray X-MP and the nCUBE. It consisted of 13.7 gigabytes of disk storage attached to an IBM RS/6000 Model 530H workstation and an autoloader tape robot. The tape robot could hold 54 8mm tapes, each holding 5 gigabytes of data.

The nCUBE 2 Model 10 (circa 1991) had 128 CPUs, each on its own node (a small computer with its own memory and operating system). These nodes were arranged in a hypercube architecture. It had 464 megabytes of memory, distributed unevenly: some nodes had 16 megabytes, some had 4 megabytes, and most had 1 megabyte. There were 11 gigabytes of internal disk space. The nCUBE was accessed via a front end computer, a Sun 4/470 workstation running Unix. In contrast to the X-MP, which had two powerful CPUs, the nCUBE had 128 comparatively weak CPUs. In total, the nCUBE was about 5% less powerful than the X-MP. When it was decommissioned, the nCUBE was donated to Auburn University.
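
For readers unfamiliar with the term, a hypercube interconnect labels each of the 2^d nodes with a d bit binary address and wires together every pair of nodes whose addresses differ in a single bit; a 128 node machine like this one corresponds to a 7-dimensional hypercube, so each node has 7 direct links. The short Python sketch below illustrates the general idea only; it is not nCUBE software.

    # Sketch of a hypercube interconnect: with 128 = 2**7 nodes, each node
    # gets a 7 bit address and is wired to the 7 nodes whose addresses
    # differ from its own in exactly one bit position.
    DIMENSION = 7          # 2**7 = 128 nodes, matching the nCUBE 2 above

    def neighbors(node):
        return [node ^ (1 << bit) for bit in range(DIMENSION)]

    print(neighbors(0))       # [1, 2, 4, 8, 16, 32, 64]
    print(len(neighbors(5)))  # 7 -- every node has exactly 7 direct links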

The Cray C90 was a model C94A/264 system installed in 1994. The Cray C90 had two processors and eight vector units, which gave a maximum result rate of 960 MFLOPs. It had 512 megabytes of memory, and a 256 megabyte solid state disk. The attached disk array had 50 gigabytes of storage. At this time ASA provided 50 software packages, compilers, and math libraries. There were a little over 200 users on the system. The Cray C90 used the UNICOS operating system.

Along with the Cray C90, a StorageTek 4400 tape silo was put in place for data archival. The StorageTek used 1/2 inch 18-track tapes, which were automatically moved to the tape drives by a PowderHorn robot arm. It could store 2.1 terabytes of data. When the StorageTek was decommissioned, a buyer could not be found; its 18 foot octagonal housing is now a lawn mower shed belonging to one of the system administrators.

The Cray SV1 was installed in 1999. This was the first SV1 delivered to a customer. It was initially delivered with J90 CPUs, which were replaced with SV1 CPUs six months later. The Cray SV1 had sixteen CPUs and 32 vector processing units, which gave a maximum result rate of 1.2 GFLOPs (billion floating point operations per second). It also had 16 gigabytes of memory and 480 gigabytes of RAID-3 fibre channel disk storage. It was connected to the network via a fiber distributed data interface (FDDI) ring. At this time, ASC had some smaller servers for visualization work, a Sun SPARCstation 10 and an SGI Indigo 2. The SV1 was the last UNICOS based system at the Alabama Supercomputer Center. When it was decommissioned in 2004, the Cray SV1 was sold to a private Cray museum.

The SGI Altix system was first installed in 2004. It started out as a system with 56 CPU cores and to date has been expanded to 228 CPU cores and 1.5 terabytes of memory. With 228 cores, it has a maximum result rate of 1263 GFLOPs. The Altix uses Intel Itanium 2 CPUs running at 1.4, 1.5, or 1.6 GHz. These processors run at twice the speed suggested by the clock rate because each processor has two floating point math units. The Altix is a cluster of shared memory nodes with between 2 and 72 CPU cores and up to 465 gigabytes of memory on any given node. It has a fibre channel disk array using the SGI CXFS file system. The older nodes are Altix 350 series nodes, which support up to 16 CPU cores, and the newer nodes are Altix 450 series nodes, supporting up to 72 CPU cores. It uses the SUSE Linux operating system. The large amount of memory per node has made the Altix a valuable resource for users with jobs requiring more memory than is available on a single node of any other academic computing system in the state.
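
As a rough check on that figure, the theoretical peak is simply cores times clock times results per clock. The short Python sketch below assumes each Itanium 2 core can retire four double precision results per clock (two floating point units, each completing a combined multiply-add every cycle); that per-clock count is an assumption here rather than something stated above, but it is the one consistent with the quoted 1263 GFLOPs.

    # Back-of-the-envelope peak rate for a cluster like the Altix above.
    # Assumption: 4 floating point results per clock per core (two FP units,
    # each finishing a multiply-add every cycle).

    def peak_gflops(cores, clock_ghz, results_per_clock=4):
        """Theoretical peak in GFLOPs: cores x clock (GHz) x results per clock."""
        return cores * clock_ghz * results_per_clock

    # 228 cores, all at the slowest quoted clock, already lands near the
    # 1263 GFLOPs quoted for the full system.
    print(peak_gflops(228, 1.4))   # 1276.8

The same arithmetic with two results per clock reproduces the 634 GFLOPs quoted later for the Cray XD1: 144 CPUs at 2.2 GHz gives about 634 GFLOPs.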

The Cray XD1 was installed in 2004 and decommissioned January 1, 2009. The XD1 product line came to Cray through the acquisition of a company called OctigaBay. It used AMD Opteron CPUs, which were connected to a built-in router via the HyperTransport bus. These routers were interconnected via up to 12 InfiniBand lines per 6-node chassis, using a Cray-written protocol to form the RapidArray communication system. The Cray XD1 included 144 AMD Opteron processors running at 2.2 GHz, 240 gigabytes of memory, and 7 terabytes of shared disk. Six of the nodes had FPGAs (Field Programmable Gate Arrays) as reconfigurable coprocessors. It used the SUSE Linux operating system. The entire system had a maximum result rate of 634 GFLOPs.

The Dense Memory Cluster (DMC) was installed in 2008 and has been expanded multiple times since then. It is a fat node cluster architected at the Alabama Supercomputer Center. It was put together from components bought from Microway, Voltaire, Novell, Panasas, Spectrum, Penguin, Cisco, Dell, and other vendors. It initially retasked the disk trays purchased for the Cray XD1, running the IBM GPFS file system, until the Panasas file system server was purchased. The nodes all contain x86_64 architecture processors, including various generations of AMD Opteron and Intel Xeon chips. The node configurations range from 8 CPU cores and 24 gigabytes of memory up to 16 cores and 128 gigabytes of memory. Each node has a local /tmp disk with between 850 GB and 4 TB of temporary working space. As of October 2013, the DMC had 1800 CPU cores, 10.1 terabytes of memory, and 225 terabytes of disk space. It has a maximum result rate of 16.5 TFLOPs. It uses the SUSE Linux operating system.

The DMC was further expanded with GPU (graphics processing unit) math coprocessors. GPUs are an adaptation of the technology in graphics card chips to act as general purpose mathematics processors. The first GPUs to be installed were eight NVIDIA Tesla T10 chips, the first NVIDIA GPU to have double precision mathematics capability. Each T10 chip had a total of 240 cores, arranged into 30 multiprocessors, each with single precision cores, double precision cores, and special function units for handling transcendental functions. The second generation of GPUs installed consisted of eight NVIDIA Fermi T20 chips, each with 448 cores similarly arranged into multiprocessors.

An SGI Ultraviolet 2000, named UV, was installed at the end of 2012. It has a login node consisting of twelve processor cores, and a single large compute node with 256 processor cores and 4 TB of memory. The processors on the UV are Sandy Bridge series Intel Xeon processors. The Sandy Bridge chips support 256 bit AVX vector instructions, which can potentially give a 2X performance increase per core over the older Nehalem series Xeon chips in the DMC, which only support 128 bit SSE vector instructions. The UV is the first system at the center able to keep running with small sections of memory chips taken offline, thus minimizing the need for unscheduled shutdowns to replace failed memory DIMMs.
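
The factor of two comes directly from the register width. A small Python sketch of the arithmetic, assuming double precision (64 bit) operands:

    # Vector width arithmetic behind the "2X per core" estimate above.
    DOUBLE_BITS = 64                      # size of a double precision value

    def doubles_per_instruction(register_bits):
        return register_bits // DOUBLE_BITS

    avx = doubles_per_instruction(256)    # 4 doubles per AVX instruction
    sse = doubles_per_instruction(128)    # 2 doubles per SSE instruction
    print(avx, sse, avx / sse)            # 4 2 2.0 -- the potential speedup

Note that this 2X is a peak figure; a program only sees it when its inner loops are actually vectorized by the compiler or the programmer.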

Since the time of the Cray XD1 and SGI Altix, the supercomputers have been interconnected. Users can log in on any one of the login nodes and see the same home directory files. A Torque queue system with a Moab scheduler is used to run calculations on any one of the clusters, regardless of where the job was submitted. By the end of 2013, ASA was providing over two hundred software applications, development libraries, utilities, and compilers. There were just over 750 user accounts on the system. These systems are used every semester for teaching classes at the universities in Alabama, and are constantly in use for graduate thesis work.
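
To give a flavor of how calculations reach the clusters, a batch job on a Torque/Moab setup such as this is typically described by a short job script containing #PBS directives and handed to the qsub command. The Python sketch below is illustrative only; the job name, resource requests, and program are hypothetical, not taken from ASA's actual configuration.

    # Hypothetical example of submitting a batch job to a Torque queue managed
    # by a Moab scheduler. Only the #PBS directive syntax and the qsub command
    # are standard Torque usage; everything else is made up for illustration.
    import subprocess

    job_script = """#!/bin/bash
    #PBS -N example_job
    #PBS -l nodes=1:ppn=8
    #PBS -l walltime=02:00:00
    #PBS -j oe
    cd $PBS_O_WORKDIR
    ./my_simulation input.dat
    """

    with open("example_job.sh", "w") as f:
        f.write(job_script)

    # qsub prints a job identifier; the scheduler decides which cluster and
    # node the job actually runs on.
    print(subprocess.check_output(["qsub", "example_job.sh"], text=True))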

Over the years, the Alabama Supercomputer Center has seen a massive growth in computer processing power.  From the Cray X-MP days up to the UV / DMC configuration the CPU processing power has grown over 180,000 fold.  The memory capacity has grown 447,000 fold, and the disk capacity has grown over 20,000 fold.  This incredible growth in capacity has been mirrored by an incredible growth in demand.  The academic computing community is perpetually in an arms race situation in which a computer that could do world-class work five years ago isn’t capable of doing publishable work today.