In the mid-1980s, Alabama Governor George C. Wallace visited Japan. Over the course of that trip he became convinced that fostering technology-related industries would help position Alabama for a brighter future. Around the same time, Governor Wallace hit upon the idea of establishing a supercomputer center in Alabama as a means of accomplishing this goal. In 1985, Governor Wallace appointed the twelve-member Alabama Supercomputer Network Authority (ASNA) to oversee the final planning for and operation of the supercomputer facility and network. This organization and the attendant state budget item were officially established with the approval of the Alabama Supercomputer Authority Act in 1989. Prior to that time, all supercomputer centers were federally funded, making the Alabama Supercomputer Center the first state-funded supercomputer center in the country.

Dr. Jim Woodward, UAB Senior Vice President, was the first director of ASNA and the first Chairman of the Authority's Board of Directors. Dr. Woodward was instrumental in getting the project off the ground from the very beginning.

The directors of this fledgling organization had to start hiring a staff immediately to carry out these ambitious plans. Dr. Ray Toland was the first CEO of the Alabama Supercomputer Network, serving from 1988 to 1989. Dr. Ben B. Barnes was then hired as Chief Executive Officer of the Alabama Supercomputer Authority (ASA), and Wayne Whitmore was hired as Chief Operations Officer. The initial ASA staff was kept small, and remains small to this day, based on an early strategic decision that technical services should be outsourced via a competitively bid contract. This model fosters price efficiency through competition for the contract and allows the state to leverage the ability of professional information technology firms to adapt quickly to the changing needs of the Alabama Supercomputer Center.

The decision was made to build the Alabama Supercomputer Center in Huntsville's Cummings Research Park after the city of Huntsville donated the land. In 1987 Governor Guy Hunt attended the dedication of the building, which is still in use today (shown in Figure 10.1). The Alabama Supercomputer Center building has 3,065 square feet of computer room floor space (currently 50% open) with an additional 23,500 square feet of office, storage, and meeting space.

The first systems integration contractor was Boeing Computer Services, under the direction of program manager Dr. Melvin Scott. Boeing began work in 1987 to install a Cray X-MP supercomputer in conjunction with the completion of the building. The Cray X-MP went into operation in February 1988. The managerial, help desk, system administration, and network staff were located at the Alabama Supercomputer Center, but the majority of direct support for the system's users was provided by campus analysts: Ph.D.-level experts on various supercomputer applications who were physically located on the campuses of the research universities.

The Cray X-MP/24 had two central processing units (CPUs), eight vector processing units, and 32 megabytes of memory. It was cooled by chilled water from a cooling tower on the roof of the building. The computer room floor was covered with washing-machine-sized disk drives and refrigerator-sized reel-to-reel tape drives. It cost around $6 million and was less powerful than most laptop computers sold today. A few years later, the Cray was upgraded from an X-MP/24 to an X-MP/216, which increased the memory to 128 megabytes.

A second supercomputer, an nCUBE 2 Model 10, was put into service in 1991. It was a massively parallel computer by the standards of its day, with 128 CPUs arranged in a hypercube topology, although the individual processors were relatively weak even for that era. In retrospect, the nCUBE was a computer ahead of its time in being designed to run programs on many CPUs at once; it was not until 2004 that the majority of the calculations run at the Alabama Supercomputer Center used multiple processors.
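In a hypercube interconnect, each of the 2^d nodes is given a d-bit address, and two nodes are wired directly together exactly when their addresses differ in a single bit. A 128-node machine like the nCUBE 2 therefore forms a 7-dimensional hypercube in which every node has 7 direct neighbors and any two nodes are at most 7 hops apart. The short sketch below (illustrative C, not actual nCUBE software; the node ID is an arbitrary example) shows how a node's neighbors can be enumerated simply by flipping one address bit at a time.

/* Illustrative sketch of hypercube addressing (not nCUBE-specific code). */
#include <stdio.h>

int main(void)
{
    const int dim  = 7;    /* 2^7 = 128 nodes, as on the nCUBE 2      */
    const int node = 42;   /* hypothetical example node ID (0..127)   */

    printf("Neighbors of node %d in a %d-dimensional hypercube:\n", node, dim);
    for (int bit = 0; bit < dim; bit++)
        printf("  %d\n", node ^ (1 << bit));  /* flip one address bit */

    return 0;
}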

In 1993 the professional services contract was rebid and won by Nichols Research Corp. This brought a change of much of the technical staff, who now worked under the direction of Nichols program manager David Ivey.

In these early days of the Internet, connections between machines were made with command-line utilities such as "telnet" and "ftp". In 1991 it became possible to browse public network sites with a non-graphical, menu-driven client called Gopher. In 1993, Mosaic, the first widely used graphical web browser, was introduced. At this time, the Alabama Supercomputer Network was the only statewide network in Alabama. ASA began providing network services to K-12 schools, junior colleges, libraries, and other institutions, allowing them to connect to one another and to the Internet.

In 1994 the Cray X-MP was decommissioned and replaced with a Cray C90, specifically a C94A/264 system. Like the Cray X-MP, the Cray C90 had two processors and eight vector units, but they ran much faster than those of the older machine. It also provided a fourfold increase in memory, to 512 megabytes. Along with the Cray C90, a StorageTek 4400 tape silo was put in place for data archival.

In 1999, the Cray C90 was replaced by a Cray SV1. The Cray SV1 had 16 CPUs, 32 vector processing units, and 16 gigabytes of memory. It was the last of the "big iron" machines that looked about the same the day they were decommissioned as the day they were installed. Subsequent systems have been clusters that can be incrementally expanded each year as budget and demand dictate.

Late in 1999, Nichols Research Corp was merged into CSC (Computer Sciences Corporation). Although the supercomputer center staff stayed the same, they were now under a much larger corporate umbrella.

Shortly after the introduction of the Cray SV1, the Alabama Supercomputer Authority went through some lean times. Dr. Barnes retired in September 1998, and there was a lag of several years before Mr. Randy Fulmer was hired as the new CEO in May 2002. In the intervening years, the budget was cut significantly. Although the Cray SV1 and the network continued to be solid workhorses, there were a number of significant cuts in services, including the loss of the campus analyst program.

In 2004 the professional services contract was rebid and awarded to CSC. Almost immediately the Cray SV1 was decommissioned and replaced by a Cray XD1 and an SGI Altix 350 system, both initially funded by NASA educational outreach grants.

The SGI Altix 350 is a cluster of shared-memory nodes, initially purchased with 56 CPUs. It has been expanded incrementally each year with more processors, memory, and disk capacity; the expansions made from 2006 on have been in the form of Altix 450 series nodes. As this account is being written, the Altix cluster has 228 CPU cores, 1.5 terabytes of memory, and 10.8 terabytes of disk space.

The Cray XD1 was a distributed-memory cluster, initially purchased with 144 processors. Over its lifetime it gained FPGA co-processors along with file system and memory expansions, eventually containing six FPGA chips, 240 gigabytes of memory, and 7 terabytes of disk space. The Cray XD1 was decommissioned in January 2009, shortly after Cray discontinued support for this model.

In anticipation of the Cray XD1 shutdown, work on building up a new cluster started in 2008. This is a locally architected, fat-node cluster called the Dense Memory Cluster (DMC). At the time of this writing, the DMC is still growing and has 1,800 CPU cores, 10.1 terabytes of memory, and 225 terabytes of disk space. The DMC has been further enhanced by the addition of a small test bed of GPU math coprocessor chips.

An SGI Ultraviolet 2000 was received at the end of 2012. It was purchased as a replacement for the SGI Altix systems, which would be decommissioned a few months later. The Ultraviolet was configured with a small login node and a single large compute node consisting of 256 processor cores and 4 terabytes of memory.

The Alabama Supercomputer Center plays the role of computing technology leader in Alabama. Most of its high performance computing systems have had high-end features not available in other academic computing facilities in the state, such as vector processors, a hypercube interconnect, shared memory, fibre channel disks, solid state disks, a RapidArray interconnect, FPGAs, and GPUs. The Cray SV1 was the first SV1 system installed in the country, and the Cray XD1 installation was a tie: the Alabama Supercomputer Center and Oak Ridge National Laboratory received the first XD1 systems in the country on the same day. In testing and using these new systems, the work done at the Alabama Supercomputer Center has influenced the way these systems are manufactured and the features present in their operating systems.

In recent years a number of changes have been made to the physical facilities at the Alabama Supercomputer Center. Additional power capacity has been added through the replacement and expansion of UPS systems. Longer-term operation under disaster conditions has been assured through the addition of a diesel generator. The original air conditioning systems have been replaced and expanded to accommodate more equipment on the floor. These improvements have made it possible for the Alabama Supercomputer Center to provide more services than ever before.