History of ASA

The Alabama Supercomputer Authority has a long history of technical leadership and service to the state of Alabama. The sections below look at that history from several different perspectives. The first section gives an overview of ASA's development in terms of people and dates, with only a cursory discussion of the technology. The second section discusses the supercomputers that have been at the Alabama Supercomputer Center. This is followed by sections on the growth of the Alabama Research and Education Network and technological developments made at ASC. The final section discusses how the role that ASA plays in Alabama has evolved over time.

The Birth of a Supercomputer Center

In the mid-1980s, Alabama Governor George C. Wallace visited Japan. Over the course of that trip he became convinced that fostering technology-related industries would help position Alabama for a brighter future. Around this time, Governor Wallace hit upon the idea of establishing a supercomputer center in Alabama as a means of accomplishing this goal. In 1985, Governor Wallace appointed a twelve-member organization called the Alabama Supercomputer Network Authority (ASNA) to oversee the final planning for and operation of the supercomputer facility and network. This organization and the attending state budget item were officially established with the approval of the Alabama Supercomputer Authority Act in 1989. Prior to this time, all supercomputer centers had been federally funded, making the Alabama Supercomputer Center the first state-funded supercomputer center in the country.

Dr. Jim Woodward, UAB Senior Vice President, was the first director of ASNA and the first Chairman of the Authority's Board of Directors. Dr. Woodward was instrumental in getting the project off the ground.

The directors of this fledgling organization had to immediately start hiring a staff to carry out these ambitious plans. Dr. Ray Toland was the first CEO of the Alabama Supercomputer Network, serving from 1988 to 1989. Dr. Ben B. Barnes was then hired as the Chief Executive Officer of the Alabama Supercomputer Authority, and Wayne Whitmore was hired as the Chief Operations Officer. The initial ASA staff was kept small, and remains small to this day. This was based on an early strategic decision that technical services should be outsourced via a competitively bid contract. This arrangement fosters price efficiency through competition for the contract and allows the state to leverage the abilities of professional information technology firms to adapt quickly to the changing needs of the Alabama Supercomputer Center.

The decision was made to build the Alabama Supercomputer Center in Huntsville's Cummings Research Park after the city of Huntsville donated the land. In 1987 Governor Guy Hunt attended the dedication of the building, which is still in use today (shown in Figure 10.1). The Alabama Supercomputer Center building has 3,065 square feet of computer room floor space (currently 50% open) with an additional 23,500 square feet of office, storage, and meeting space.

The first systems integration contractor was Boeing Computer Services under the direction of program manager Dr. Melvin Scott. Boeing began work in 1987 to install a Cray X-MP supercomputer in conjunction with the completion of the building. The Cray X-MP went into operation in February 1988. The managerial, help desk, system administration, and network staff were located in the Alabama Supercomputer Center. Most of the direct support for users of this system was provided by campus analysts, Ph.D.-level experts on various supercomputer applications who were physically located on the campuses of the research universities.

The Cray X-MP/24 had two central processing units (CPUs), eight vector processing units, and 32 megabytes of memory. It was cooled by chilled water from a cooling tower on the roof of the building. The computer room floor was covered with washing-machine-sized disk drives and refrigerator-sized reel-to-reel tape drives. It cost around $6 million and was less powerful than most laptop computers sold today. A few years later, the Cray was upgraded from a Cray X-MP/24 to an X-MP/216, which increased the memory to 128 megabytes.

A second supercomputer, an nCUBE 2 Model 10, was put into service in 1991. It was a massively parallel computer that had 128 CPUs arranged in a hypercube topology, although the individual processors were relatively weak even by the standards of their day. In retrospect, the nCUBE was a computer ahead of its time in being designed to run programs on multiple CPUs at once. It wasn't until 2004 that the majority of the calculations run at the Alabama Supercomputer Center utilized multiple processors.

In 1993 the professional services contract was rebid and won by Nichols Research Corp. This brought a change in much of the technical staff, now under the direction of Nichols program manager David Ivey.

In these early days of the Internet, connections between machines were made with command-line utilities like "telnet" and "ftp". In 1991 it became possible to access public network sites with a non-graphical client called Gopher. In 1993, Mosaic, the graphical web browser that popularized the World Wide Web, was introduced. At this time, the Alabama Supercomputer Network was the only state-wide network in Alabama. ASA began providing network services to K-12 schools, junior colleges, libraries, and other institutions, thus allowing them to connect to one another and to the Internet.

In 1994 the Cray X-MP was decommissioned and replaced with a Cray C90, specifically a C94A/264 system. Like the Cray X-MP, the Cray C90 had two processors and eight vector units, but they ran much faster than those on the older Cray X-MP. It also gave a fourfold increase in memory, up to 512 megabytes. Along with the Cray C90, a StorageTek 4400 tape silo was put in place for data archival.

In 1999, the Cray C90 was replaced by a Cray SV1. The Cray SV1 had sixteen CPUs and 32 vector processing units. It also had 16 gigabytes of memory. The Cray SV1 was the last of the "big iron" machines that looked about the same the day they were decommissioned as the day they were installed. Subsequent systems have been clusters that can be incrementally expanded each year as budget and demand dictate.

Late in 1999, Nichols Research Corp was merged into CSC (Computer Sciences Corporation). Although the supercomputer center staff stayed the same, they were now under a much larger corporate umbrella.

Shortly after the introduction of the Cray SV1, the Alabama Supercomputer Authority went through some lean times. Dr. Barnes retired in September 1998 and there was a lag of several years before Mr. Randy Fulmer was hired as the new CEO in May of 2002. In the intervening years, the budget was cut significantly. Although the Cray SV1 and network continued to be solid workhorses, there were a number of significant cuts in services, including the loss of the campus analyst program.

The Alabama Supercomputer Authority has seen a resurgence of growth under the leadership of CEO Randy Fulmer, formerly an executive with BellSouth. In addition to expansions of the network and HPC systems, ASA has added new services such as hosting, disaster recovery, distance learning, and software development. These expanded activities are discussed further in the later section on ASA's Expanding Mission.

In 2004 the professional services contract was rebid and awarded to CSC. Almost immediately the Cray SV1 was decommissioned and replaced by a Cray XD1 and a SGI Altix 350 system, both initially funded by NASA educational outreach grants.

The SGI Altix 350 is a cluster of shared memory nodes, initially purchased with 56 CPUs. It has been incrementally expanded annually to include more processors, memory, and disk capacity. The expansions made from 2006 on have been in the form of Altix 450 series nodes. As this account is being written the Altix cluster has 228 CPU cores, 1.5 terabytes of memory, and 10.8 terabytes of disk space.

The Cray XD1 was a distributed memory cluster, initially purchased with 144 processors. It had FPGA co-processors added, and received file system and memory expansions. It eventually contained 6 FPGA chips, 240 gigabytes of memory, and 7 terabytes of disk space. The Cray XD1 was decommissioned in January of 2009, shortly after Cray discontinued support for this model.

In December of 2006 Wayne Whitmore retired from his post as the Chief Operations Officer for the Alabama Supercomputer Authority. Mr. Whitmore had overseen the daily operations at the Alabama Supercomputer Center since its inception.

In anticipation of the Cray XD1 shutdown, work on building up a new cluster started in 2008. This is a locally architected, fat node cluster, called the Dense Memory Cluster or DMC.  At the time of this writing, the DMC was still growing and had 1800 CPU cores, 10.1 terabytes of memory, and 225 terabytes of disk space.  The DMC was further enhanced by the addition of a small test bed of GPU math coprocessor chips.

An SGI Ultraviolet 2000 was received at the end of 2012.  It was purchased as a replacement for the SGI Altix systems, which would be decommissioned a few months later.  The Ultraviolet was purchased with a small login node and a single large compute node consisting of 256 processor cores and 4 TB of memory.

The Alabama Supercomputer Center plays a role as the computing technology leader in Alabama. Most of the high performance computing systems have had high-end features not available in other academic computing facilities in the state, such as vector processors, a hypercube interconnect, shared memory, fibre channel disks, solid state disks, a RapidArray interconnect, FPGAs, and GPUs. The Cray SV1 was the first SV1 system installed in the country. The Alabama Supercomputer Center and Oak Ridge National Laboratory received the first Cray XD1 systems in the country on the same day. Through testing and using these new systems, the staff at the Alabama Supercomputer Center has influenced the way the systems are manufactured and the features present in the operating system.

In recent years a number of changes have been made to the physical facilities at the Alabama Supercomputer Center. Additional power capacity has been added by the replacement and expansion of UPS systems. Longer term disaster mode operations have been assured through the addition of a diesel generator. The initial air conditioning systems have been replaced and expanded in order to accommodate more equipment on the floor. These improvements have made it possible for the Alabama Supercomputer Center to provide more services than ever before.

An organization such as the Alabama Supercomputer Authority is not an island unto itself. ASA has had many partnerships with other organizations over the years. At present (2009) these partnerships include: the State Department of Education, the Department of Postsecondary Education, the Alabama Public Library Service, the Alabama Virtual Library, The University of Alabama, the Gulf Central Gigapop/Internet2/State Regional Optical Network, and NASA.

The Supercomputers

The following is a more technical description of the previous and current computing systems at the Alabama Supercomputer Center. In reading this discussion, please note the units behind the numbers. Over the years, measurements of data capacity have gone from megabytes to gigabytes to terabytes, and measurements of processing ability have shifted from MFLOPs to GFLOPs to TFLOPs.

The first supercomputer at ASC was the Cray X-MP (Figure 10.2). This computer still sits on the floor, purely as a museum piece. It has one plexiglass panel, so that students touring the center can see the mass of hand-wired connections inside.

The Cray X-MP/24 (circa 1987) was a 64 bit computer with two central processing units (CPUs), eight vector processing units, and 32 megabytes of memory. This gave a maximum result rate of 117 MFLOPs (million floating point operations per second). It had a 256 megabyte solid state disk for temporary working files and 15 gigabytes of hard disk space. The operating system on the X-MP was UNICOS, Cray's implementation of the Unix operating system.

Users would access the Cray X-MP using dumb terminals (a keyboard and screen with no memory or computing ability) that were either directly connected to a local server or dialed into one via a phone modem. The local servers on each campus were minicomputers, either VAX 8250 systems or, at some of the smaller sites, IBM 9370 computers. Initially ASA provided four software packages and four libraries of math and graphics functions for people who wrote their own software. Most of the "graphics" consisted of creating files that could be sent to a pen plotter or printer.

In the summer of 1992, ASA added a UniTree Mass Storage Subsystem (MSS). This was an early version of a network file system (NFS). It was connected to both the Cray X-MP and the nCUBE. It consisted of 13.7 gigabytes of disk storage attached to an IBM RS/6000 Model 530H workstation and an autoloader tape robot. The tape robot could hold 54 8mm tapes, each holding 5 gigabytes of data.

The nCUBE 2 Model 10 (circa 1991) had 128 CPUs, each of which was on its own node (a small computer with its own memory and operating system). These nodes were arranged in a hypercube architecture. It had 464 megabytes of memory, which was distributed unevenly: some nodes had 16 megabytes, some had 4 megabytes, and most had 1 megabyte. There were 11 gigabytes of internal disk space. The nCUBE was accessed via a front-end computer, a Sun 4/470 workstation running Unix. In contrast to the X-MP, which had two powerful CPUs, the nCUBE had 128 weak CPUs. In total, the nCUBE was actually about 5% less powerful than the X-MP. When it was decommissioned, the nCUBE was donated to Auburn University.

The Cray C90 was a model C94A/264 system installed in 1994. The Cray C90 had two processors and eight vector units, which gave a maximum result rate of 960 MFLOPs. It had 512 megabytes of memory, and a 256 megabyte solid state disk. The attached disk array had 50 gigabytes of storage. At this time ASA provided 50 software packages, compilers, and math libraries. There were a little over 200 users on the system. The Cray C90 used the UNICOS operating system.

Along with the Cray C90, a StorageTek 4400 tape silo was put in place for data archival. The StorageTek used 1/2-inch 18-track tapes, which were automatically moved to the tape drives by a PowderHorn robot arm. It could store 2.1 terabytes of data. When the StorageTek was decommissioned, a buyer could not be found; its 18-foot octagonal housing is now a lawn mower shed belonging to one of the system administrators.

The Cray SV1 was installed in 1999. This was the first SV1 delivered to a customer. It was initially delivered with J90 CPUs, which were replaced with SV1 CPUs six months later. The Cray SV1 had sixteen CPUs and 32 vector processing units, which gave a maximum result rate of 1.2 GFLOPs (billion floating point operations per second). It also had 16 gigabytes of memory and 480 gigabytes of RAID-3 fibre channel disk storage. It was connected to the network via a fibre distributed data interface (FDDI) ring. At this time, ASC had some smaller servers for visualization work: a Sun Sparcstation 10 and an SGI Indigo 2. The SV1 was the last UNICOS-based system at the Alabama Supercomputer Center. When it was decommissioned in 2004, the Cray SV1 was sold to a private Cray museum.

The SGI Altix system was first installed in 2004. It started out as a system with 56 CPU cores and to date has been expanded to 228 CPU cores and 1.5 terabytes of memory. With 228 cores, it has a maximum result rate of 1263 GFLOPs. The Altix uses Intel Itanium2 CPUs running at 1.4, 1.5, or 1.6 GHz. These processors run at twice the speed suggested by the clock rate because each processor has two floating point math units. The Altix is a cluster of shared memory nodes with 2 to 72 CPU cores and up to 465 gigabytes of memory on any given node. It has a fibre channel disk array using the SGI CXFS file system. The older nodes are Altix 350 series nodes, which support up to 16 CPU cores, and the newer nodes are Altix 450 series nodes, which support up to 72 CPU cores. It uses the SUSE Linux operating system. The large amount of memory per node has made the Altix a valuable resource for users whose jobs require more memory than is available on a single node of any other academic computing system in the state.
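Peak figures like this can be roughly reconstructed from the core count, the clock rate, and the number of floating point results each core can retire per clock cycle. The worked estimate below is only a sketch, under the assumption that each Itanium2 core's two floating point units can each complete a fused multiply-add (two results) per cycle; the vendor-quoted peak depends on the exact mix of clock speeds and counting conventions.

    % Rough estimate of theoretical peak rate (not the vendor's official figure)
    \[
      R_{peak} \approx N_{cores} \times f_{clock} \times (\text{results per clock per core})
    \]
    \[
      \text{e.g. } 228 \times 1.4\ \mathrm{GHz} \times 4 \approx 1277\ \mathrm{GFLOPs}
    \]

which is in the same neighborhood as the quoted 1263 GFLOPs.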

The Cray XD1 was installed in 2004 and decommissioned January 1, 2009.  The XD1 product line came to Cray through the acquisition of a company called OctigaBay.  It used AMD Opteron CPUs, which were connected to a built-in router via the HyperTransport bus.  These routers were interconnected via up to 12 InfiniBand lines per 6-node chassis using a Cray-written protocol to form the RapidArray communication system.  The Cray XD1 included 144 AMD Opteron processors running at 2.2 GHz, 240 gigabytes of memory, and 7 terabytes of shared disk.  Six of the nodes had FPGAs (Field Programmable Gate Arrays) as reconfigurable coprocessors.  It used the SUSE Linux operating system.  The entire system had a maximum result rate of 634 GFLOPs.

The Dense Memory Cluster (DMC) was installed in 2008 and has been expanded multiple times since then.  It is a fat node cluster architected at the Alabama Supercomputer Center.  It was put together from components bought from Microway, Voltaire, Novell, Panasas, Spectrum, Penguin, Cisco, Dell, and other vendors.  It initially retasked the disk trays purchased for the Cray XD1 with the IBM GPFS file system until the Panasas file system server was purchased.  The nodes all contain x86_64 architecture processors, including various generations of AMD Opteron and Intel Xeon chips.  The node configurations range from 8 CPU cores and 24 gigabytes of memory up to 16 cores and 128 gigabytes of memory.  Each node has a local /tmp disk with from 850 GB to 4 TB of temporary working space.  As of October 2013 the DMC had 1800 CPU cores, 10.1 terabytes of memory, and 225 terabytes of disk space.  It has a maximum result rate of 16.5 TFLOPs.  It uses the SUSE Linux operating system.

The DMC was further expanded with GPU (graphic processing unit) math coprocessors.  GPUs are an adaptation of the technology in graphics card chips to act as general purpose mathematics processors.  The first series of GPUs to be installed were eight nVidia Tesla T10 chips, the first GPU to have double precision mathematics capability.  Each T10 chip had a total of 240 cores, arranged into 30 multiprocessors each with single precision cores, double precision cores, and special function units for handling transcendental functions.   The second generation of GPUs installed consisted of eight nVidia Fermi T20 chips, each with 448 cores similarly arranged into multiprocessors. 

An SGI Ultraviolet 2000, named UV, was installed at the end of 2012.  It has a login node consisting of twelve processor cores, and a single large compute node with 256 processor cores and 4 TB of memory.  The processors on the UV are Sandy Bridge series Intel Xeon processors.  The Sandy Bridge chips support 256-bit AVX vector instructions, which can potentially give a 2X performance increase per core over the older Nehalem series Xeon chips in the DMC, which only support 128-bit SSE vector instructions.  The UV is the first system at the center with the capacity to run with small sections of memory taken offline, thus minimizing the need for unscheduled shutdowns to replace failed memory DIMMs.

Since the time of the Cray XD1 and SGI Altix, the supercomputers have been interconnected.  Users can log in on any one of the login nodes and see the same home directory files.  A Torque queue system with a Moab scheduler is used to run calculations on any one of the clusters, regardless of where the job was submitted.  By the end of 2013, ASA was providing over two hundred software applications, development libraries, utilities, and compilers.  There were just over 750 user accounts on the system.  These systems are used every semester for teaching classes at the universities in Alabama, and are constantly in use for graduate thesis work.

Over the years, the Alabama Supercomputer Center has seen a massive growth in computer processing power.  From the Cray X-MP days up to the UV / DMC configuration, the CPU processing power has grown over 180,000-fold.  The memory capacity has grown 447,000-fold, and the disk capacity has grown over 20,000-fold.  This incredible growth in capacity has been mirrored by an incredible growth in demand.  The academic computing community is perpetually in an arms race in which a computer that could do world-class work five years ago isn't capable of doing publishable work today.

 

The Rise of the Internet

The Alabama Supercomputer Authority was established before the World Wide Web with all of its commercial and free resources came into existence. Initially, the Alabama Supercomputer Network was simply a mechanism for researchers at a few select institutions to access the supercomputer. The institutions that could connect to the Alabama Supercomputer Center via this initial network were Auburn University, the Data Systems Management Division of the State of Alabama, the National Fertilizer Development Lab in Muscle Shoals, Troy State University, the University of North Alabama, The University of Alabama, the University of Alabama at Birmingham, the University of Alabama in Huntsville, and the University of South Alabama. By the standards of its day this was considered an "extensive" network.

The early network was a bridged Ethernet network. It consisted primarily of 56 kilobit per second circuits. The first Internet connection was a T1 connection from Birmingham to Atlanta, which was shared with UAB. This was the first state-wide IP network in Alabama.

The network has seen a steady expansion, both in the number of organizations connected to the network and in the bandwidth of the connections. The backbone has been expanded from T1 links to DS3 connections, to OC3 connections, to 10 gigabit fibre optic connections. The network backbone has evolved from a hub and spoke topology to having multiple rings. Atlanta and Dallas are now the Internet egress points for both commodity Internet and research and education specific Internet (I2, National Lambda Rail, etc.).  The funding for network upgrades comes from several different sources. The state government has given ASA a mandate to use a percentage of its state funding to provide network connections to schools. There is also a federally funded program called E-rate that provides grants for school network connections.

 

As this account was being written (2013) the bandwidth to the Internet was 24 gigabits per second and the bandwidth to Internet2 was 10 gigabits per second. There are now over 1000 pieces of network equipment in this network, which include routers, firewalls, switches, and servers for content filtering and spam filtering. The network, now called AREN (Alabama Research and Education Network), extends into every county in Alabama.

In January 2006 Governor Bob Riley established the ACCESS program (Alabama Connecting Classrooms, Educators, and Students Statewide). This is a distance learning program that allows high school students to attend classes not taught at their school via video conference connections. The Alabama Research and Education Network (AREN) provides those network connections, including end-to-end quality of service and Multipoint Control Unit (MCU) services. The ACCESS program has won national recognition for its effective use of technology to enhance education.

Technology Development at ASC

The initial intent of the supercomputer authority was to provide high performance systems so that the presence of an educated work force would attract technology oriented businesses to Alabama. It was not the original intent to invent any new technology. However, when working with cutting edge technology, not everything works as expected, and not all of the necessary pieces are always available. Thus it was a natural consequence that some technology would be developed in order to make effective use of the supercomputer systems and run the state-wide network. The following are some of the technological developments created at the Alabama Supercomputer Center.

The software for supercomputers is unlike PC software in that it must be recompiled to work on each architecture of supercomputer. ASC has several times had the first computer of a new model. As a result, the staff at the Alabama Supercomputer Center have often been the first in the world to compile a given software program on a new architecture of supercomputer. In the course of doing this, the staff works with the original developers, giving them back the instructions for compiling on the new computer system.

The management of jobs on the supercomputers is done through a queue system. This queue system software has a rich set of features, which can be confusing to new users. In order to allow the supercomputer users to focus on their own work, the details of how to run various programs through the queue are coded into queue scripts, which act as a front end to the queue system. The user calls the queue script, which simply asks how much memory, how many CPUs, and so on, and the software runs without forcing the user to master a body of knowledge outside their field. The queue scripts developed at the Alabama Supercomputer Center have been extremely popular with the users of these systems.
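The actual ASC queue scripts are not reproduced here, but the sketch below illustrates the general idea in Python: ask the user a few plain questions, generate a Torque submission script with standard #PBS directives, and hand it to qsub. The file names, prompts, and defaults are hypothetical, not ASC's real implementation.

    #!/usr/bin/env python
    """Minimal sketch of a queue-script front end (illustrative, not ASC's code).

    Prompts the user for a few resource values, writes a Torque submission
    script, and hands it to qsub so the user never edits #PBS directives.
    """
    import subprocess

    def submit(command, job_name="myjob"):
        # Ask only the questions a user can reasonably answer.
        cpus = int(input("How many CPU cores? [1] ") or 1)
        mem_gb = int(input("How much memory in GB? [4] ") or 4)
        hours = int(input("Wall clock limit in hours? [24] ") or 24)

        # Standard Torque/PBS directives generated on the user's behalf.
        script = f"""#!/bin/bash
    #PBS -N {job_name}
    #PBS -l nodes=1:ppn={cpus}
    #PBS -l mem={mem_gb}gb
    #PBS -l walltime={hours}:00:00
    #PBS -j oe
    cd $PBS_O_WORKDIR
    {command}
    """
        with open(f"{job_name}.pbs", "w") as f:
            f.write(script)

        # qsub prints the job identifier assigned by the Torque server.
        subprocess.run(["qsub", f"{job_name}.pbs"], check=True)

    if __name__ == "__main__":
        submit("./my_program < input.txt > output.txt", job_name="example_run")

In the real wrappers the per-application details (scratch space, environment setup, and so on) are also handled on the user's behalf, which is what makes them so popular.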

The ASC staff has written programs to constantly monitor the supercomputers. This software immediately informs the system administrators when something goes wrong with the systems. It also watches indicators that problems will occur soon. For example, memory parity and hard drive CRC errors don't prevent the computer from functioning correctly, but they indicate that physical components are likely to fail soon. These are monitored so that the problems can be fixed before they cause a system outage.
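The monitoring software itself is not shown here; the following toy sketch, with assumed log paths and message patterns, illustrates the underlying idea of scanning for correctable errors that often precede a hardware failure.

    #!/usr/bin/env python
    """Toy sketch of early-warning hardware monitoring (illustrative only)."""
    import re
    from collections import Counter

    # The log path and patterns are assumptions for this sketch; a production
    # monitor would also watch vendor-specific logs and hardware sensors.
    LOG_FILE = "/var/log/messages"
    WARNING_PATTERNS = {
        "memory_ecc": re.compile(r"EDAC.*(CE|corrected) error", re.IGNORECASE),
        "disk_crc":   re.compile(r"(ata|sd[a-z]).*(CRC|ICRC) error", re.IGNORECASE),
    }
    THRESHOLD = 10   # alert once a component logs this many correctable errors

    def scan(log_file=LOG_FILE):
        counts = Counter()
        with open(log_file, errors="replace") as log:
            for line in log:
                for name, pattern in WARNING_PATTERNS.items():
                    if pattern.search(line):
                        counts[name] += 1
        return counts

    if __name__ == "__main__":
        for name, count in scan().items():
            if count >= THRESHOLD:
                print(f"WARNING: {count} {name} events -- schedule hardware replacement")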

In order to run jobs on the supercomputers, the user must specify the amount of memory a job needs and how long it will take to run. This is information that many users don't know in advance. The ASC staff wrote a program to predict these resource needs. The program is named Swami (a reference to an old Johnny Carson skit in which he plays a mystical swami who predicts the answers to questions before reading them). Swami predicts the resource needs for some of the most heavily used quantum chemistry software packages. If it predicts poorly, the user can later upload the correct results to the program. It has an artificial intelligence algorithm that allows it to learn to make more accurate predictions as more data is fed in.
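Swami's actual prediction algorithm is not described here. The sketch below is a deliberately simple stand-in showing the general pattern: predict memory and run time from features of a job, and improve as completed-job results are fed back in. The file name, job features, and nearest-neighbor rule are illustrative assumptions.

    #!/usr/bin/env python
    """Illustrative sketch of a learning resource predictor (not Swami itself)."""
    import json, math, os

    HISTORY_FILE = "job_history.json"   # hypothetical on-disk training data

    def load_history():
        if os.path.exists(HISTORY_FILE):
            with open(HISTORY_FILE) as f:
                return json.load(f)
        return []

    def record_result(features, mem_gb, hours):
        """Feed back the true resource usage of a completed job."""
        history = load_history()
        history.append({"features": features, "mem_gb": mem_gb, "hours": hours})
        with open(HISTORY_FILE, "w") as f:
            json.dump(history, f)

    def predict(features):
        """Nearest-neighbor estimate over past jobs with similar features."""
        history = load_history()
        if not history:
            return {"mem_gb": 4, "hours": 24}     # conservative defaults
        nearest = min(history, key=lambda job: math.dist(job["features"], features))
        # Pad the estimate so a slightly larger job is not killed by the queue.
        return {"mem_gb": round(nearest["mem_gb"] * 1.2, 1),
                "hours": round(nearest["hours"] * 1.2, 1)}

    if __name__ == "__main__":
        # Features might be, e.g., (number of atoms, number of basis functions).
        record_result([20, 250], mem_gb=3.5, hours=6)
        print(predict([22, 270]))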

The Alabama Learning Exchange (ALEX) is a web based resource that allows teachers to exchange lesson plans. ALEX was developed by the ASC staff and is hosted at the Alabama Supercomputer Center. It has received national recognition in the education field.

The Alabama Virtual Library (AVL) is an online resource for accessing information in encyclopedias, books and journals. The AVL authentication server is hosted at the Alabama Supercomputer Center and the ASC staff provides all of the technical support for AVL.

Since the school systems in Alabama look to the Alabama Supercomputer Authority for their technology needs, it is perhaps only natural that ASA would be asked to develop new technical solutions for the schools. One such project is the development of the DAX (Data Access and Exchange) software for the Alabama Department of Postsecondary Education. This is a system that tracks enrollment in the two year colleges in Alabama. This data is used both for internal planning and to meet federal reporting requirements.

The Alabama Department of Postsecondary Education asked ASA to develop their AAESAP (Alabama Adult Education System Accountability and Performance) software. This is a state-wide database and web front end which tracks enrollment in adult education classes, from learning to read up to GED certification. There is an annual employment survey to track how these educational activities have helped Alabama citizens move into better jobs.

Another tool architected at the Alabama Supercomputer Center is an anti-spam system, which filters out junk email traffic. Each school district can subscribe to this service. It utilizes multiple spam detection mechanisms to rate email messages with a spam score, which is used to filter or quarantine the spam email messages. This takes a significant load off the network, as well as disposing of the vast majority of the advertising and fraudulent email messages that users find so annoying.
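The sketch below illustrates the general score-and-threshold approach described above; the test names, weights, and thresholds are made up for illustration and are not the actual configuration of the ASC anti-spam system.

    #!/usr/bin/env python
    """Sketch of score-based spam filtering (illustrative, not the ASC system)."""

    # Hypothetical per-test weights; a real filter runs hundreds of tests.
    TESTS = {
        "listed_on_dns_blocklist": 3.0,
        "forged_sender_domain":    2.5,
        "suspicious_url":          1.5,
        "bayesian_spam_phrases":   2.0,
    }

    QUARANTINE_THRESHOLD = 5.0
    REJECT_THRESHOLD = 8.0

    def classify(triggered_tests):
        """Sum the weights of triggered tests and map the score to an action."""
        score = sum(TESTS.get(name, 0.0) for name in triggered_tests)
        if score >= REJECT_THRESHOLD:
            return score, "reject"       # never reaches the recipient
        if score >= QUARANTINE_THRESHOLD:
            return score, "quarantine"   # held for the recipient to review
        return score, "deliver"

    if __name__ == "__main__":
        print(classify(["listed_on_dns_blocklist", "suspicious_url",
                        "bayesian_spam_phrases"]))   # -> (6.5, 'quarantine')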

The IP Address Tool is another piece of software developed at ASC. It is used to track how Internet IP addresses are assigned to all of the clients on the AREN network.

The Alabama Supercomputer Center has an employee on site 24x7 every day of the year. This person must identify when a piece of network equipment or supercomputer node goes down and immediately start taking actions to correct the problem. No existing software could monitor this type of complex information technology environment, so one was developed at the Alabama Supercomputer Center. The Webnet tool developed at ASC constantly monitors all of the equipment under ASA's control. It immediately notifies the operator when something goes wrong, gives information about the equipment, and even lists a history of past problems with that particular piece of equipment.
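The Webnet tool itself is not reproduced here; the following minimal sketch, with a made-up device inventory, shows the basic pattern of polling each device, alerting the operator when one stops responding, and keeping a per-device history of past problems.

    #!/usr/bin/env python
    """Toy sketch of up/down polling with incident history (not the Webnet tool)."""
    import socket, time
    from datetime import datetime

    # Hypothetical inventory: device name -> (host, TCP port to probe).
    DEVICES = {
        "core-router-1": ("192.0.2.1", 22),
        "dmc-login":     ("192.0.2.10", 22),
    }

    history = {name: [] for name in DEVICES}   # past outages per device

    def is_up(host, port, timeout=3):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def poll_once():
        for name, (host, port) in DEVICES.items():
            if not is_up(host, port):
                event = f"{datetime.now():%Y-%m-%d %H:%M} {name} not responding"
                history[name].append(event)
                print("ALERT:", event,
                      f"(previous incidents: {len(history[name]) - 1})")

    if __name__ == "__main__":
        while True:
            poll_once()
            time.sleep(60)    # poll every minute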

ASA's Expanding Mission

When ASA was initially established, it was hoped that the existence of this type of facility and the trained work force that it provides would attract technology oriented businesses to Alabama. A survey conducted in 2005 indicated that there are 6080 individuals in Alabama whose primary job focus is modeling and simulation. A financial analysis based on these survey results indicated that high performance computing has brought $4.28 in federal academic funding into Alabama for every state dollar spent on ASA. When the benefit to the Alabama job market is taken into account, state dollars spent on high performance computing are leveraged 2000 to 1 in bolstering the economy of Alabama.

The role that the Alabama Supercomputer Authority plays in the state has increased dramatically. Today, all of the following are part of ASA's contribution to the state.

High Performance Computing - The currently trendy name for supercomputers. ASA's role as a computing technology leader benefits both the academic and commercial communities in Alabama.

Network Services - Connecting K-12 schools, colleges, universities, libraries and state agencies to the Internet and Internet2.

Disaster Recovery - The Alabama Supercomputer Center acts as a data backup location for a number of state agencies.

Hosting Services - Many of the smaller schools get their email and web hosting services from ASA. There are also larger hosting projects such as ALEX and AVL.

Software Development - ASA's technical expertise has been tapped to develop customized software solutions for other Alabama organizations.

Distance Learning - The highly successful ACCESS program is improving the quality of education in high schools across Alabama.

Economic Development - ASA continues to act as a technical resource and public relations vehicle for attracting technology oriented businesses to Alabama.

The Alabama Supercomputer Authority has a long history of technical leadership and service to the state of Alabama. This role has expanded from a supercomputer and a few network connections to multiple computing clusters, a massive network, and a whole host of information technology services. The individuals or organizations using these services are not forced to use ASA as their provider. People look to ASA for technical services because of the quality technical solutions, excellent support personnel, and advantageous cost structure. These trends are expected to continue into the future.