FAST

The Facility for Advanced Scalable computing Technology

Objective

The Facility for Advanced Scalable computing Technology (FAST) will open up new, unclassified opportunities for research in fields such as energy, ecology, and the biosciences at Lawrence Livermore National Laboratory (LLNL). FAST's massively parallel processor, archival storage system, and high-speed networking capabilities will provide the computing resources this research requires.

Impact

Changes in the global political climate, as well as changes in the United States economic climate, are giving LLNL opportunities to "reinvent" itself. High-performance computing has always been a priority at LLNL, but most computing resources have been devoted to supporting national defense. As the Laboratory pursues new areas of research and economic competitiveness through partnerships with industry and other DOE laboratories, the need for computing resources in an unclassified environment is critical. This shift in focus toward collaboration makes a facility such as FAST essential, and FAST will provide LLNL with exactly that environment.

FAST scales to the needs of a diverse application base via a massively parallel processor (MPP), an archival storage system, and high-speed networking for connectivity. To leverage past experience and to make support more manageable, the FAST configuration will be similar to that of the LLNL classified computing environment:

MPP

Because today's microprocessors perform as well as or better than the supercomputers of a decade ago, a new microprocessor-based computer technology, the MPP, has emerged. This type of computer provides scalability by combining large numbers of microprocessors into a single high-performance computer that can process large numbers of serial jobs as well as one or several highly parallel ones. LLNL has developed a model that allows MPPs to support our unique production workload. FAST's MPP will be a Meiko CS-2, configured initially with 48 computational nodes and a planned expansion to as many as 64 nodes.
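The distinction between throughput work (many independent serial jobs) and capability work (a single calculation spread across many nodes) can be sketched with an ordinary process pool. The following sketch uses Python's standard multiprocessing module purely as an illustration; it is not the Meiko CS-2's programming interface, and the node and job counts are invented.

    # Illustrative sketch only: a pool of worker processes standing in
    # for MPP nodes. Not the Meiko CS-2 programming interface.
    from multiprocessing import Pool

    def serial_job(job_id):
        # Stand-in for one independent production job.
        return job_id, sum(i * i for i in range(10_000))

    def parallel_chunk(bounds):
        # Stand-in for one node's share of a single large calculation.
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        with Pool(processes=8) as nodes:  # pretend 8 of the 48 nodes
            # Throughput mode: many serial jobs run side by side.
            jobs = nodes.map(serial_job, range(32))
            # Capability mode: one job split into per-node chunks.
            chunks = [(i * 100_000, (i + 1) * 100_000) for i in range(8)]
            total = sum(nodes.map(parallel_chunk, chunks))
        print(len(jobs), "serial jobs done; parallel sum =", total)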

Archival Storage

Increases in computing power and memory size make it possible to solve more complex problems with potentially very large datasets. While the size of the online disk cache for today's computers has also increased, there is still never enough disk space. Another problem is how to handle file backups. On a workstation this is not an issue, because the amount of data to be backed up is relatively small. For supercomputers and MPPs, however, the size of the online disk cache makes conventional backups impractical. A solution is an archival storage system. Using file transfer protocols, files can be moved from the online disks to the storage system. Not only does this free up the online disk cache (a scarce resource), it provides a much-needed backup capability.
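As a concrete illustration of that archiving step, the sketch below pushes a file to a storage host over standard FTP and then removes the online copy to reclaim disk cache. The host name and file name are hypothetical; nothing here is specific to any particular storage product.

    # Sketch: move a file from online disk to an archive over FTP,
    # then free the online copy. Host and file names are hypothetical.
    import os
    from ftplib import FTP

    LOCAL_FILE = "results.dat"

    ftp = FTP("archive.example.gov")   # hypothetical storage host
    ftp.login()                        # anonymous login, for the sketch
    with open(LOCAL_FILE, "rb") as f:
        ftp.storbinary(f"STOR {LOCAL_FILE}", f)
    ftp.quit()

    os.remove(LOCAL_FILE)              # reclaim scarce online disk cache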

The FAST will use an NSL-UniTree system to satisfy current storage requirements. This is a commercial product based on work done at the National Storage Laboratory (NSL), a Cooperative Research and Development Agreement of which LLNL is a member. It provides support for scalability and network-attached peripherals; it uses multiple hierarchies of storage devices to take advantage of archival devices with a variety of performance/cost characteristics; and it supports the industry-standard file transfer protocols FTP and NFS. The FAST will take advantage of NSL-UniTree's scalability by implementing the storage system in two phases. Phase I will provide 50 gigabytes (GB) of disk cache and 1.3 terabytes (TB) of tape archive. Phase II will expand the disk cache to 200 GB, expand the tape archive to 20 TB, and add 150 GB of HIPPI-attached Redundant Array of Inexpensive Disks (RAID).
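The idea of multiple storage hierarchies can be pictured as a placement policy that matches files to tiers by cost and performance. The toy policy below is invented for illustration; the tier names echo the phases described above, but the thresholds are not NSL-UniTree's actual migration rules.

    # Toy storage-hierarchy placement policy. Tiers echo the text
    # (disk cache, RAID, tape archive); thresholds are invented and
    # are not NSL-UniTree's actual migration rules.
    from dataclasses import dataclass

    @dataclass
    class FileInfo:
        name: str
        size_mb: float
        days_since_access: int

    def place(f: FileInfo) -> str:
        if f.days_since_access <= 7 and f.size_mb <= 100:
            return "disk cache"    # fastest, scarcest tier
        if f.days_since_access <= 30:
            return "RAID"          # intermediate performance/cost
        return "tape archive"      # cheapest, highest capacity

    for f in [FileInfo("mesh.dat", 40, 2),
              FileInfo("run17.out", 900, 12),
              FileInfo("old_survey.dat", 5000, 180)]:
        print(f.name, "->", place(f))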

The next-generation storage system is the Scalable I/O Facility (SIOF), which will provide fast, scalable, parallel I/O for MPPs using the High-Performance Storage System (HPSS) being developed at the NSL. HPSS is the follow-on to NSL-UniTree and will add a much-needed parallel capability to increase performance.
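That parallel capability rests on striping: a large file is split into stripes moved concurrently over several paths, so aggregate bandwidth scales with the number of devices. The sketch below illustrates round-robin striping with local files and threads; it is an illustration of the concept, not the HPSS interface.

    # Sketch of striped (parallel) I/O: write one dataset as several
    # stripes concurrently. Concept illustration, not the HPSS API.
    from concurrent.futures import ThreadPoolExecutor

    STRIPE_COUNT = 4
    data = bytes(range(256)) * 4096      # stand-in for a large dataset

    def write_stripe(index: int) -> int:
        stripe = data[index::STRIPE_COUNT]   # round-robin striping
        with open(f"stripe_{index}.dat", "wb") as f:
            return f.write(stripe)

    with ThreadPoolExecutor(max_workers=STRIPE_COUNT) as pool:
        written = list(pool.map(write_stripe, range(STRIPE_COUNT)))
    print("bytes per stripe:", written)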

High-Speed Networking

Connectivity between the components of the FAST will be provided by state-of-the-art, standards-based network technology. The primary backbone will be an FDDI ring that will route network traffic to LLNL Open Labnet, through which it is possible to reach ESnet, the Energy Sciences Network. ESnet is a wide-area network (WAN) that serves the National Energy Research Supercomputer Center as well as other government research institutions. Traveling the information superhighway to reach FAST will be important to the success of collaborations with others.

In addition to FDDI technology, the FAST will have a HIPPI switch with both the Meiko CS-2 and the NSL-UniTree storage system connected to it. With HIPPI connectivity it will be possible to exploit the third-party data transfers available with NSL-UniTree, in which data moves directly between HIPPI-attached devices and the client processor's memory rather than being staged through the storage server.
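The payoff of a third-party transfer is the elimination of the store-and-forward hop through the server. The toy comparison below simply counts the bytes that cross the network under each scheme; it is purely illustrative and models no real protocol.

    # Toy comparison: bytes crossing the network with and without a
    # third-party transfer. Purely illustrative; models no real protocol.

    def conventional_transfer(data, server_buf, client_buf):
        server_buf.extend(data)        # hop 1: device -> server memory
        client_buf.extend(server_buf)  # hop 2: server -> client memory
        return 2 * len(data)

    def third_party_transfer(data, client_buf):
        # The server exchanges only control messages (not modeled);
        # the data itself takes one direct HIPPI hop.
        client_buf.extend(data)        # device -> client memory
        return len(data)

    data = [0] * 1_000_000
    print("conventional:", conventional_transfer(data, [], []), "bytes moved")
    print("third-party: ", third_party_transfer(data, []), "bytes moved")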

Collaborations

Initially, the FAST will support two collaborative development projects between DOE laboratories and industry. The first is the Gas and Oil National Infrastructure Initiative, a collaboration between Lawrence Livermore, Sandia, Los Alamos, and Oak Ridge National Laboratories. This project involves generating synthetic seismic datasets that the oil and gas industry will use to determine where oil deposits might be located. The calculations for generating the datasets will be divided among and executed at all four laboratories; FAST will store the resulting datasets, approximately 2 TB of data. The LLNL Intelligent Archive project is developing tools to help the oil and gas industry search, browse, and access such large datasets.
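One way to picture dividing the calculation among the four laboratories is a simple block decomposition of the survey's shot points, with each site computing its assigned block and shipping the partial dataset to FAST. The site names follow the text; the grid size and assignment scheme below are invented for illustration.

    # Sketch: block decomposition of a synthetic-seismic calculation
    # across four sites. Grid size and scheme invented for illustration.
    LABS = ["LLNL", "Sandia", "Los Alamos", "Oak Ridge"]
    SHOT_POINTS = list(range(1000))    # stand-in for a survey line

    def assign(points, sites):
        # Contiguous blocks, one per site (ceiling division for size).
        size = -(-len(points) // len(sites))
        return {site: points[i * size:(i + 1) * size]
                for i, site in enumerate(sites)}

    for lab, block in assign(SHOT_POINTS, LABS).items():
        print(f"{lab}: shots {block[0]}..{block[-1]} ({len(block)} points)")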

A second project is the DOE Digital superLab, under the auspices of the Defense Information Infrastructure Initiative. This is a collaboration between three DOE laboratories: Lawrence Livermore, Sandia, and Los Alamos. Together they will develop the Digital superLab, which will appear to scientists at the three laboratories as a single, transparent distributed computing environment connected by a wide-area network. FAST's Meiko CS-2 and the SIOF will both be integrated into the superLab.


Last Updated: October 25, 1994

If you have questions about this page, contact:

pgh@llnl.gov -- Pam Hamilton

UCRL-MI 118937