http://www.gmd.de/SCAI/ParComp.html (Einblicke ins Internet, 10/1995)

Parallel Computing

Research Area "Parallel Computing"

Introduction

Over the last ten years, the use of high-performance computers for the solution of complex mathematical, scientific, and technical problems has developed into a key technology. However, despite the enormous advances in computer performance witnessed in recent years, today's machines and methods still fall far short of providing a computational solution to many fundamental applications, the so-called Grand Challenges.

The Grand Challenges require an increase in computational power of several orders of magnitude. Since the speed of the fastest processors already approaches the limits set by the laws of physics, such an increase will only be feasible through the integration of hundreds or thousands of powerful processors into a highly or massively parallel computer. In principle there is no limit to the aggregate speed of parallel computers, although growing communication requirements limit the size of practical systems. Parallel computers are also superior to conventional systems in terms of cost-effectiveness: for high-performance single processors, price grows superlinearly with speed. If, for example, price grew quadratically with speed, ten processors of one-tenth the speed would cost only a tenth as much as a single processor of the full speed. Thus, a machine employing off-the-shelf processors is usually much less expensive than a sophisticated single processor of the same speed.

Parallel computers exhibit a broad spectrum of architectures. On the coarsest level they can be categorized as shared-memory or distributed-memory systems.

To achieve a given overall performance, a system may use many standard processors or a few high-speed ones. This choice leads to a broad diversity in granularity and hardware topology. Indeed, for suitable applications, interconnected workstations (multi-workstation networks) can also be regarded as parallel computers.

The biggest, and still essentially unsolved, obstacle to the employment of such systems is the development of programming models, algorithms, and software. Although a variety of competing models have been developed, including message passing, data-parallel programming, and the virtual shared memory concept, no standard programming model that satisfies the needs of all applications has yet been found.

Another important aspect of the software problem is the parallelization of existing sequential applications. In a commercial environment, where substantial resources have been invested in software, it is generally not acceptable to rewrite large application programs. Although some experimental automatic parallelization tools have been developed, they are not applicable in the general case, and the resulting code tends to be inefficient.

Numerical applications typically possess a certain inherent amount of parallelism. However, this parallelism must be expressed in an algorithmic formulation and translated into software. The sequential ordering of events found in conventional algorithms matches the single-processor computer. Moreover, the efficient use of parallel computers with distributed memory requires the exploitation of the data locality found in most important applications; the sketch below illustrates the idea. Parallel processing therefore requires a complete rethinking of basic algorithms with regard to their potential parallelism and data locality. In general, this approach leads to more satisfactory results than a formal parallelization of existing sequential algorithms.
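As an illustration (our own sketch, not a GMD code), the following C program shows how data locality is typically exploited on a distributed-memory machine: a one-dimensional domain decomposition for a Jacobi-type sweep, written against the MPI message-passing interface discussed later in this article. Each process updates only its own block of the grid and exchanges just two halo values per iteration with its neighbours, so the communication volume is independent of the local problem size. All sizes are arbitrary assumptions.

    /* Sketch: 1-D domain decomposition with halo exchange (MPI, C). */
    #include <mpi.h>
    #include <stdio.h>

    #define NLOC 1000   /* local grid points per process (assumed) */

    int main(int argc, char **argv)
    {
        int rank, size;
        double u[NLOC + 2], unew[NLOC + 2];   /* +2 halo cells */
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
        int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

        /* Arbitrary initial data; the outermost cells on the first and
           last process are never overwritten and act as fixed boundary
           values. */
        for (int i = 0; i < NLOC + 2; i++)
            u[i] = (double)(rank * NLOC + i);

        for (int iter = 0; iter < 100; iter++) {
            /* Halo exchange with the two neighbours only. */
            MPI_Sendrecv(&u[NLOC], 1, MPI_DOUBLE, right, 0,
                         &u[0],    1, MPI_DOUBLE, left,  0,
                         MPI_COMM_WORLD, &status);
            MPI_Sendrecv(&u[1],        1, MPI_DOUBLE, left,  1,
                         &u[NLOC + 1], 1, MPI_DOUBLE, right, 1,
                         MPI_COMM_WORLD, &status);

            /* Local Jacobi update: touches only locally stored data. */
            for (int i = 1; i <= NLOC; i++)
                unew[i] = 0.5 * (u[i - 1] + u[i + 1]);
            for (int i = 1; i <= NLOC; i++)
                u[i] = unew[i];
        }

        if (rank == 0) printf("done: u[1] = %f\n", u[1]);
        MPI_Finalize();
        return 0;
    }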

The situation is somewhat different for combinatorial problems and symbolic applications. Here, moving onto a parallel machine can be even harder, since quite a number of combinatorial optimization problems can be shown formally to resist parallelization, for instance via P-completeness results. There are methods for solving combinatorial problems on parallel machines, at least heuristically, but bringing combinatorial optimization onto parallel machines still requires much basic research.

Today, bringing applications onto parallel machines is still much more cumbersome than implementing them on established architectures. Parallel implementation is therefore feasible only for applications whose models are well understood and whose bottlenecks have been identified. For other applications, precursory research must be done in order to develop robust models and identify bottlenecks that are suitable for parallelization. In summary, the aim of GMD's research area Parallel Computing is to provide a breakthrough in parallel methodology and to help generate wide acceptance of parallel computers in applications.

On the methodological side, the research activities have concentrated on the development of dynamic algorithms, portable programming environments, parallel system software, and parallel hardware. On the application side, we have implemented numerical codes for weather forecasting and computational chemistry on a host of parallel machines. In precursory research, we are investigating Grand Challenge applications with regard to their need and potential for parallelization.

Prof. Dr. Trottenberg has been appointed the coordinator of GMD's research area Parallel Computing.

Parallel Computing Research: From Applications to Hardware

The paradigm shift towards parallelism leads to changes on all levels, from the machine hardware to the application programs. Close collaboration across these levels is an essential prerequisite. For example, parallel programming tools can only be useful if they are designed for the specific needs of application programmers, and a parallel system runs much more efficiently if hardware and operating system are matched to each other. With projects ranging from hardware design to application programming, the GMD research area Parallel Computing provides an ideal environment for this kind of cooperation, which has proved very successful in many projects.

Diagram of the activities in the research area "Parallel Computing"

The figure gives an overview of the subjects covered by GMD projects in the research area Parallel Computing. The area comprises the work of the two GMD institutes SCAI (Sankt Augustin) and FIRST (Berlin). To a first approximation, the projects of SCAI and FIRST are complementary. The dashed line, somewhat arbitrarily, separates the main work areas, with SCAI concentrating on applications and algorithms, and FIRST on hardware and system software. In reality, however, there is no clear-cut separation: both institutes have projects on both sides of the line, which reflects the interdependence and cooperation between SCAI and FIRST.

The main task of SCAI within the research area Parallel Computing is the development of parallel methods and algorithms for applications. This includes precursory research on established sequential architectures, especially in the area of molecular bioinformatics. The parallelization of large production codes, such as the IFS weather forecast program or applications from the chemical industry, is a central activity in this area. Furthermore, new efficient algorithms are developed, in particular multigrid solvers for PDE applications (see the sketch below). The goal of the DYMOS project at GMD FIRST is the simulation of smog situations in the Berlin area, and several other projects at SCAI and FIRST deal with the simulation of physical processes on parallel computers.
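To give a flavour of the multigrid principle, here is a minimal sketch (our own illustration, not one of SCAI's production solvers): a recursive V-cycle for the 1-D Poisson equation -u'' = f with homogeneous Dirichlet boundaries, using weighted-Jacobi smoothing, full-weighting restriction, and linear interpolation. Grid sizes and the number of cycles are arbitrary assumptions.

    /* Sketch: recursive multigrid V-cycle for 1-D Poisson (C). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* One weighted-Jacobi sweep on A u = f, A = tridiag(-1, 2, -1) / h^2. */
    static void smooth(double *u, const double *f, int n, double h)
    {
        double *tmp = malloc((n + 2) * sizeof *tmp);
        for (int i = 1; i <= n; i++)
            tmp[i] = u[i] + (2.0 / 3.0) * 0.5 *
                     (h * h * f[i] + u[i - 1] + u[i + 1] - 2.0 * u[i]);
        for (int i = 1; i <= n; i++)
            u[i] = tmp[i];
        free(tmp);
    }

    static void vcycle(double *u, const double *f, int n, double h)
    {
        if (n == 1) {                    /* coarsest grid: solve directly */
            u[1] = 0.5 * h * h * f[1];
            return;
        }
        smooth(u, f, n, h);              /* pre-smoothing */

        int nc = (n - 1) / 2;            /* coarse grid, n = 2 * nc + 1 */
        double *r  = calloc(n + 2,  sizeof *r);
        double *fc = calloc(nc + 2, sizeof *fc);
        double *uc = calloc(nc + 2, sizeof *uc);

        for (int i = 1; i <= n; i++)     /* residual r = f - A u */
            r[i] = f[i] - (2.0 * u[i] - u[i - 1] - u[i + 1]) / (h * h);
        for (int i = 1; i <= nc; i++)    /* full-weighting restriction */
            fc[i] = 0.25 * (r[2 * i - 1] + 2.0 * r[2 * i] + r[2 * i + 1]);

        vcycle(uc, fc, nc, 2.0 * h);     /* coarse-grid correction */

        for (int i = 1; i <= nc; i++) {  /* linear interpolation back */
            u[2 * i]     += uc[i];
            u[2 * i - 1] += 0.5 * (uc[i - 1] + uc[i]);
        }
        u[n] += 0.5 * uc[nc];            /* last odd point (boundary is 0) */

        smooth(u, f, n, h);              /* post-smoothing */
        free(r); free(fc); free(uc);
    }

    int main(void)
    {
        int n = 127;                     /* interior points, n = 2^k - 1 */
        double h = 1.0 / (n + 1), pi = acos(-1.0);
        double *u = calloc(n + 2, sizeof *u);
        double *f = calloc(n + 2, sizeof *f);

        for (int i = 1; i <= n; i++)     /* f chosen so that u(x) = sin(pi x) */
            f[i] = pi * pi * sin(pi * i * h);

        for (int cycle = 0; cycle < 10; cycle++)
            vcycle(u, f, n, h);

        printf("u(1/2) = %f (exact value 1.0)\n", u[(n + 1) / 2]);
        free(u); free(f);
        return 0;
    }

Each V-cycle reduces the error by a factor that is essentially independent of the grid size, which is what makes multigrid methods so attractive for large PDE problems.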

With the growing complexity of technical simulations, the visualization of processes and results becomes the key to extracting useful information. In several application projects the SCAI visualization lab provides the necessary data processing and presentation.

FIRST participates in Japan's Real World Computing (RWC) program with the PROMOTER project. The aim of this project is to develop a new programming model for massively parallel applications and to implement it on the RWC program's 1000-processor system.

The development of parallel software is greatly facilitated by graphical programming tools, such as those produced by the GRACIA project at SCAI. The main goal of this cooperation with industry is the development of the TRAPPER toolbox, which is used in parallel real-time applications in vehicle research.

Machine-independence of application software is an important goal in parallel computing. This portability can be achieved on the basis of different programming models.

Several projects at GMD deal with the definition and implementation of portability platforms based on message passing, data parallelism, or virtual shared memory. Important contributions of SCAI include the creation of the PARMACS interface and co-authorship of the international MPI message-passing standard; a minimal example of the message-passing style follows below.
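The following minimal C sketch (our illustration; the values and message tags are arbitrary) shows the message-passing style using the MPI standard mentioned above: process 0 sends a number to process 1, which doubles it and sends it back. The program must be started with at least two processes, e.g. via mpirun.

    /* Sketch: point-to-point message passing with MPI (C). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        double x = 0.0;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {                  /* needs at least two processes */
            if (rank == 0) printf("run with at least 2 processes\n");
            MPI_Finalize();
            return 1;
        }

        if (rank == 0) {
            x = 3.14;
            MPI_Send(&x, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&x, 1, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD, &status);
            printf("rank 0 received %f back\n", x);
        } else if (rank == 1) {
            MPI_Recv(&x, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
            x *= 2.0;
            MPI_Send(&x, 1, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }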

The automatic generation of message-passing programs from high-level languages is a research topic of both institutes. In the SNAP project at FIRST, an automatic parallelizing compiler for FORTRAN 77 and Fortran 90 was developed. At SCAI, the ADAPTOR tool translates data-parallel Fortran, namely HPF and the Fortran dialect of the TMC Connection Machine, into message-passing code; the sketch below indicates what such a translation amounts to. The SNAP approach is more ambitious because it starts from purely sequential source code; on the other hand, it is more research-oriented, and the resulting code is usually not as efficient as that produced by tools like ADAPTOR.
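The following is a hypothetical sketch (not actual ADAPTOR output) of the SPMD message-passing code such a translator might generate for an HPF-style statement A(1:N) = B(1:N) + C(1:N) with all three arrays BLOCK-distributed. The array size and the owner-computes bookkeeping are illustrative assumptions; no communication is needed here because the arrays are aligned.

    /* Sketch: SPMD code for a BLOCK-distributed data-parallel statement. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000                    /* global array size (assumed) */

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Owner-computes rule: determine this process's block of 1..N. */
        int chunk = (N + size - 1) / size;            /* ceiling division */
        int lo = rank * chunk + 1;
        int hi = (lo + chunk - 1 < N) ? lo + chunk - 1 : N;
        int nloc = (hi >= lo) ? hi - lo + 1 : 0;

        double *a = malloc(chunk * sizeof *a);
        double *b = malloc(chunk * sizeof *b);
        double *c = malloc(chunk * sizeof *c);

        for (int i = 0; i < nloc; i++) {              /* arbitrary local data */
            b[i] = lo + i;
            c[i] = 2.0 * (lo + i);
        }

        /* The data-parallel statement, restricted to the local block. */
        for (int i = 0; i < nloc; i++)
            a[i] = b[i] + c[i];

        if (nloc > 0)
            printf("rank %d owns A(%d:%d)\n", rank, lo, hi);

        free(a); free(b); free(c);
        MPI_Finalize();
        return 0;
    }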

Several compiler projects at FIRST are targeted towards parallel languages. For example, a parallel Lisp compiler based on the PEACE operating system was developed, and FIRST also participates in the European PREPARE project (Programming Environment for Parallel Architectures). Contributions of GMD FIRST to PREPARE include a Fortran 90/HPF front-end, the Smart transformation tool, and several compiler components implementing analysis and transformation methods needed in a parallelizing and optimizing compiler.

All hardware and operating system work is concentrated at FIRST. The main hardware activities have been the construction of the SUPRENUM computer and the MANNA project; a new architecture is under development in the META project. The PEACE family of operating systems for parallel machines was originally developed for SUPRENUM. An object-oriented version of PEACE is now machine-independent and is in use on a variety of different parallel computers.



1994-11-30, Elke Finke