Beowulf History

by Phil Merkey

In late 1993, Donald Becker and Thomas Sterling began sketching the outline of a commodity-based cluster system designed as a cost-effective alternative to large supercomputers. In early 1994, working at CESDIS under the sponsorship of the HPCC/ESS project, they started the Beowulf Project.

The initial prototype was a cluster computer consisting of 16 DX4 processors connected by channel-bonded Ethernet. The machine was an instant success, and their idea of providing COTS (Commodity Off The Shelf) based systems to satisfy specific computational requirements quickly spread through NASA and into the academic and research communities.

Some of the major accomplishments of the Beowulf Project will be chronicled below, but a non-technical measure of success is the observation that researchers within the High Performance Computing community now refer to such machines as "Beowulf Class Cluster Computers." Beowulf clusters are now recognized as a genre within the HPC community.

The next few paragraphs provide a brief history of the Beowulf Project and a discussion of certain aspects or characteristics of the Project that appear to be key to its success.

The Center of Excellence in Space Data and Information Sciences (CESDIS) is a division of the non-profit University Space Research Association (USRA). CESDIS is located at the Goddard Space Flight Center in Greenbelt, Maryland, and is supported in part by the NASA Earth and Space Sciences (ESS) project.

One of the goals of the ESS project is to determine the applicability of massively parallel computers to the problems faced by the Earth and space sciences community. The first Beowulf was built to address problems associated with the large data sets that are often involved in ESS applications.

Contributing Factors
A number of factors have contributed to the growth of Beowulf class computers.

The combination of these conditions (hardware, software, experience, and expectation) provided the environment that made the development of Beowulf clusters a natural evolutionary event.

Perhaps as important to the Beowulf project as the performance improvements in microprocessors are the recent cost/performance gains in network technology. The history of MIMD computing chronicles many academic groups and commercial vendors who built multiprocessor machines based on what were then state-of-the-art microprocessors. These machines always required special "glue" chips or one-of-a-kind interconnection schemes.

For the academic community this led to interesting research and the exploration of new ideas, but it also resulted in one-of-a-kind machines. The life cycle of such machines usually correlated with the life cycle of the graduate careers of those working on them. Vendors often chose special features or interconnection schemes to enhance certain characteristics of their equipment or to tailor machines for a particular market. Exploiting these enhancements required programmers to adopt a vendor-specific programming model, which often led to dead ends with respect to software development. The cost-effectiveness of high performance networks for PC class machines, together with Linux support for them, has enabled the construction of balanced systems built entirely of COTS technology, which has made generic architectures and programming models practical.

The first Beowulf was built with DX4 processors and 10Mbit/s Ethernet. The processors were too fast for a single Ethernet, and Ethernet switches were still expensive. To balance the system, Don Becker rewrote his Ethernet drivers for Linux and built a "channel bonded" Ethernet in which the network traffic was striped across two or more Ethernets. As 100Mbit/s Ethernet and 100Mbit/s Ethernet switches have become cost-effective, the need for channel bonding has diminished (at least for now).

By late 1997, a good choice for a balanced system was sixteen 200MHz P6 processors connected by Fast Ethernet and a Fast Ethernet switch. The exact configuration of a balanced cluster continues to change and remains dependent on the size of the cluster and the relationship among processor speed, network bandwidth, and the current price list for the components. An important characteristic of Beowulf clusters is that changing the processor type and speed, the network technology, or the relative costs of components does not change the programming model. Therefore, users of these systems can expect to enjoy more forward compatibility than we have experienced in the past.

Another contributing factor to forward compatibility is the system software used on Beowulf. With the maturity and robustness of Linux, GNU software, and the "standardization" of message passing via PVM and MPI, programmers now have a guarantee that the programs they write will run on future Beowulf clusters---regardless of who makes the processors or the networks.
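As a minimal illustration of this portable programming model (a sketch for this article, not code from the original project), the following C program uses only standard MPI calls to pass a single integer from one process to another. Because it depends on nothing beyond the MPI standard, the same source compiles and runs unchanged on any Beowulf-style cluster with an MPI implementation installed, whatever the processors or network underneath.

    /* Minimal MPI message-passing sketch: rank 0 sends one integer to rank 1.
       Only standard MPI calls are used, so the program is independent of the
       cluster's processors and interconnect. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

        if (size >= 2) {
            if (rank == 0) {
                value = 42;                     /* arbitrary payload */
                MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
                printf("rank 1 received %d from rank 0\n", value);
            }
        }

        MPI_Finalize();
        return 0;
    }

A program like this is typically compiled with mpicc and launched with mpirun across the nodes of a cluster; the same model applies whether the interconnect is channel-bonded 10Mbit/s Ethernet or switched Fast Ethernet.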

The Community Behind Beowulf
The first Beowulf was built by and for researchers with parallel programming experience to address a particular computational requirement of the ESS community. Many of these researchers had goals and expectations, with respect to detailed performance information, development tools, and new programming models, that differed from those of the MPP vendors and system administrators. This led to a "do-it-yourself" attitude. In addition, access to a large machine often meant access to only a tiny fraction of its resources, shared among many users.

For these users, building a cluster that they could completely control and fully utilize resulted in a more effective, higher performance computing platform. The fact that the components were affordable increased the value to this segment of the research community. While learning to build and run a Beowulf cluster was a considerable investment, there were substantial benefits to not being tied to a proprietary solution.

These hard-core parallel programmers were first and foremost interested in high performance computing applied to difficult problems. At Supercomputing '96, both NASA and DOE demonstrated clusters costing less than $100,000 that achieved greater than one gigaflop/s of sustained performance. A year later, NASA researchers at Goddard Space Flight Center combined two clusters for a total of 199 P6 processors and ran a PVM version of a PPM (Piece-wise Parabolic Method) code at a sustained rate of 10.1 Gflop/s. In the same week (on the floor of Supercomputing '97), Caltech's 140-node cluster ran an N-body problem at a rate of 10.9 Gflop/s.

While not all Beowulf clusters are supercomputers, one can build a Beowulf that is powerful enough to attract the interest of supercomputer users. Beyond the seasoned parallel programmer, Beowulf clusters have been built and used by programmers with little or no parallel programming experience. Beowulf clusters give universities, which often have limited resources, an excellent platform for teaching parallel programming courses as well as cost-effective computing for their computational scientists. The startup cost in a university setting is minimal for the usual reasons: most students interested in such a project are likely to be running Linux on their own computers, so setting up a lab and learning to write parallel programs are part of the learning experience.

Growth of Beowulf
The Beowulf Project grew from the first Beowulf machine, and likewise the Beowulf community has grown from the NASA project. Like the Linux community, the Beowulf community is a loosely organized confederation of researchers and developers. Each organization has its own agenda and its own set of reasons for developing a particular component or aspect of the Beowulf system. As a result, Beowulf class cluster computers range in size from a few nodes to several hundred nodes. Some systems have been built by computational scientists and are used in an operational setting, others have been built as test-beds for systems research, and still others serve as inexpensive platforms for learning about parallel programming.

Most people in the Beowulf community are independent and self-sufficient. Since everyone is pursuing such different avenues, the notion of central control within the Beowulf community would be too limiting. The community is held together by the willingness of its members to share ideas and discuss successes and failures in their development efforts. The mechanisms that facilitate this interaction are sites like www.beowulf.org, the Beowulf mailing lists, individual web pages, and occasional meetings or workshops.

The future of the Beowulf project will be determined collectively by the individuals and organizations contributing to the Beowulf project and by the future of mass-market COTS. As microprocessor technology continues to evolve, higher speed networks become cost-effective, and more application developers move to parallel platforms, the Beowulf project will continue to advance.

Phil Merkey is an assistant professor in Mathematics and Computer Science at Michigan Technological University.
