From jim at ks.uiuc.edu Mon Jul 2 22:04:59 2001
From: jim at ks.uiuc.edu (Jim Phillips)
Date: Tue Nov 9 01:14:19 2010
Subject: [Beowulf-announce] NAMD 2.3b2 Release Announcement
Message-ID: 

+--------------------------------------------------------------------+
|                                                                    |
|                  NAMD 2.3b2 Release Announcement                   |
|                                                                    |
+--------------------------------------------------------------------+

                                                          July 2, 2001

The Theoretical Biophysics Group at the University of Illinois is proud to announce the public release of a new version of NAMD, a parallel, object-oriented molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD is distributed free of charge and includes source code. NAMD development is supported by the NIH National Center for Research Resources.

NAMD 2.3b2 has several advantages over NAMD 2.2:

- AMBER file compatibility (parm and coordinate input only).
- The new psfgen tool for building PSF structure files (a short example follows this list).
- Simpler to run on a single workstation. (No more rsh!)
- New ports to the Compaq AlphaServer SC and Scyld Beowulf.
- Improved serial performance, particularly with PME on Alpha.
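As a taste of psfgen, here is a minimal sketch of a structure-building script. The topology and PDB file names are hypothetical placeholders, and real structures usually need additional residue aliases and patches; psfgen scripts are Tcl, so full-line comments start with "#".

    # Minimal psfgen script (file names are hypothetical).
    # Load a CHARMM topology file defining the residues to be built.
    topology top_all22_prot.inp
    # Define one protein segment whose residue sequence is read from the PDB.
    segment PROT {
      pdb protein.pdb
    }
    # Copy coordinates from the PDB into segment PROT, then guess any
    # missing positions (hydrogens, for example).
    coordpdb protein.pdb PROT
    guesscoord
    # Write the structure file and matching initial coordinates for NAMD.
    writepsf protein.psf
    writepdb protein_init.pdb

A script like this can be fed to the standalone tool with "psfgen < build.pgn"; see the NAMD documentation for the exact options in your version.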
NAMD is available from http://www.ks.uiuc.edu/Research/namd/.

For your convenience, NAMD has been ported to and will be installed on both the PSC TCS1 Alpha cluster and the NCSA Platinum Linux cluster. Please consider the performance advantages of running NAMD when you apply for time on these new resources. Benchmarks for your proposal are available at http://www.ks.uiuc.edu/Research/namd/performance.html

The Theoretical Biophysics Group encourages NAMD users to be closely involved in the development process by reporting bugs, contributing fixes, responding to periodic surveys, and in other ways. Questions or comments may be directed to namd@ks.uiuc.edu.

We are eager to hear from you, and thank you for using our software!

From agrajag at scyld.com Wed Jul 18 07:17:41 2001
From: agrajag at scyld.com (Sean Dilda)
Date: Tue Nov 9 01:14:19 2010
Subject: [Beowulf-announce] Next Scyld Release
Message-ID: <20010718071449.A15500@blueraja.scyld.com>

This hit our website this morning, and I thought I'd forward it along since so many have been wondering about our next release. The list of new features is near the bottom of the press release. It's not mentioned in the press release, but this next release also includes all the Red Hat errata (minus rpm4), including a kernel based on Red Hat's 2.2.19 errata kernel.

Scyld Computing Corporation Releases Latest Professional Version of Next Generation Beowulf Clustering

ANNAPOLIS, MD (July 18, 2001) - Scyld Computing Corporation today released the Scyld Beowulf Professional Edition, the latest version of its next-generation cluster operating system software. Professional Scyld Beowulf greatly simplifies cluster setup, integration, and administration while providing seamless scalability. Coupled with documentation and support from the original Beowulf development team, Professional Scyld Beowulf provides the first true clustering solution that can be installed and run directly out of the box.

Beowulf cluster systems connect a series of computers, using a modified version of Linux, to form a parallel processing supercomputer. The Scyld Beowulf operating system improves upon traditional Beowulf clusters: all operations performed on the linked cluster nodes are initiated and administered through a single master node, and its Single System Image (SSI) design makes the cluster act and feel like a single computer. This drastically simplifies maintenance and eliminates common pitfalls of clustering such as version skew and runaway jobs. Scyld Beowulf needs to be installed only on the master node and will run clusters of hundreds of compute nodes. Professional Scyld Beowulf is a complete software system; no other software is required to create the cluster.
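A minimal sketch of what this single-point administration looks like in practice, assuming the BProc-style tools (bpstat, bpsh) and the bundled MPICH mpirun that ship with Scyld Beowulf; the node number and program name below are illustrative:

    # Everything below runs on the master node; compute nodes need no logins.
    bpstat                      # show the state of every compute node
    bpsh 4 uname -r             # execute a command on node 4 (no rsh needed)
    mpirun -np 16 ./my_solver   # launch a 16-process MPI job across the cluster

Because processes are started on the compute nodes directly from the master, there are no per-node logins or rsh daemons to manage.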
For industries such as energy, biotechnology, and finance, there is growing commercial demand for High Performance Computing (HPC) clusters as companies look for more cost-effective ways to analyze enormous amounts of data. One example of this demand is in the petroleum industry, where 3-D seismic modeling is used to locate oil fields. Because large amounts of data must be processed to create an image of a complex geology, seismic modeling is a very compute-intensive process. Such imaging turns raw, unprocessed data into a coherent image that accurately depicts the situation below the earth's surface, saving millions through reduced exploration and drilling costs and improved production.

Scyld Beowulf has distinct price advantages over traditional Symmetric Multiprocessing (SMP) or vector supercomputers. Using commodity servers and hardware components, a Scyld Beowulf-based system can provide the same amount of computing power at 10%-50% of the cost. Continued savings come from the lower cost of supporting a broad-based operating system compared with the expensive maintenance contracts required for the proprietary operating systems of traditional SMP or vector supercomputers. Companies can save even more by installing Scyld Beowulf on their existing hardware, setting up a cluster system for a fraction of the cost of traditional solutions. In addition, Scyld Beowulf's simplified master-node installation and administration eliminates the administrative costs and potential risks inherent in other HPC clusters that require nodes to be administered separately.

Donald Becker, Scyld's founder and Chief Technology Officer, and other Scyld developers were the original architects of Beowulf computing while working at NASA as research scientists. That same team has improved and adapted the technology for the commercial market, using the same exacting standards for software engineering, development, quality control, test methodologies, and support that NASA uses to ensure successful missions. The result is Professional Scyld Beowulf.

"Our mission at Scyld is to create software that will bring cost-effective, easily managed, high-performance computing to the commercial marketplace," said Becker. "The new features in this release add significantly to our existing standard upon which high performance cluster applications have been developed. This new release will further stimulate deployment of turnkey commercial applications."

Scyld has formed, and is in the process of forming, partnerships and alliances with many clustering industry leaders, including Compaq, Arrow/Wyle, API, Penguin Computing, GIGABYTE Server Group, PSSC Labs, Aspen Systems, Atipa Technologies, RLX Technologies, CFI, Racksaver, eLinux, Cendio Systems, Western Scientific, VA Linux, and Myricom, among others. Scyld works closely with its partners to certify seamless operation of hardware platforms such as Compaq's ProLiant DL380 and DL360 servers, Compaq's DS10 series AlphaServers, API's CS20 and UP2000+, Penguin Computing's servers, RLX Technologies' System 324 blade technology, and Intel's ServerBoards.

In addition, Scyld has partnerships and alliances with traditional parallel application and tool providers such as MPI Software Technology Inc., Veridian PBS-Pro, Wolfram Mathematica, NAG, Absoft, Lahey, Backbone Networks, NAMD, CHARMM, and TurboGenomics. Scyld has a formal channel program to authorize and train value-added resellers to provide off-the-shelf, fully integrated and supported turnkey cluster systems.

New enhancements in the latest Professional Edition include full Alpha support with simplified installation tools, full Myrinet and Gigabit Ethernet support, the Scyld Beowulf Batch Queue system (BBQ), automatic node addition, web-based administration and job monitoring, advanced hardware health and status monitoring, support for the Parallel Virtual File System (PVFS), NFSv3, and ROMIO file systems, an updated MPICH library, and much more.

For detailed pricing and more information, visit the Scyld website at www.scyld.com. Professional Scyld Beowulf comes bundled with one year of support from the original development team and full documentation. The professional documentation set includes Installation, System Administration, User's Guide, and Programmer's Reference volumes. In addition, Scyld offers Beowulf clustering certification training from the original Scyld Beowulf developers.

From sarah at cacr.caltech.edu Sun Jul 29 12:25:27 2001
From: sarah at cacr.caltech.edu (Sarah Emery Bunn)
Date: Tue Nov 9 01:14:19 2010
Subject: [Beowulf-announce] Cluster 2001 Conference Oct 8-11
Message-ID: <3B619A81.2E9E3A7C@cacr.caltech.edu>

                          IEEE CLUSTER 2001
          The IEEE International Conference on Cluster Computing
          Sutton Place Hotel, Newport Beach, California, USA
                          Oct. 8-11, 2001

Sponsored by: The IEEE Computer Society, through the Task Force on Cluster Computing (TFCC)

The rapid emergence of COTS-based cluster computing as a major strategy for delivering high performance to technical and commercial applications is driven by the superior cost effectiveness and flexibility achievable through ensembles of PCs, workstations, and servers. Cluster computing, encompassing Beowulf-class systems, SMP clusters, and ASCI machines, is redefining the manner in which parallel and distributed computing is accomplished today and is the focus of important research in hardware, software, and application development.

This year, for the first time, Cluster 2001 merges five popular professional conferences and workshops (IWCC, PC-NOW, CCC, JPC4, and German CC) into an integrated, large-scale, international forum, to be held in North America. Last year's conference, IEEE Cluster 2000, was held in Chemnitz, Germany.

The program includes an introduction by Thomas Sterling (Caltech/JPL); keynotes by Steve Oberlin (Unlimited Inc), Hans Zima (University of Vienna), and Charles Seitz (Myricom); over 46 invited and contributed papers; three panel sessions; a poster session; an exhibition; and six tutorials. Additionally, several social events are planned, including a banquet on the Queen Mary in nearby Long Beach.

For further information: http://www.cacr.caltech.edu/cluster2001/