From pw at osc.edu  Wed Jan 9 17:46:01 2002
From: pw at osc.edu (Pete Wyckoff)
Date: Tue Nov 9 01:14:19 2010
Subject: [Beowulf-announce] ANNOUNCEMENT: mpiexec mailing list and mpich/p4 support
Message-ID: <20020109155618.I17296@osc.edu>

Mpiexec is a replacement program for the standard "mpirun" script that people have traditionally used to start parallel jobs.  Mpiexec is used specifically to initialize a parallel job from within a PBS batch or interactive environment.

Mpiexec uses the task manager library of PBS to spawn copies of the executable on the nodes in a PBS allocation.  This is much faster than invoking a separate rsh once for each process.  Another benefit is that resources used by the spawned processes are accounted correctly with mpiexec and reported in the PBS logs.  Plus there are lots of knobs you can twist to control job placement, input and output stream handling, and other variations.

The distribution, including instructions for CVS access, can be found at http://www.osc.edu/~pw/mpiexec/

We've recently created a mailing list for mpiexec, mpiexec@osc.edu.  You can subscribe using the standard mailman techniques; see http://email.osc.edu/mailman/listinfo/mpiexec for information and archives.

The latest news is the addition of support for those who use ethernet for message passing, using MPICH with its P4 library.  The other MPI libraries supported are MPICH/GM (Myrinet) and EMP (research gigabit ethernet).  I'd love to support LAM as well, but could use some help with that.

Mpiexec is developed in a Linux/ia64 environment, but there's no reason it shouldn't work on clusters using other POSIX-like operating systems.  Patches to support other systems will be happily accepted.

To use all the functionality of mpiexec on your cluster, you'll need to be willing to apply a small patch to your PBS distribution.  If you use MPICH/P4, you'll need to apply a rather large patch to MPICH, although the MPICH developers are working to fold much of it into their official distribution.

        -- Pete
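As a rough illustration of the kind of program mpiexec spawns, here is a minimal MPI "hello world" in C.  The PBS resource request and the mpiexec invocation sketched in the comment are illustrative assumptions, not options taken from the announcement above.

    /*
     * Minimal MPI program of the sort mpiexec would spawn across the
     * nodes of a PBS allocation.  A hypothetical batch script might
     * contain something like:
     *
     *     #PBS -l nodes=4:ppn=2
     *     mpiexec ./hello
     *
     * (the resource line and invocation are illustrative only).
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank      */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total spawned processes  */
        MPI_Get_processor_name(name, &len);    /* node this copy landed on */

        printf("Hello from rank %d of %d on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }

Built with the usual MPI compiler wrapper (mpicc), the same source works whether the underlying library is MPICH/GM or MPICH/P4.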
From rross at mcs.anl.gov  Mon Jan 14 16:27:01 2002
From: rross at mcs.anl.gov (Robert Ross)
Date: Tue Nov 9 01:14:19 2010
Subject: [Beowulf-announce] PVFS v1.5.3 release
Message-ID:

The PVFS development team is happy to announce the latest release of the Parallel Virtual File System (PVFS), version 1.5.3.

PVFS is an open source parallel file system implementation for Linux clusters that operates over TCP/IP and uses existing disk hardware, meaning that you can implement a parallel file system on your cluster without additional hardware costs.

This release includes a number of bug fixes and configuration improvements, many of which were contributed by users of PVFS.  Additional debugging utilities make it even easier to configure PVFS on your system, and the newest Linux 2.4 kernels are supported as well.  This release represents a significant improvement in stability over the previous release, 1.5.2.

As always, the GPL'd source for PVFS is available from:

    ftp://ftp.parl.clemson.edu/pub/pvfs

For more information on PVFS, including papers, FAQ, User's Guide, and a Quick Start guide, see the PVFS home page:

    http://www.parl.clemson.edu/pvfs

Regards,

Rob
(on behalf of the team)
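PVFS can be mounted so that it appears to applications as an ordinary directory, in which case plain POSIX I/O is enough for a first test.  The sketch below assumes a PVFS file system is already mounted at /mnt/pvfs, a hypothetical mount point not named in the release announcement.

    /*
     * Ordinary POSIX I/O against a PVFS-mounted directory.  The path
     * /mnt/pvfs is a hypothetical mount point used only for illustration.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/mnt/pvfs/hello.dat";
        const char msg[] = "written through the parallel file system\n";

        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, msg, sizeof(msg) - 1) < 0) {
            perror("write");
            close(fd);
            return 1;
        }
        close(fd);
        return 0;
    }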
From rocketcalc at rocketcalc.com  Mon Jan 21 20:02:00 2002
From: rocketcalc at rocketcalc.com (ROCKETCALC)
Date: Tue Nov 9 01:14:19 2010
Subject: [Beowulf-announce] Personal cluster computer
Message-ID: <3C49D344.2000408@rocketcalc.com>

Dear Beowulf readers:

ROCKETCALC announces Redstone, the first personal cluster computer.  Approximately the size of a mid-tower PC (17 x 11 x 19.5 in), Redstone contains eight Pentium processors connected by 100 Mbps switched ethernet and up to 8 GB of PC-133 SDRAM.  Redstone runs on Motor, an embedded Linux distribution developed by Rocketcalc, and is easily managed with the Houston graphical cluster management software.

Redstone is designed to be easily integrated with Linux workstations and includes a CD-ROM collection of the most popular message-passing libraries and parallel utilities.  It is well suited to parallel high-performance scientific computation, parallel application development, and classroom use.

For pricing information and available configurations, please visit http://www.rocketcalc.com or send e-mail to info@rocketcalc.com.

Best Regards,

Management
ROCKETCALC LLC

From jim at ks.uiuc.edu  Wed Jan 30 19:33:00 2002
From: jim at ks.uiuc.edu (Jim Phillips)
Date: Tue Nov 9 01:14:19 2010
Subject: [Beowulf-announce] NAMD 2.4b1 (Parallel MD) Release
Message-ID:

Hi,

NAMD is a free-as-in-beer-with-source-code parallel molecular dynamics program that runs quite well on even low-end clusters (our local clusters are 32 Athlons with fast ethernet) and extremely well on Myrinet clusters (up to 512 processors at NCSA).  We provide binaries and even a Scyld Beowulf port (we run Scyld locally), so give it a try!

-Jim

+--------------------------------------------------------------------+
|                                                                    |
|                  NAMD 2.4b1 Release Announcement                   |
|                                                                    |
+--------------------------------------------------------------------+

                                                      January 25, 2002

The Theoretical Biophysics Group at the University of Illinois is proud to announce the public release of a new version of NAMD, a parallel, object-oriented molecular dynamics code designed for high-performance simulation of large biomolecular systems.  NAMD is distributed free of charge and includes source code.  NAMD development is supported by the NIH National Center for Research Resources.

NAMD 2.4b1 has several advantages over NAMD 2.3:

- Greatly improved parallel scaling with particle mesh Ewald.

- GROMACS ASCII topology and coordinate input file compatibility.

NAMD is available from http://www.ks.uiuc.edu/Research/namd/.

For your convenience, NAMD has been ported to and will be installed on the machines at the NSF-sponsored national supercomputing centers.  If you are planning substantial simulation work of an academic nature, you should apply for these resources.  Benchmarks for your proposal are available at:

    http://www.ks.uiuc.edu/Research/namd/performance.html

The Theoretical Biophysics Group encourages NAMD users to be closely involved in the development process by reporting bugs, contributing fixes, responding to periodic surveys, and other means.  Questions or comments may be directed to namd@ks.uiuc.edu.

We are eager to hear from you, and thank you for using our software!

From hogue at mshri.on.ca  Wed Jan 30 19:58:02 2002
From: hogue at mshri.on.ca (Christopher Hogue)
Date: Tue Nov 9 01:14:19 2010
Subject: [Beowulf-announce] New distributed computing project - distributedfolding.org
Message-ID: <3C57386B.50ACBF36@mshri.on.ca>

Hi Folks,

Another distributed computing project has begun; you may want to take a look and, if you have spare cycles, consider contributing your cluster.

The project samples billions of protein 3D structures and is detailed on the web site:

    www.distributedfolding.org

A text-mode client is available for Beowulf cluster contributors, instead of the Windows screensaver.  Binaries are available for Win/Linux/LinuxPPC/Tru64/Irix/Solaris/HPUX-11; Mac OS X binaries will be posted shortly.

There is an extensive FAQ on the web site:

    http://www.distributedfolding.org/faq.html

Instructions for running the client non-interactively on a cluster are provided in the readme:

    http://www.distributedfolding.org/readmeclient.html

The project is also listed on intel.com/cure and has been extensively tested over the past year.

Thanks for your consideration.

Christopher Hogue, Ph.D.
Senior Scientist, Bioinformatics, and
Assistant Professor, Dept. of Biochemistry, U. of Toronto
Samuel Lunenfeld Research Institute
Mt. Sinai Hospital
600 University Ave
Toronto ON Canada