[Beowulf] small-footprint MS Win "MinWin"

Nathan Moore ntmoore at gmail.com
Wed Oct 24 13:19:19 PDT 2007


I came to the party late and admit to only reading a few of the messages.
My apologies if you already mentioned what I suggested.

As a grad student, I spent a summer porting computational chemistry and
bioinformatics packages to BGL.  Once you know the routine, the porting
process is fairly straightforward.  The original vendor specifications make
the machine fairly unusual compared to your regular linux box, i.e., each
compute node is actually two processors that run at or below 1 GHz.  The
compute processors have a fairly exotic (i.e. powerful) math FPU attached;
also, the compute processors have small memory and no disk (512 MB per
two-core compute node as I recall - that's probably been upped since I was
there.)

In line with these hardware specs, BGL is fantastic at compute jobs that are
natural candidates for parallelization (the 2-D solution of the Laplace
equation, for example).  If your code doesn't scale well beyond 128 nodes,
you've still got the ability to run lots of jobs in parallel.
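To illustrate the sort of decomposition I mean, here is a minimal, generic
MPI-in-C sketch of a Jacobi solver for the 2-D Laplace equation (this is not
code from our BGL port; the grid size, iteration count, and boundary values
are arbitrary choices for the example):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N     256              /* interior grid points per side (arbitrary) */
#define ITERS 1000             /* fixed sweep count, no convergence test    */
#define U(a, i, j) ((a)[(i) * (N + 2) + (j)])

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int rows = N / size;        /* assume N divides evenly among ranks */
    int up   = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int down = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    /* local block of rows plus one ghost row above and one below */
    double *u    = calloc((size_t)(rows + 2) * (N + 2), sizeof *u);
    double *unew = calloc((size_t)(rows + 2) * (N + 2), sizeof *unew);

    /* top edge of the global domain held at 1.0; all other edges at 0 */
    if (rank == 0)
        for (int j = 0; j < N + 2; j++)
            U(u, 0, j) = U(unew, 0, j) = 1.0;

    for (int it = 0; it < ITERS; it++) {
        /* swap ghost rows with the neighbouring ranks */
        MPI_Sendrecv(&U(u, 1, 0),        N + 2, MPI_DOUBLE, up,   0,
                     &U(u, rows + 1, 0), N + 2, MPI_DOUBLE, down, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&U(u, rows, 0),     N + 2, MPI_DOUBLE, down, 1,
                     &U(u, 0, 0),        N + 2, MPI_DOUBLE, up,   1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Jacobi update: each interior point becomes the average of its
           four neighbours */
        for (int i = 1; i <= rows; i++)
            for (int j = 1; j <= N; j++)
                U(unew, i, j) = 0.25 * (U(u, i - 1, j) + U(u, i + 1, j) +
                                        U(u, i, j - 1) + U(u, i, j + 1));

        double *tmp = u; u = unew; unew = tmp;   /* swap old and new grids */
    }

    if (rank == 0)
        printf("finished %d Jacobi sweeps on a %dx%d grid over %d ranks\n",
               ITERS, N, N, size);

    free(u); free(unew);
    MPI_Finalize();
    return 0;
}

On a plain cluster you'd build and run this with something like
"mpicc laplace2d.c -o laplace2d" and "mpirun -np 4 ./laplace2d"; on BGL the
same structure goes through the xlc-based MPI compiler instead.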

The PowerPC 440 chip used actually comes from the embedded market.  The
story as I heard it was that it was chosen in part because of its low power
consumption.

NT Moore

On 10/24/07, Robert G. Brown <rgb at phy.duke.edu> wrote:
>
> On Wed, 24 Oct 2007, Nathan Moore wrote:
>
> > Your message misses the point.  If you're running an architecture that
> > has thousands of cpu cores on it, it is a colossal waste to run the
> > normal set of schedulers and daemons on every core.  The efficient use
> > of such a resource is to only bother with multitasking and the user
> > experience on nodes that the user will access - i.e. the compile/submit
> > node.
>
> Well, I thought that I said that (not in this last message, but before).
> Something about how very large systems shift the cost-benefit...
>
> If not, I stand corrected.
>
> > With BGL/BGP you write code in C, C++, or Fortran and then send it to a
> > special compiler (a variant of xlc or xlf).  Given that a small job on a
> > Blue Gene is 512 nodes, your code will include MPI calls.  The core
> > itself is a PowerPC variant, so if you want to get into fancy stuff like
> > loop unrolling and the like it's not a stretch if you're already
> > familiar with hand-coding for a Power architecture (think P-series, or
> > Apple's G3/4/5 chip).  If you're unambitious :), IBM has a fast-math
> > library for the Power series that works pretty well...
> >
> > In some sense BGL is the essence of a "compute" node.
>
> So it sounds like it is easier than I make it out to be, which is great!
> Would you say that the result is really a commodity cluster, or is it
> more of a componentized supercomputer?  Scyld performs a very similar
> function (as was also noted on the thread) on commodity hardware, so the
> choice of Scyld is made mostly independent of the particular mix of
> processor, network(s) and so on used in the nodes.
>
>     rgb
>
> --
> Robert G. Brown
> Duke University Dept. of Physics, Box 90305
> Durham, N.C. 27708-0305
> Phone(cell): 1-919-280-8443
> Web: http://www.phy.duke.edu/~rgb
> Lulu Bookstore: http://stores.lulu.com/store.php?fAcctID=877977
>



-- 
- - - - - - -   - - - - - - -   - - - - - - -
Nathan Moore
Assistant Professor, Physics
Winona State University
AIM: nmoorewsu
- - - - - - -   - - - - - - -   - - - - - - -

