I came to the party late and admit to only reading a few of the messages. My apologies if you already mentioned what I suggest below.<br><br>As a grad student, I spent a summer porting comp chem and bioinformatics packages to BGL. Once you know the routine, the porting process is fairly straightforward. The original vendor specifications make the machine fairly unique in the context of your regular Linux box: each compute node is actually two processors that run at or below 1 GHz. The compute processors have a fairly exotic (i.e., powerful) math FPU attached; they also have small memory and no disk (512 MB per two-core compute node as I recall - that's probably been upped since I was there).
<br><br>In line with these hardware specs, BGL is fantastic at compute jobs that are natural fits for parallelization (a 2-D Laplace equation solver, for example). If your code doesn't scale well beyond 128 nodes, you still have the ability to run lots of independent jobs in parallel.
<br><br>The PowerPC 440 chip used actually comes from the embedded market. The story as I heard it was that it was chosen in part because of its low power consumption. <br><br>NT Moore<br><br><div><span class="gmail_quote">
On 10/24/07, <b class="gmail_sendername">Robert G. Brown</b> <<a href="mailto:rgb@phy.duke.edu">rgb@phy.duke.edu</a>> wrote:</span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
On Wed, 24 Oct 2007, Nathan Moore wrote:<br><br>> Your message misses the point. If you're running an architecture that has<br>> thousands of cpu cores on it, it is a colossal waste to run the normal set<br>> of schedulers and daemons on every core. The efficient use of such a
<br>> resource is to only bother with multitasking and the user experience on<br>> nodes that the user will access - i.e., the compile/submit node.<br><br>Well, I thought that I said that (not in this last message, but before).
<br>Something about how very large systems shift the cost-benefit...<br><br>If not, I stand corrected.<br><br>> With BGL/BGP you write code in C, C++, or Fortran and then send it to a<br>> special compiler (a variant of xlc or xlf). Given that a small job on a
<br>> Blue Gene is 512 nodes, your code will include MPI calls. The core itself<br>> is a PowerPC variant, so if you want to get into fancy stuff like loop<br>> unrolling and the like, it's not a stretch if you're already familiar with
<br>> hand-coding for a Power architecture (think P-series, or Apple's G3/4/5<br>> chip). If you're unambitious :), IBM has a fast-math library for the Power<br>> series that works pretty well...<br>>
<br>> In some sense BGL is the essence of a "compute" node.<br><br>So it sounds like it is easier than I make it out to be, which is great!<br>Would you say that the result is really a commodity cluster, or is it
<br>more of a componentized supercomputer? Scyld performs a very similar<br>function (as was also noted on the thread) on commodity hardware, so the<br>choice of Scyld is made mostly independent of the particular mix of
<br>processor, network(s) and so on used in the nodes.<br><br> rgb<br><br>--<br>Robert G. Brown<br>Duke University Dept. of Physics, Box 90305<br>Durham, N.C. 27708-0305<br>Phone(cell): 1-919-280-8443<br>Web: <a href="http://www.phy.duke.edu/~rgb">
http://www.phy.duke.edu/~rgb</a><br>Lulu Bookstore: <a href="http://stores.lulu.com/store.php?fAcctID=877977">http://stores.lulu.com/store.php?fAcctID=877977</a><br></blockquote></div><br><br clear="all"><br>-- <br>- - - - - - - - - - - - - - - - - - - - -
<br>Nathan Moore<br>Assistant Professor, Physics<br>Winona State University<br>AIM: nmoorewsu <br>- - - - - - - - - - - - - - - - - - - - -