[Beowulf] a "microwulf" posted

Peter St. John peter.st.john at gmail.com
Fri Aug 31 12:27:41 PDT 2007


Kilian,
I'm a noobie at the hardware issues, but your machine at Stanford sounds like
great bang/buck. Plainly economies of scale work both ways (I can't imagine
wiring up that many nodes with Ethernet). The factor-of-five thing sounds
like a careless gloss of some numbers from some edition of the Top500 list,
from some date, by the INQ. Adams' site seems more modest. What I love about
it is that it completely specs out what he did, very clearly. He gets an A
for documentation; I'm not such a n00b at documentation :-)
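A quick back-of-envelope check of your numbers below bears that out (the
assumption that both $/GFlops figures are computed against peak rather than
sustained is mine):

    # Sanity check of the INQ's "five times as much" claim, using the
    # figures from Kilian's mail below.
    stanford_peak_gflops = 20.6e3       # 20.6 TFlops peak
    stanford_linpack_gflops = 15.5e3    # 15.5 TFlops achieved (Top500 #54)
    stanford_cost_per_gflops = 109.0    # USD
    microwulf_cost_per_gflops = 94.0    # USD

    ratio = stanford_cost_per_gflops / microwulf_cost_per_gflops
    print(f"price ratio: {ratio:.2f}x")                 # ~1.16x, nowhere near 5x
    efficiency = stanford_linpack_gflops / stanford_peak_gflops
    print(f"Linpack efficiency: {efficiency:.0%}")      # ~75%
    # Implied total price, if the $/GFlops figure is against peak:
    implied_total = stanford_cost_per_gflops * stanford_peak_gflops
    print(f"implied cluster cost: ${implied_total:,.0f}")  # ~$2.25M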
Peter


On 8/31/07, Kilian CAVALOTTI <kilian at stanford.edu> wrote:
>
> On Friday 31 August 2007 09:37:49 am Peter St. John wrote:
> > I saw at wiki http://en.wikipedia.org/wiki/Beowulf_%28computing%29
> > that someone posted a link to
> > http://www.calvin.edu/~adams/research/microwulf/, a nice description
> > of a "microwulf" at Calvin College. It has brief but useful
> > descriptions of its design, cost broken down by parts in the
> > Manifest, and price/performance specs.
> > If LLNL is an Epic maybe this is only a limerick, but it's a witty
> > limerick.
>
> It's also been posted on the INQ:
> http://www.theinquirer.net/default.aspx?article=42050
>
> It's very interesting from the design point of view, but I'm less
> convinced by the price tag argument.
>
> Stanford University recently purchased a 276-node dual-socket quad-core
> HPC cluster, which is capable of 20.6 TFlops (peak) and actually achieved
> 15.5 TFlops (it is ranked #54 in the latest Top500 list).
>
> The cost came to $109 per GFlops, which is indeed more expensive than
> the microwulf's $94/GFlops, but still far from "five times as
> much", as the INQ article claims. Besides, the price included
> software (a commercial scheduler), support, management capabilities,
> 50TB of high-performance storage, a GigE administration network, and a
> DDR InfiniBand interconnect.
>
> I guess it's easier to lower the cost per GFlops on large-scale
> clusters, but I just wanted to put this into perspective and show that
> you don't necessarily need to build your cluster from scratch to get
> comparable price/performance ratios.
>
> Cheers,
> --
> Kilian
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
>