[Beowulf] standards for GFLOPS / power consumption measurement?

Douglas Eadline - ClusterWorld Magazine deadline at clusterworld.com
Tue May 10 06:15:58 PDT 2005


On Tue, 10 May 2005, Vincent Diepeveen wrote:

> How do you categorize systems bought second-hand?

You have a fixed amount of money. You buy the components for your cluster
from widely available (reproducible) sources. You build the cluster. You
run HPL.
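
To make the bookkeeping concrete, here is a minimal sketch (Python, with
made-up numbers rather than our actual Kronos figures) of how the
price/performance number falls out of the parts receipt and the HPL result:

  # Price/performance sketch -- the cost and GFLOPS values below are
  # hypothetical placeholders, not measured Kronos results.

  total_cost_usd = 2500.00       # sum of the parts receipt (fixed budget)
  hpl_rmax_gflops = 14.0         # sustained GFLOPS reported by HPL (Rmax)

  dollars_per_gflop = total_cost_usd / hpl_rmax_gflops
  print("%.2f dollars per sustained GFLOP" % dollars_per_gflop)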

> 
> I bought a third dual K7 mainboard + 2 processors for 325 euro.
> 
> The rest I removed from old machines that would otherwise get thrown away,
> like an 8GB hard disk. Amazingly, the biggest problem was getting a case
> to keep the noise down :)
> 
> Network cards I got for free, a very nice gesture from someone.
> 
> So when speaking of GFLOPS per dollar at Linpack, this will of course beat
> any current $2500 record, especially for applications needing bandwidth to
> other processors, given what I paid for this self-constructed Beowulf.
> 

The question is how "reproducible" your system is. The components for our
system were available from multiple sources so that others could reproduce
it. Building "stone soup" systems is an interesting project, but in
general these systems are not reproducible. (Of course, there is a
"reproducibility window" as parts go off the market. For instance, we were
originally going to use Netgear (Broadcom) NICs; however, these were
discontinued, so we used Intel. Our goal was to allow the system to be
reproduced by others.)

ClusterWorld Magazine has a continuing series on building/running/using
the system, including the rationale for why we chose certain parts, etc.

--Doug

PS our system nodes are diskless -- Warewulf

> At 05:19 PM 5/9/2005 -0400, Douglas Eadline - ClusterWorld Magazine wrote:
> >On Thu, 5 May 2005, Ted Matsumura wrote:
> >
> >> I've noted that the orionmulti web site specifies 230 Gflops peak, 110
> >> sustained, ~48% of peak with Linpack, which works out to ~$909 / Gflop?
> >> The ClusterWorld value box with 8 Sempron 2500s specifies a peak Gflops by
> >> measuring CPU GHz x 2 (1 FADD, 1 FMUL), and comes out with a rating of 52%
> >> of peak using HPL @ ~$140/Gflop (sustained?)
> >
> >It is hard to compare. I don't know what sustained or peak means in the
> >context of their tests. There is the actual number (which I assume is
> >sustained) and then the theoretical peak (which I assume is peak).
> >
> >And our cost/Gflop does not take into consideration the construction
> >cost. In my opinion, when reporting these types of numbers, there
> >should be two categories: "DIY/self-assembled" and "turnkey". Clearly
> >Kronos is a DIY system and will always have an advantage over a
> >turnkey system.
> >
> >
> >>  So what would the orionmulti measure out with HPL? What would the 
> >> Clusterworld value box measure out with Linpack?
> >
> >Other benchmarks are here (including some NAS runs):
> >
> >http://www.clusterworld.com/kronos/bps-logs/
> >
> >> Another line item spec I don't get is rocketcalc's
> >> ( http://www.rocketcalc.com/saturn_he.pdf ) "Max Average Load"?? What does
> >> this mean?? How do I replicate "Max Average Load" on other systems??
> >> I'm curious if one couldn't slightly up the budget for the ClusterWorld box
> >> to use higher speed procs or maybe dual procs per node and see some
> >> interesting value with regards to low $$/Gflop?? Also, the ClusterWorld box
> >> doesn't include the cost of the "found" utility rack, but does include the
> >> cost of the plastic node boxes. What's up with that??
> >
> >This was explained in the article. We assumed that shelving was optional
> >because others may wish to just put the cluster on existing shelves or a
> >table top (or, with enough Velcro strips and wire ties, build a standalone
> >cube!)
> >
> >Doug
> >> 
> >
> 
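
For reference, a back-of-the-envelope sketch of the peak vs. sustained
arithmetic discussed above (the clock speed, node count, and HPL figure are
illustrative assumptions, not measured numbers):

  # Theoretical peak vs. HPL sustained -- illustrative numbers only.
  # Assumes 8 single-CPU nodes, a 1.75 GHz clock, and 2 flops per cycle
  # (one FADD + one FMUL), per the "CPU GHz x 2" rule of thumb above.

  nodes = 8
  ghz = 1.75
  flops_per_cycle = 2

  rpeak_gflops = nodes * ghz * flops_per_cycle   # theoretical peak
  rmax_gflops = 14.5                             # hypothetical HPL result

  efficiency = rmax_gflops / rpeak_gflops
  print("Rpeak = %.1f GFLOPS, HPL efficiency = %.0f%%"
        % (rpeak_gflops, efficiency * 100))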

----------------------------------------------------------------
Editor-in-chief                   ClusterWorld Magazine
Desk: 610.865.6061                            
Cell: 610.390.7765         Redefining High Performance Computing
Fax:  610.865.6618                www.clusterworld.com



