<div>Timothy,</div>
<div> I agree completely; I think Doug and the Kronos team did a most interesting DIY project. It made me want to cluster all the PCs in the house together and run benchmarks, but then I read the fine print and realized most of my hardware isn't readily available anymore, except on eBay... ;)
</div>
<div> </div>
<div>Maybe Doug could start another DIY project using only parts that are used, donated, or bought off eBay?!<br> </div>
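<div>For anyone checking the $/GFLOP arithmetic in the quoted thread below, here is a minimal sketch of how the figures relate (the US$100,000 Orion price is my assumption, back-solved from the quoted ~$909/GFLOP and 110 sustained Gflops; it is not stated in the thread):</div>

```python
def efficiency(sustained_gflops, peak_gflops):
    """Fraction of theoretical peak actually achieved (e.g. by HPL/Linpack)."""
    return sustained_gflops / peak_gflops

def dollars_per_gflop(cost_usd, sustained_gflops):
    """Cost per sustained Gflop, the metric debated in the thread."""
    return cost_usd / sustained_gflops

# Orion figures quoted in the thread: 230 Gflops peak, 110 sustained.
print(round(efficiency(110, 230) * 100))       # -> 48 (percent of peak)

# Assumed ~US$100,000 system price, inferred from the quoted $909/Gflop.
print(round(dollars_per_gflop(100_000, 110)))  # -> 909
```

<div>The same two functions cover the Kronos numbers: peak per node is clock speed times flops per cycle (1 FADD + 1 FMUL), summed over nodes, and the DIY price divided by the sustained HPL result gives its $/GFLOP.</div>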
<div><span class="gmail_quote">On 5/10/05, <b class="gmail_sendername">Timothy Bole</b> <<a href="mailto:tbole1@umbc.edu">tbole1@umbc.edu</a>> wrote:</span>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">this seems to me, at least, to be a bit of an unfair comparison. if<br>someone were to just give me a cluster with 80386 processors, then i would
<br>tie for the lead forever, as 0/{any number>0}=0. {not counting if someone<br>were to *pay* me to take said cluster of 80386's}...<br><br>having inhabited many an underfunded academic department, i have seen that<br>
there are many places where there is just not money to throw at any<br>research labs, including computational facilities. i think that the point<br>of the article was to demonstrate that one can build a useful beowulf for
<br>a dollar amount that is not unreasonable to find at small companies and<br>universities. not everyone can count on the generosity of strangers<br>handing out network cards and hubs. so, the US$/GFLOP is a decent, but
<br>*very* generic, means of getting the most of that generic dollar.<br><br>of course, the bottom line is that a cost benefit analysis for any cluster<br>is really necessary, and the typical type of problem to be run on said
<br>cluster should factor into this. i applaud the work of the KRONOS team<br>for demonstrating the proof-of-principle that one can design and build a<br>useful beowulf for US$2500.<br><br>cheers,<br>twb<br><br><br>On Tue, 10 May 2005, Vincent Diepeveen wrote:
<br><br>> How do you categorize second hand bought systems?<br>><br>> I bought for 325 euro a third dual k7 mainboard + 2 processors.<br>><br>> The rest i removed from old machines that get thrown away otherwise.
<br>> Like 8GB harddisk. Amazingly biggest problem was getting a case to reduce<br>> sound production :)<br>><br>> Network cards i got for free, very nice gesture from someone.<br>><br>> So when speaking of gflops per dollar at linpack, this will destroy of
<br>> course any record of $2500 currently, especially for applications needing<br>> bandwidth to other processors, if i see what i paid for this self<br>> constructed beowulf.<br>><br>> At 05:19 PM 5/9/2005 -0400, Douglas Eadline - ClusterWorld Magazine wrote:
<br>> >On Thu, 5 May 2005, Ted Matsumura wrote:<br>> ><br>> >> I've noted that the orionmulti web site specifies 230 Gflops peak, 110<br>> >> sustained, ~48% of peak with Linpack which works out to ~$909 / Gflop ?
<br>> >> The Clusterworld value box with 8 Sempron 2500s specifies a peak Gflops<br>> by<br>> >> measuring CPU Ghz x 2 (1 - FADD, 1 FMUL), and comes out with a rating of<br>> 52%<br>> >> of peak using HPL @ ~ $140/Gflop (sustained?)
<br>> ><br>> >It is hard to compare. I don't know what sustained or peak means in the<br>> >context of their tests. There is the actual number (which I assume is<br>> >sustained) then the theoretical peak (which I assume is peak).
<br>> ><br>> >And our cost/Gflop does not take into consideration the construction<br>> >cost. In my opinion when reporting these type of numbers, there<br>> >should be two categories "DIY/self assembled" and "turn-key". Clearly
<br>> >Kronos is a DIY system and will always have an advantage over a<br>> >turnkey system.<br>> ><br>> ><br>> >> So what would the orionmulti measure out with HPL? What would the<br>> >> Clusterworld value box measure out with Linpack?
<br>> ><br>> >Other benchmarks are here (including some NAS runs):<br>> ><br>> ><a href="http://www.clusterworld.com/kronos/bps-logs/">http://www.clusterworld.com/kronos/bps-logs/</a><br>><br>> >
<br>><br>> ><br>> >> Another line item spec I don't get is rocketcalc's (<br>> >> <a href="http://www.rocketcalc.com/saturn_he.pdf">http://www.rocketcalc.com/saturn_he.pdf</a> )"Max Average Load" ?? What does
<br>> >> this mean?? How do I replicate "Max Average Load" on other systems??<br>> >> I'm curious if one couldn't slightly up the budget for the clusterworld<br>> box<br>> >> to use higher speed procs or maybe dual procs per node and see some
<br>> >> interesting value with regards to low $$/Gflop?? Also, the clusterworld<br>> box<br>> >> doesn't include the cost of the "found" utility rack, but does include the<br>> >> cost of the plastic node boxes. What's up with that??
<br>> ><br>> >This was explained in the article. We assumed that shelving was optional<br>> >because others may wish to just put the cluster on existing shelves or<br>> >table top (or with enough Velcro strips and wire ties build a standalone
<br>> >cube!)<br>> ><br>> >Doug<br>> >><br>> ><br>> >----------------------------------------------------------------<br>> >Editor-in-chief ClusterWorld Magazine
<br>> >Desk: 610.865.6061<br>> >Cell: 610.390.7765 Redefining High Performance Computing<br>> >Fax: 610.865.6618 <a href="http://www.clusterworld.com">www.clusterworld.com</a><br>
> ><br>> >_______________________________________________<br>> >Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org">Beowulf@beowulf.org</a><br>> >To change your subscription (digest mode or unsubscribe) visit
<br>> <a href="http://www.beowulf.org/mailman/listinfo/beowulf">http://www.beowulf.org/mailman/listinfo/beowulf</a><br>> ><br>> ><br>> _______________________________________________<br>> Beowulf mailing list,
<a href="mailto:Beowulf@beowulf.org">Beowulf@beowulf.org</a><br>> To change your subscription (digest mode or unsubscribe) visit <a href="http://www.beowulf.org/mailman/listinfo/beowulf">http://www.beowulf.org/mailman/listinfo/beowulf
</a><br>><br><br>=========================================================================<br>Timothy W. Bole a.k.a valencequark<br>Graduate Student<br>Department of Physics<br>Theoretical and Computational Condensed Matter
<br>UMBC<br>4104551924<br>reply-to: <a href="mailto:valencequark@umbc.edu">valencequark@umbc.edu</a><br><br><a href="http://www.beowulf.org">http://www.beowulf.org</a><br>=========================================================================
<br></blockquote></div><br>