DUAL CPU board vs 2 Single CPU boards: bang for buck?

Kevin Van Workum vanw at tticluster.com
Thu Mar 7 11:03:35 PST 2002


On Thu, 7 Mar 2002, Jim Fraser wrote:

> I like this arrangement
> http://www.massiveparallel.com/air/aircooled.html
> I don't know the cost (or much about the system other than what's on the
> website) but it seems to me like a more cost effective approach as the
> investment is directed at buying compute cycles not heavy heat-trapping
> steel cases.
> 
> I am headed down to home depot to look into what draw-slides go for... ;-)

We use a similar custom-built system to house our motherboards. Very
cheap and just as effective at saving space.

One additional point in support of the dual side of the argument: many
programmers don't know how to write good parallel code, while many newer
compilers are getting better at automatic parallelization for SMPs. That
makes duals more attractive in that sense. I agree that (in general) it
is basically a wash between the two approaches.

Kevin Van Workum
www.tsunamictechnologies.com
ONLINE COMPUTER CLUSTERS

> 
> peace
> 
> Jim
> 
> 
> -----Original Message-----
> From: Robert G. Brown [mailto:rgb at phy.duke.edu]
> Sent: Thursday, March 07, 2002 12:04 PM
> To: Jim Fraser
> Cc: beowulf at beowulf.org
> Subject: RE: DUAL CPU board vs 2 Single CPU boards: bang for buck?
> 
> 
> On Thu, 7 Mar 2002, Jim Fraser wrote:
> 
> >
> > Robert,
> >
> > You bias the calculation a bit with the selection of your hardware...if
> > you use inexpensive COTS we are talking something like $50 per case, not
> > $350...also a decent motherboard with NIC can be had for $150, and you
> > need
> 
> I don't bias it at all.  Find me a 2U rackmount case for $75 and I'll
> buy it on the spot.
> 
> To put this in greater perspective:
> 
>   Cost of renovating a 40 m^2 server room in the physics department in
> Duke to provide 75,000 watts of electricity and 75,000 watts of
> continuous AC capacity: $150K, or nearly $4000/m^2.
> 
> In a 43U standard rack (allowing a few U at the top for punchblocks) I
> can fit perhaps 18 2U cases in roughly 1 m^2 of floor space, although in
> practical terms one cannot put anything like 40 racks in this room
> because one needs room to move around and because the overhead AC/return
> is too low in parts of the room to allow a 43U rack underneath it
> anyway.  We expect to get perhaps 20 racks into the room eventually --
> some space is still devoted to shelfmount towers since in our old room
> that's what we used and we plan to use (up) the systems until forced to
> retire them.  Note that if we DID put 18 2U cases per rack and had 20
> racks, we'd have 360 2U cases and 720 processors, drawing roughly 100W
> per processor, and hence would be "at" room capacity in other dimensions,
> so this is a pretty safe upper bound.  A better cost estimate for the
> space is therefore $150K/20 or $8000/m^2 in terms of practically usable
> space.  It cost us something like $200 per U in a RACK just for the
> floor space -- the actual cost of the case is considerably less than
> this.
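[rgb's floor-space arithmetic above reduces to a few lines; a sketch using only the figures from his post, with the same rounding he uses:]

```python
# Back-of-the-envelope check of the floor-space costs quoted above.
room_cost = 150_000     # $ to renovate the 40 m^2 server room
racks = 20              # practically usable racks (not the naive 40)
cases_per_rack = 18     # 2U cases per 43U rack
watts_per_cpu = 100     # rough draw per processor

cases = racks * cases_per_rack      # 360 cases
cpus = 2 * cases                    # 720 CPUs if all duals
power = cpus * watts_per_cpu        # 72,000 W -- near the 75 kW room limit

cost_per_rack = room_cost / racks   # $7,500/rack, i.e. roughly $8000/m^2
cost_per_u = cost_per_rack / 43     # ~$174/U -- "something like $200 per U"

print(cases, cpus, power, round(cost_per_rack), round(cost_per_u))
```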
> 
> This, of course, argues for duals in >>1U<< cases as being the most
> cost-beneficial, but a) we can't afford to run that many CPUs in the
> room anyway because of the power dimension; and b) in a 2U case, a dual
> runs "hot" but there is a tiny bit of thermal ballast associated with
> the case size.  In a 1U case a dual runs excruciatingly and dangerously
> hot.  Not dangerously hot in the sense that the cases are poorly
> engineered -- dangerously hot in that a FAILURE of that engineering
> (say a case fan) can cause a chain-reaction meltdown in a very short
> period of time because of that LACK of any thermal ballast, especially
> in a 1U node in the middle of a whole rack of 1U nodes that trap its
> surplus heat.  I just don't think it is quite as robust.
> 
> 1U singles vs 2U duals is a fairer comparison at the same power density,
> but a 1U case and a 2U case cost just about the same.
> 
> Obviously, any sort of shelfmount/tower packaging, which DOES INDEED
> reduce the relative cost advantage of duals (without quibbling about
> "best of pricewatch" prices for hardware vs "best price you're likely to
> get from a vendor you want to do business with" prices for hardware)
> won't permit us to come CLOSE to the spec capacity of this rather expensive
> space, but the space does require us to use rackmount packaging and not
> towers and cheap shelving (and I am a LONG TIME FAN of towers and cheap
> shelving when they will suffice for you, don't get me wrong:-).
> 
> So you see, I was if anything generous by omitting the cost of the space
> entirely.
> 
> Besides (and I repeat) -- the REAL point of my overall reply is:
> 
>   Your Mileage May Vary
> 
> It is just (and forgive me, this isn't intended to be a flame) silly to
> make a statement like "dual packaging never makes sense" in a universe
> of applications and system installations filled with nonlinear
> constraints in cost-space (like the cost of the space in which the nodes
> must be located).  All your reanalysis of the cost/benefit below COULD
> show is that perhaps there are configurations for which it is six of one
> or half a dozen of the other (or for which single packaging even
> slightly wins because of the cost differential for CPUs -- although from
> MY local "trusted vendor" their current price is $310 for XP 2000's,
> $250 for 1900's, with fan, which is not tremendously different from the
> MP cost).
> 
> In other words, sure, sometimes single packaging makes sense.  It did in
> my LAST cluster purchase last year, when they still hadn't decided to
> renovate the server room for us.  So my last cluster is lovely towers on
> Home Depot shelving.
> 
> Sometimes it doesn't.  As in now, they did, and floor space is suddenly
> dear. And I'm the same guy running the same code, and sometimes even
> hand-build my systems out of component parts.  Think about how many
> other permutations of individual needs there are out there.  Some folks
> have never even heard of pricewatch and would only THINK of buying
> turnkey rackmount clusters or prebuilt systems, and in both of these
> cases I think you'd find that street price favors the dual, per CPU, by
> hundreds of dollars, as LABOR costs for ASSEMBLING a dual are ALSO
> roughly $50-100 less compared to two singles.
> 
> My point.
> 
> > to look at the difference between XP and MP proc costs, currently a
> > difference of 90 bucks per CPU on pricewatch.  I think if you re-evaluate
> > your calcs based on that you get something like:
> >
> >
> >                      Single             Dual
> > MB (w/NIC)            150                220
> > CPU (1900)            180                239 x 2
> > case                   50                 75 (50 + 25 for a beefier
> >                                               PS and cooling)
> > memory (512 DDR)      125                125 x 2
> >                    _________          _________
> >                      505                1023
> >                       x 2
> >                      1010       vs      1023
> >
> > As far as I can see, it is a wash price-wise.  Most dual-CPU AMD setups
> > I have seen in rack mounts easily exceed $2000, as they require exotic
> > cooling measures.  On the other hand, there will be some cost penalty
> > on the switch side, since more single-CPU nodes need more ports.
> >
> > I agree with your response about the application and effective
> > throughput, and I think you have to run your program before you actually
> > know how it will perform on these things...looking at benchmarks can be
> > very misleading.
> >
> > I have not seen any benchmarks on the dual AMD setups comparing them to
> > singles, have you?
> 
> I've DONE a rather large suite of benchmark tests on dual AMDs, and
> actually posted them to the list back when I was testing them on a
> loaned (thanks ASL!) dual Thunder with two 1200 MHz Tbirds.  As one
> might expect, cpu bound processes run at full speed, and memory bound
> processes collide to a greater or lesser degree depending in fair detail
> on what you look at.  Dual streams definitely show some effects of
> memory bandwidth saturation.
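[A minimal illustration of the contention rgb describes: run the same memory-streaming task as one process and then as two concurrent processes, and compare wall time. This is not his actual benchmark suite -- sizes and structure here are illustrative, and in CPython interpreter overhead dampens the effect a real STREAM-style test would show on a shared-bus dual:]

```python
import time
from array import array
from multiprocessing import Process

N = 1_000_000  # doubles; large enough to spill out of cache

def stream_sum():
    # Repeatedly streams through a large array -- the access pattern that
    # saturates memory bandwidth in a real (compiled) benchmark.
    data = array("d", range(N))
    total = 0.0
    for _ in range(5):
        total += sum(data)
    return total

def timed(nprocs):
    # Wall time for nprocs independent copies of the task run concurrently.
    procs = [Process(target=stream_sum) for _ in range(nprocs)]
    t0 = time.perf_counter()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.perf_counter() - t0

if __name__ == "__main__":
    t1, t2 = timed(1), timed(2)
    # On a dual whose CPUs share one memory bus, t2 exceeds t1 even though
    # the runs are independent; the gap crudely measures bus saturation.
    print(f"1 proc: {t1:.2f}s  2 procs: {t2:.2f}s  ratio: {t2 / t1:.2f}")
```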
> 
> > That was quite a response you posted!
> 
> A legend in my own time, I am, although I've been relatively quiet in
> recent months;-)
> 
>    rgb
> 
> --
> Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
> Duke University Dept. of Physics, Box 90305
> Durham, N.C. 27708-0305
> Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu
> 
> 
> 
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
> 



