[Beowulf] Re: MS Cray
Lux, James P
james.p.lux at jpl.nasa.gov
Thu Sep 18 08:59:10 PDT 2008
> -----Original Message-----
> From: beowulf-bounces at beowulf.org
> [mailto:beowulf-bounces at beowulf.org] On Behalf Of Robert G. Brown
> Sent: Thursday, September 18, 2008 7:22 AM
> To: Gus Correa
> Cc: Beowulf
> Subject: Re: [Beowulf] Re: MS Cray
>
> On Wed, 17 Sep 2008, Gus Correa wrote:
>
> > After I configured it with eight dual-slot quad-core Xeon E5472
> > (3.0GHz) compute nodes, 2GB/core RAM, IPMI, 12-port DDR IB switch
> > (their smallest), MS Windows installed, with one year standard 9-5
> > support, and onsite installation, the price was over $82k.
> > It sounds pricey to me, for an 8 node cluster.
> > Storage or viz node choices, 24-port IB to connect to other
> > enclosures, etc, are even more expensive.
>
> Again, excellently well put. This is literally the bottom
> line. What we are really talking about is form factor and
> who does what. People usually are pretty careful with their
> money, at least within their range of knowledge. When bladed
> systems first started coming out -- which was many years ago
> at this point -- I did a bit of an on-list CBA of them and
> concluded that there was a price premium of something like a
> factor of 2 for them, compared to the price of an equivalent
> stack of rackmounted nodes, more like 3 compared to a shelf
> full of tower units.
> I asked "why would anyone pay that"?
<snip of rgb's excellent description of infrastructure issues>
>
> This little exercise in the realities of infrastructure
> planning exposes the fallacy of the "desktop cluster" in MOST
> office environments, including research miniclusters in a lot
> of University settings. There exist spaces -- perhaps big
> labs, with their own dedicated climate control and lots of
> power -- where one could indeed plug right in and run, but
> your typical office or cubicle is not one of them. Those
> same spaces have room for racks, of course, if they have room
> for a high density blade chassis.
>
> If you already have, or commit to building, an infrastructure
> space with rack room, real AC, real power, you have to look
> SERIOUSLY at whether you want to pay the price premium for
> small-form factor solutions. But that premium is a lot
> smaller than it was eight or so years ago, and there ARE
> places with that proverbial broom closet or office that is
> the ONLY place one can put a cluster. For them, even with
> the relatively minor renovations needed to handle 3-4 KW in a
> small space, it might well be worth it.
>
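To put rough numbers on the quoted configuration: a quick back-of-the-envelope sketch in C, taking the $82k, 8 nodes, 8 cores per node, and 3-4 kW figures from the messages above, and assuming an entirely made-up ~$5k price for a comparable bare rackmount node purely to illustrate the sort of premium rgb describes:

#include <stdio.h>

int main(void)
{
    double total_usd      = 82000.0; /* quoted price of the 8-node enclosure */
    int    nodes          = 8;
    int    cores_per_node = 8;       /* dual-socket quad-core E5472 */
    double rack_node_usd  = 5000.0;  /* ASSUMED price of a comparable 1U node */
    double power_kw       = 4.0;     /* upper end of the 3-4 kW estimate */

    printf("per node:  $%.0f\n", total_usd / nodes);
    printf("per core:  $%.0f\n", total_usd / (nodes * cores_per_node));
    printf("premium vs. assumed rackmount stack: %.1fx\n",
           total_usd / (rack_node_usd * nodes));
    printf("heat load: %.0f BTU/hr (about %.1f tons of cooling)\n",
           power_kw * 3412.0, power_kw * 3412.0 / 12000.0);
    return 0;
}

At roughly $10k per node and $1,300 per core, "pricey" seems fair, and the premium factor is of course only as meaningful as the assumed rackmount price.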
I suspect that there is some non-negligible demand for these boxes, notwithstanding the high cost (especially viewed in terms of keeping the manufacturing line for the product going; it's not as if either Cray or MS is depending on these sales to keep the company alive).
How about as an "executive toy" for the guy in the corner office running financial models? (I am a Master of the Universe, and I must have my special data entirely under my control.)
How about in places where the organizational pain that comes with being in the "machine room" is high? (All systems in the main computer room shall be under the cognizance of Senior Vice President of Machine Room Operations Smith.
SVP Smith dictates: All systems in the machine room shall be made available to all users, so as to efficiently allocate computational resources, since my bonus depends on reducing the "idle time percentage" metric.
SVP of IT Security Jones: All shared computational resources shall use the corporate standard software disk encryption and must run both McAfee and Symantec AntiVirus in continuous scan mode.
SVP of Network Management Wilson: In order to achieve maximum commonality and facilitate continuing reuse of computing assets purchased in 1991, all computers shall provide a 10Base2 Ethernet connection at 10 Mbps.
SVP of Customer Proprietary Information Security Brown: All systems in the machine room shall use the corporate secure SAN.
And so it goes...)
I think the model that John Vert mentioned, using it as a software development workstation to try things out before running on the "big iron", is probably the more likely scenario. And for that you might not want the full-up configuration, just enough to make it a "real" cluster so you can work out the interprocessor communication issues.
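For that use case, the thing to shake out is the plain MPI plumbing. A minimal sketch of the sort of ring test one might run, assuming an MPI stack such as Open MPI or MPICH is installed (compile with mpicc, launch with mpirun and at least two ranks):

#include <mpi.h>
#include <stdio.h>

/* Pass a token around a ring of ranks -- the kind of small test used to
   check interprocessor communication on a development cluster before
   moving a code to the big iron. */
int main(int argc, char **argv)
{
    int rank, size, token;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0)
            printf("run with at least 2 ranks to exercise the network\n");
        MPI_Finalize();
        return 0;
    }

    if (rank == 0) {
        token = 0;
        MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("token made it around %d ranks, value %d\n", size, token);
    } else {
        MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        token++;                /* each hop increments the token */
        MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

Whether that runs over the box's internal IB or over plain GigE doesn't change the source at all, which is rather the point of working things out on a small "real" cluster first.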
Jim