>2 p4 processor systems
Robert G. Brown
rgb at phy.duke.edu
Fri Aug 30 05:46:56 PDT 2002
On Thu, 29 Aug 2002, Joel Jaeggli wrote:
> On Thu, 29 Aug 2002, Steve Cousins wrote:
>
> >
> > > > If the machines that you are talking about really are 6-Way SMP nodes,
> > > > what are they?
> > >
> > > afaict, these are machines based on the serverworks HE chipset.
> >
> > I just got an email from the original poster and he says that the machine
> > his management people were thinking of was in fact the Western Scientific
> > machine which has three dual-CPU nodes, complete with three disks, and six
> > 10/100 interfaces in 1U.
> >
> > Has anyone made a cluster with these? If so, how bad is the heat
> > problem? Anyone have a real price for these?
>
> 6 x ~30 watts (1 GHz PIII Tualatin) = 180 watts... bad but not out of
> control. modulo the fact that three mainboards probably have more crap
> sticking off them to interfere with airflow than one mainboard.
Don't forget the rest of the computer. Lots of sources on the web,
e.g.:
http://www.blueowltechnologies.com/pmConsumption.asp
So,

  Plus motherboards      3 x ~20W =  60 W
       memory            3 x ~10W =  30 W
       disks             3 x ~12W =  36 W
       NIC                    ~5W =   5 W
       (CPUs, from above)        = 180 W
  =====================================
  Total                          > 300 W
not including the heat dissipated by the power supply itself (which might
raise the actual power drawn from the wall to close to 400W for purposes of
estimating supply requirements, so the rack-level estimate below is really
on the favorable side). That is, you won't do a LOT better than 3x the
power consumption of a standard dual PIII, which I would have estimated at
about 100W -- not that much can be shared between the three boards. And
this might well underestimate memory and motherboard power consumption; it
also leaves off as much as another 15-20W if one puts in a video card or a
NIC per CPU -- even a bunch of fans add their own power consumption to the
total heat they are helping to remove.
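
If you want to play with the numbers yourself, here is the same arithmetic
as a quick Python sketch. The per-component wattages are just the ballpark
guesses from the table above, and the 75% supply efficiency is my own
assumption, so treat the output as an estimate, not a measurement:

    # Rough per-node power budget for one of these 1U triples (three
    # dual-PIII boards in one chassis).  All wattages are ballpark guesses,
    # not measurements; the supply efficiency is also a guess.
    component_watts = {
        "cpus":         6 * 30,   # six ~1 GHz PIII Tualatins at ~30 W each
        "motherboards": 3 * 20,
        "memory":       3 * 10,
        "disks":        3 * 12,
        "nic":          1 * 5,    # one shared NIC; add 15-20 W for video/extra NICs
    }

    dc_load = sum(component_watts.values())   # ~311 W delivered by the supplies
    psu_efficiency = 0.75                     # assumed; real supplies vary
    wall_draw = dc_load / psu_efficiency      # ~415 W drawn from the wall

    print(f"DC load:   {dc_load} W")
    print(f"Wall draw: {wall_draw:.0f} W (assuming {psu_efficiency:.0%} efficient supplies)")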
And yes, all in a 1U case, where anything at all on the motherboards
(heat sinks, memory DIMMs, cables) that sticks up interferes with
airflow and can create spots where heat accumulates and drives up
temperatures.
You could fry up bacon and eggs, extra crispy, on top of the case if the
fans ever stopped. Probably make a lovely coffee warmer even with the
fans on, especially right over the CPUs. Stacking them tight in 40U,
you'd reach an astounding 12,000-16,000 watts per rack (6-8 kW/meter^3),
which would likely require at least 20 kVA to supply even with PFC power
supplies or an HMT (harmonic mitigating transformer) feeding the room,
more with ordinary supplies. Something like 10-12 20A circuits. You would
probably need to keep ambient air down in the 50s F or cooler, and would
need the whole room rigged with a thermal kill set at maybe 65-70F.
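
Again just as a sketch, here is the rack-level arithmetic in the same style.
The 300-400W per-node range comes from the estimate above; the power factor,
the headroom factor, and the 120V/20A circuits with an 80% continuous-load
derating are all my own assumptions, so the circuit count wanders a bit
depending on what you plug in, but it lands in the same ballpark as the
10-12 circuits above:

    # Rack-level sketch: 40 of these 1U nodes stacked in a 40U rack.
    # Electrical assumptions (power factor, headroom, circuit rating and
    # derating) are guesses, not figures from any spec sheet.
    import math

    nodes = 40
    watts_per_node = (300, 400)                      # low / high wall draw per node

    rack_watts = [nodes * w for w in watts_per_node] # 12,000 and 16,000 W

    power_factor = 0.95                              # assumed near unity (PFC supplies)
    headroom = 1.25                                  # assumed margin so nothing runs flat out
    rack_va = [w * headroom / power_factor for w in rack_watts]   # ~16 and ~21 kVA

    usable_va_per_circuit = 120 * 20 * 0.8           # 120 V, 20 A, 80% continuous load
    circuits = [math.ceil(va / usable_va_per_circuit) for va in rack_va]

    print(f"Rack load:      {rack_watts[0] // 1000}-{rack_watts[1] // 1000} kW")
    print(f"Apparent power: {rack_va[0] / 1000:.0f}-{rack_va[1] / 1000:.0f} kVA")
    print(f"20 A circuits:  {circuits[0]}-{circuits[1]}")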
And I thought our 2U dual athlon stacks were a bit warm...;-)
Seriously, it is folks like this who should consider the Transmeta
blade computers. They are a bit slower per CPU but consume MUCH less
power/m^3 and can achieve very high OPS/m^3 densities.
rgb
>
> joelja
>
> > Steve
> > _____________________________________________________________
> > Steve Cousins Email: cousins at umit.maine.edu
> > Research Associate Phone: (207) 581-4302
> > Ocean Modeling Group
> > School of Marine Sciences 208 Libby Hall
> > University of Maine Orono, Maine 04469
> >
> >
> >
> > > serverworks has a very sparse/messy/wrong website, but on
> > > http://www.serverworks.com/products/matrix.html
> > > they claim to support 6 PIII's. they also claim to provide
> > > 4.1 GB/s, but I think that's merely a marketroid's dream:
> > > I'm guessing all 6 CPUs are on 1 or two FSB100 or 133 bus(es),
> > > and therefore you're only ever going to see about 1 GB/s.
> > >
> > > 6 is such an odd number (pardon) - I wonder if it's the Intel (Corollary)
> > > Profusion chipset, which actually goes up to 8 PIII's. again, the
> > > CPUs are going to be crammed onto a pitifully slow shared FSB,
> > > and performance is going to hurt.
> > >
> > > HP apparently made boxes with both approaches. the NetServer LH6000
> > > seems to have been the wacky SW-HE chipset. it's DEFINITELY not 1U,
> > > though, or even close.
> > >
> > > in short, these big-way PIII SMP machines seem to be based on the
> > > premise that your application will fit entirely in the large private
> > > caches that PIII/xeons had, and that your main performance criterion
> > > is to stick lots of nics in lots of separate PCI buses with lots
> > > of disks. in short, the CPU doesn't do much except route DMAs,
> > > and you're willing to pay big for an impressive box.
> > >
> > > pretty much the antithesis of beowulf, I'd say ;)
> >
> > _______________________________________________
> > Beowulf mailing list, Beowulf at beowulf.org
> > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
> >
>
>
--
Robert G. Brown http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567 Fax: 919-660-2525 email:rgb at phy.duke.edu