[Beowulf] 96 Processors Under Your Desktop (fwd from brian-slashdotnews@hyperreal.org)
Michael Will
mwill at penguincomputing.com
Tue Aug 31 09:42:25 PDT 2004
There already is the Opteron 246 HE, specced at 55W, which comes at twice the
cost of the standard Opteron 246, specced at 70W.
AMD has also announced a 246 EE specced at 30W.
What is the range of per-year cost for a dual Opteron 1U, counting both
power consumption and air conditioning?
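To first order (a sketch, not a measurement -- the ~250W wall draw, $0.10/kWh rate, and 50% cooling overhead below are all assumed values):

```python
# Back-of-envelope estimate. The wattage, electricity rate, and cooling
# overhead are assumptions, not measured figures.

def annual_cost_usd(watts, rate_per_kwh=0.10, cooling_overhead=0.5):
    """Yearly electricity cost for a box drawing `watts` around the clock.

    cooling_overhead: extra A/C energy spent per watt of IT load
    (0.5 means the air conditioner burns half a watt per watt of heat).
    """
    kwh_per_year = watts * (1 + cooling_overhead) * 24 * 365 / 1000.0
    return kwh_per_year * rate_per_kwh

# Assuming ~250 W at the wall for a loaded dual Opteron 246 1U:
cost = annual_cost_usd(250)   # on the order of $330/year
```

Halving the draw with HE parts roughly halves that number, which is what makes the 55W parts interesting despite the price premium.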
Michael
On Monday 30 August 2004 04:30 pm, Glen Gardner wrote:
>
> I have been touting the virtues of low power use clusters for the last
> year. I hope to build a second one next year, with twice the
> performance of the present machine.
> My experience with my low power cluster has been that it is not a "big
> iron" machine, but is very effective, and very fast for some things.
> Also, a low power use cluster is the only way I can have a significant
> cluster in my apartment, so it was to be this way, or no way. At
> present, the cost of power for my 14 node cluster is running about $20 a
> month (14 nodes up 24/7 and in use much of the time).
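As a sanity check on Glen's $20/month figure -- assuming a typical $0.10/kWh residential rate, which his post doesn't state:

```python
# Implied average draw behind the $20/month bill; the $0.10/kWh rate
# is an assumption, not from the original post.
monthly_bill = 20.0          # USD, 14 nodes up 24/7
rate = 0.10                  # USD per kWh (assumed)
hours = 24 * 30
kwh = monthly_bill / rate            # ~200 kWh per month
avg_watts = kwh / hours * 1000.0     # whole-cluster average draw, ~280 W
watts_per_node = avg_watts / 14      # ~20 W per node
```

About 20 W per node average, which is plausible for low-power boards and nowhere near Opteron territory.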
>
> It is rather difficult to operate a significant opteron cluster in an
> office environment (or in an efficiency apartment). The heat alone will
> prevent it. If you need lots of nodes and low power use, the small,
> low-power "performance" machines are going to be the way to go. I can think of
> many situations where it would be desirable to have a deskside cluster
> for computation, development, or testing, and the low power machines
> open the door to a lot of users who can't otherwise take advantage of
> parallel processing.
> A 450 watt, 10 GFLOP parallel computing machine for about $10K seems
> attractive. It is even more attractive if it does not need any special
> power or cooling arrangements.
>
>
> Glen
>
>
> Mark Hahn wrote:
>
> >>Transmeta 2) This is not a shared-memory setup, but ethernet-connected. So
> >
> >yeah, just gigabit. that surprised me a bit, since I'd expect a trendy
> >product like this to want to be buzzword-compliant with IB.
> >
> >>Does anyone have any idea how the Efficeons stack up against Opterons?
> >
> >the numbers they give are 3Gflops (peak/theoretical) per CPU.
> >that's versus 4.8 for an opteron x50, or 10 gflops for a ppc970/2.5.
> >they mention 150 Gflops via linpack, which is about right, given
> >a 50% linpack "yield" as expected from a gigabit network.
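The arithmetic behind that 150 Gflops figure, for the 96-CPU box under discussion:

```python
# Linpack estimate for the 96-CPU Efficeon box: peak per CPU times
# CPU count, derated by the ~50% yield typical of gigabit ethernet.
cpus = 96
peak_per_cpu = 3.0      # Gflops, vendor theoretical peak
gigabit_yield = 0.5     # typical linpack efficiency over gigabit
linpack = cpus * peak_per_cpu * gigabit_yield   # 144 Gflops, ~150 as quoted
```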
> >
> >remember that memory capacity and bandwidth are also low for a typical
> >HPC cluster. perhaps cache-friendly things like sequence-oriented bio stuff
> >would find this attractive, or montecarlo stuff that uses small models.
> >
> >>A quad-CPU Opteron comes in at a similar price to Orion's 12-CPU unit,
> >>but the Opteron is a faster chip and has shared memory. The Orion DT-12
> >>lists a 16 Gflop linpack. Does anyone have quad Opteron linpack results?
> >
> >for a fast-net cluster, linpack = .65*peak. for vector machines, it's closer
> >to 1.0; for gigabit, .5 is not bad. for a quad, I'd expect a yield better
> >than a cluster, but not nearly as good as a vector-super. guess
> >.8*2.4*2*4 = ~15 Gflops.
> >
> >(the transmeta chip apparently does 2 flops/cycle like p4/k8, unlike
> >the 4/cycle for ia64 and ppc.)
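Spelling out Mark's quad-Opteron guess (the .8 yield and 2.4 GHz clock are his figures for an Opteron x50):

```python
# Quad-Opteron linpack guess: yield x clock x flops/cycle x CPUs.
smp_yield = 0.8         # better than a gigabit cluster, worse than a vector box
clock_ghz = 2.4         # Opteron x50
flops_per_cycle = 2     # k8, like p4 (ia64 and ppc do 4)
cpus = 4
linpack_gflops = smp_yield * clock_ghz * flops_per_cycle * cpus   # ~15.4
```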
> >
> >I think the main appeal of this machine is tidiness/integration/support.
> >I don't see any justification for putting one beside your desk -
> >are there *any* desktop<=>cluster apps that need more than a single
> >gigabit link?
> >
> >for comparison, 18 Xserves would deliver the same gflops, dissipate
> >2-3x as much power, and take up about twice the space.
> >
> >personally, I think more chicks would dig a stack of Xserves ;)
> >
> >_______________________________________________
> >Beowulf mailing list, Beowulf at beowulf.org
> >To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
> >
>
--
Michael Will, Linux Sales Engineer
NEWS: We have moved to a larger iceberg :-)
NEWS: 300 California St., San Francisco, CA.
Tel: 415-954-2822 Toll Free: 888-PENGUIN
Fax: 415-954-2899
www.penguincomputing.com