new SGI Origin & Onyx 3k?

W Bauske wsb at paralleldata.com
Wed Jul 26 21:24:14 PDT 2000


"Morton, Scott" wrote:
> 
> Wes Bauske wrote:
> >Note the "Like to have". Also, since you don't address the first
> >question, I'll assume from your statement that you agree people
> >don't do that unless it's for benchmarks.
> 
> >Benchmarks at that scale are for bragging rights, similar to
> >IBM's news releases on ASCI White. Amusing, but they don't
> >relate to production runs.
> 
> At Amerada Hess (a medium-sized oil company), we have a mildly heterogeneous
> Beowulf with over 350 CPUs, and I develop seismic imaging codes that I
> routinely run on over 300 CPUs.  These _aren't_ benchmarks.
> 
> While we aren't at 1000 CPUs (yet), we're clearly well above the 4-128 CPU
> range.
> 

OK. I'll ask the same question as in the previous posts:
what CPUs are you running? State of the art, or a generation
or two back? It makes a big difference in the number of nodes
you need to solve a given problem. I solve terabyte-and-larger
seismic problems on fewer than 100 state-of-the-art processors.
You would need 2-8 times that many nodes if you're a generation
or more back.
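
As a rough sketch of that scaling (a back-of-the-envelope Python
snippet; the 2-8x factor and the 100-node count are the figures
above, nothing else is assumed):

    current_nodes = 100            # state-of-the-art nodes for the problem
    for slowdown in (2, 4, 8):     # an older generation runs this much slower
        print("%dx slower per node -> ~%d nodes" %
              (slowdown, current_nodes * slowdown))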

My point in all this is that I know the sorts of nodes Greg
uses, and they are state of the art. So when he says he's
using 1000+ nodes, he's describing on the order of 1.5 TFLOPS
of peak performance. How many people routinely solve single
problems on a TFLOPS-class cluster? With 128 state-of-the-art
processors, you're looking at 192 GFLOPS. How many people
routinely use 192 GFLOPS or more to solve a single problem at
a time?
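
To make the peak arithmetic explicit (a Python sketch; the ~1.5
GFLOPS peak per state-of-the-art processor is the per-CPU figure
implied by the numbers above, not a measured value):

    PEAK_PER_CPU_GFLOPS = 1.5      # assumed per-CPU peak, per the figures above

    for cpus in (128, 1000):
        peak_gflops = cpus * PEAK_PER_CPU_GFLOPS
        print("%d cpus -> %.0f GFLOPS peak (%.2f TFLOPS)" %
              (cpus, peak_gflops, peak_gflops / 1000.0))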


Wes



