Estimating Cluster Performance
Gary Huntress
ghuntress at mediaone.net
Fri Oct 6 21:43:03 PDT 2000
Hi Everyone,
I am researching a proposal to build a cluster. Its primary purpose in life will be to perform FFTs (spectrograms) on large acoustic records. I have tentatively chosen the DEC DS10L with a 466 MHz 21264 processor because of its FPU performance and its form factor.
Unfortunately, I can't rush out and buy 40 DS10Ls just to see how fast I can create spectrograms, so I'm trying to come up with a reasonable estimate. I know that the DS10L is currently 466 MHz and will be at 600 MHz before I buy, so I have a built-in fudge factor already :)
I plan on using the FFTW package from http://www.fftw.org, and they have compiled some benchmarks at http://www.fftw.org/benchfft/results/alpha467.html. That leads me to believe that one box can basically sustain 200 MFLOPS. Now, I am assuming that distributing FFTs among nodes is very balanced (scalable? parallelizable? what's the term I'm looking for here?) and therefore it will scale well. (Note: I am not trying to do a single FFT distributed among nodes; I am breaking the stream up into blocks.) Best case, this would be 40 * 200 = 8000 MFLOPS; I'll apply a scaling factor of 0.8 and call it an aggregate performance of 6,400 MFLOPS.
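For what it's worth, the back-of-the-envelope math above can be written out as a short Python sketch. The node count, per-node MFLOPS, and 0.8 scaling factor are the figures from this estimate; the block size is purely a hypothetical example, and the flop count per transform uses the 5 N log2(N) convention that FFTW's benchFFT pages use for complex transforms:

```python
import math

# Figures from the estimate above
nodes = 40
mflops_per_node = 200.0   # sustained, per the FFTW benchFFT results
efficiency = 0.8          # assumed scaling factor for block-parallel work

aggregate_mflops = nodes * mflops_per_node * efficiency
print(f"Aggregate: {aggregate_mflops:.0f} MFLOPS")  # Aggregate: 6400 MFLOPS

# Rough throughput for one transform size (block size is hypothetical).
# benchFFT counts a complex FFT as 5 * N * log2(N) floating-point ops.
n = 2 ** 20                                # points per block (assumed)
flops_per_fft = 5 * n * math.log2(n)
ffts_per_second = aggregate_mflops * 1e6 / flops_per_fft
print(f"~{ffts_per_second:.0f} FFTs/second at {n} points each")
```

Since each block is independent, the only real question is whether I/O and distribution overhead eat more than the assumed 20%.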
Is this completely specious reasoning? Am I completely out of the ballpark?
Regards,
Gary Huntress
former owner of "Beosaurus"
http://superid.virtualave.net/beowulf.htm
More information about the Beowulf mailing list