[Beowulf] hpl size problems
Greg M. Kurtzer
gmkurtzer at lbl.gov
Thu Sep 22 21:15:21 PDT 2005
We (LBNL) just built up an almost identical configuration to yours
except that we have 4GB of RAM per node and the IB fabric is 3/1
blocking.
Our HPL run yielded 1516 Gflops and 83.4% efficiency. According to Dell,
this broke the efficiency record for a system of this configuration
(we like to think Warewulf had something to do with that ;).
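(For anyone doing the arithmetic at home: efficiency here just means
Rmax/Rpeak. A rough back-of-the-envelope sanity check, assuming the
usual 2 double-precision flops per clock for these EM64T Xeons and
using the 256-processor / 3.4 GHz figures Geoff quotes below, looks
something like this:)

    # rough sketch only -- assumes 2 DP flops/clock per processor
    procs, clock_ghz, flops_per_clock = 256, 3.4, 2
    rpeak = procs * clock_ghz * flops_per_clock      # ~1740.8 Gflops
    print("1.1 Tflops is about %.0f%% of peak" % (100 * 1100.0 / rpeak))    # ~63%
    print("1516 Gflops at 83.4%% implies Rpeak ~%.0f Gflops" % (1516 / 0.834))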
I personally didn't do the HPL run, so I can't help with configuration.
The press release of our new system can be found at:
http://access.ncsa.uiuc.edu/Releases/09.19.05_Berkeley_L.html
We first purchased the cluster from Dell with their "supported
RedHat/Rocks" build. It took just over 2 days to get all the bugs
hammered out and running our scientific code (acceptance test). Once
that was done, we rebuilt the cluster completely with Warewulf (in 2
hours) running on Centos-3.5 and saw a pretty amazing speedup of the
scientific code (*over* 30% faster runtimes) compared with the previous
RedHat/Rocks build. Warewulf also makes the cluster rather trivial to
maintain and customize (OK, enough of my evangelism).
We did find that symbol errors in the fabric are very common if anyone
so much as "breathes" on the wire plant, and they cause drastic changes
in performance.
On Thu, Sep 22, 2005 at 01:13:21PM -0400, Geoff Cowles wrote:
> We have a 128-node cluster running RedHat/Rocks comprised of Dell
> 1850s with dual 3.4 GHz Xeons and 2 GB memory each. The interconnect
> is a 4x InfiniBand nonblocking fabric. HPL was built using Intel's
> mplpk MPP distribution, which links the Topspin MPICH with the
> Intel-optimized EM64T BLAS routines. When running HPL, we found that
> we were able to get decent but not great performance, and we seem to
> be limited by problem size. We can reach about 1.1 Tflops with a full
> 256-processor run, but the problem size at which swap space begins to
> be used is very small, around 80K. With N=120K we are using all
> memory (real+virtual = 4 GB) and the program crashes. Theoretically,
> with 256 GBytes of memory we should be able to use a problem size of
> around 150K, assuming the OS uses about 1/4 of the RAM. Similar
> clusters on the Top500 list are able to obtain closer to 1.3 Tflops
> with an NMAX of around 150K.
>
> Any ideas?
>
> Thanks
>
> -Geoff
>
>
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Geoffrey W. Cowles, Research Scientist
> School for Marine Science and Technology    phone: (508) 910-6397
> University of Massachusetts, Dartmouth      fax:   (508) 910-6371
> 706 Rodney French Blvd.                     email: gcowles at umassd.edu
> New Bedford, MA 02744-1221                  http://codfish.smast.umassd.edu/
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
--
Greg Kurtzer
Berkeley Lab, Linux guy