[Warewulf] Re: [Beowulf] hpl size problems
Andrew Piskorski
atp at piskorski.com
Mon Sep 26 12:43:00 PDT 2005
On Mon, Sep 26, 2005 at 10:29:31AM -0700, Greg M. Kurtzer wrote:
> On Sat, Sep 24, 2005 at 12:10:46PM -0400, Mark Hahn wrote:
> > > hours) running on Centos-3.5 and saw a pretty amazing speedup of the
> > > scientific code (*over* 30% faster runtimes) than with the previous
> > > RedHat/Rocks build. Warewulf also makes the cluster rather trivial to
> >
> > such a speedup is indeed impressive; what changed?
>
> Actually, we used the same kernel (recompiled from RHEL), and exactly the
> same compilers, MPI and IB (literally the same RPMs). The only thing
> that changed was the cluster management paradigm. The tests were done
> back to back with no hardware changes.
>
> If someone else also has thoughts as to what would have caused the
> speedup, I would be very interested.
Please check me if this is correct, as I am not familiar with HPL: The
HPL benchmark depends on all the nodes progressing in lockstep, and if
any one node takes longer, then all the others must wait until the
slow node catches up, right? (That's called a barrier.) And those
barriers occur frequently, at relatively short time intervals, right?
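To make that concrete, here's a toy MPI loop (my own sketch, NOT HPL
itself; the 10 ms of "work" and the iteration count are numbers I made
up) showing how one slow node gates everybody at each barrier:

/* Toy sketch of the barrier pattern, not HPL.  Every rank does some
 * "work", then waits at a barrier, so each iteration runs at the pace
 * of the slowest rank.  A daemon stealing cycles on ANY node
 * therefore stretches that iteration for ALL nodes. */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t0 = MPI_Wtime();
    for (int i = 0; i < 1000; i++) {
        usleep(10 * 1000);           /* stand-in for 10 ms of real work */
        MPI_Barrier(MPI_COMM_WORLD); /* everyone waits for the slowest */
    }
    double elapsed = MPI_Wtime() - t0;

    if (rank == 0)
        printf("1000 barrier iterations: %.2f s (ideal: ~10 s)\n", elapsed);

    MPI_Finalize();
    return 0;
}

Compile it with mpicc and run it across the nodes; on a noisy cluster
the elapsed time will come in noticeably above the ideal.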
If those assumptions are correct, then without knowing more, I would
wager heavily that the 30+% Warewulf speedup is due primarily to
eliminating a bunch of unnecessary daemons on the slave nodes. (Just
how many were there with the original setup?)
It's amusing that Mark Hahn is already participating in this thread,
because an earlier post of his to the Beowulf list linked to a paper
that explains a detailed real-world example of that effect very nicely:
http://www.beowulf.org/archive/2005-July/013215.html
http://www.sc-conference.org/sc2003/paperpdfs/pap301.pdf
Basically, daemons cause interrupts which are not synchronized across
nodes, and that produces lots of variation in barrier latency across
the nodes - AKA, jitter. And with barrier-heavy code, lots of jitter
causes disastrous performance. On the 8192 processor ASCI Q, they saw
a FACTOR OF TWO performance loss due to those effects...
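A quick back-of-the-envelope simulation (again my own sketch, not from
the paper; the 1% interruption rate and the 0.5 ms worst-case delay
are invented numbers) shows why the damage grows with node count - the
barrier always pays for the unluckiest node:

/* Sketch: per-iteration time at a barrier is work + max(noise) over
 * all nodes.  Small, uncorrelated daemon delays therefore inflate
 * nearly every iteration once the node count is large. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const double work_ms  = 1.0;  /* useful work per iteration (invented) */
    const double noise_ms = 0.5;  /* max daemon delay on one node (invented) */
    const int    iters    = 1000;
    const int    sizes[]  = { 16, 256, 8192 };

    srand(1);
    for (int s = 0; s < 3; s++) {
        int nodes = sizes[s];
        double total = 0.0;
        for (int i = 0; i < iters; i++) {
            double worst = 0.0;  /* the barrier waits for the slowest node */
            for (int n = 0; n < nodes; n++) {
                /* assume each node is interrupted on ~1% of iterations */
                if (rand() % 100 == 0) {
                    double d = noise_ms * rand() / RAND_MAX;
                    if (d > worst) worst = d;
                }
            }
            total += work_ms + worst;
        }
        printf("%5d nodes: %.2fx the ideal runtime\n",
               nodes, total / (iters * work_ms));
    }
    return 0;
}

Not the same numbers as ASCI Q, of course, but it shows the flavor:
the more nodes you have, the closer every barrier gets to paying the
worst-case delay, even though each individual node is idle 99% of the
time.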
So maybe, consider yourself lucky that your pre-Warewulf cluster was
managing to run at 77% of the speed it should have been running at.
And maybe you can make it go faster yet. :)
--
Andrew Piskorski <atp at piskorski.com>
http://www.piskorski.com/