[Beowulf] evaluating FLOPS capacity of our cluster
Gus Correa
gus at ldeo.columbia.edu
Mon May 11 16:22:00 PDT 2009
>>> All 64 bit machines with a dual channel
>>> bonded Gigabit ethernet interconnect. AMD Quad-Core AMD Opteron(tm)
>>> Processor 2354.
>>
>> As others have said, 50% is a more likely HPL efficiency for a large GigE
>> cluster, but with your smallish cluster (24 nodes) and bonded channels,
>> you would probably get closer to 80% than 50%.
>
> Thank you.
> That clarifies things a bit.
> Are "bonded channels" what you get in a single switch?
> So, it is "small is better", right? :)
> How about Infiniband, would the same principle apply,
> a small cluster with a single switch being more efficient than a large
> one with stacked switches?
>
Hi Rahul, list
Oops, I misunderstood what you said.
I see now. You are bonding the channels of your nodes' dual GigE
ports to double your bandwidth, particularly for MPI, right?
I am curious about your results with channel bonding.
Open MPI claims to work across two or more networks without the need
for channel bonding.
What MPI do you use?
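By the way, if you are running Open MPI you may not need bonding at all
for the MPI traffic: its TCP BTL can stripe messages over several ports
if you list the interfaces explicitly. Assuming your two ports show up
as eth0 and eth1 (just a guess on my part, check yours), something like

    mpirun -np <nprocs> --hostfile <hosts> \
        --mca btl tcp,sm,self --mca btl_tcp_if_include eth0,eth1 ./xhpl

should use both of them. I would be curious whether that beats the
bonded interface for HPL.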
In any case, a single 24-48 port GigE switch (if it is a good brand)
should have a flat, uniform latency between any pair of ports, right?
Whereas on a larger cluster, with stacked switches, the latency will
be different (and larger) for different pairs of nodes/ports, I presume.
This may be the main reason why large installations don't perform
as well as small clusters (at least in terms of the HPL Rmax/Rpeak ratio),
or not?
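
Just to put numbers on it, here is a back-of-the-envelope estimate of
what you should expect from HPL. I am assuming dual-socket nodes (the
2xxx Opterons are 2-socket parts, but please correct me if yours are
single-socket) and the 2.2 GHz clock of the Opteron 2354 (Barcelona),
which does 4 double-precision flops per core per cycle:

    # rough Rpeak / Rmax estimate for a 24-node Opteron 2354 cluster
    nodes = 24
    sockets_per_node = 2        # assumption: dual-socket boards
    cores_per_socket = 4        # quad-core
    clock_hz = 2.2e9            # Opteron 2354 clock
    flops_per_cycle = 4         # 2 SSE adds + 2 SSE muls per cycle, double precision

    rpeak = nodes * sockets_per_node * cores_per_socket * clock_hz * flops_per_cycle
    print("Rpeak = %.2f TFlops" % (rpeak / 1e12))
    # HPL efficiency somewhere between 50% and 80% on bonded GigE (see above)
    print("Rmax  = %.2f - %.2f TFlops" % (0.5 * rpeak / 1e12, 0.8 * rpeak / 1e12))

That gives an Rpeak around 1.69 TFlops, and an expected Rmax somewhere
between roughly 0.85 and 1.35 TFlops, depending on where your
efficiency really lands.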
Gus Correa