[Beowulf] building Infiniband 4x cluster questions
Vincent Diepeveen
diep at xs4all.nl
Mon Nov 7 15:25:56 PST 2011
Yeah well, I'm no expert on what PCI-X adds versus PCI-e.
I'm on a budget here :)
I just test things and go for the fastest. But if we do the
theoretical math, SHMEM is of course difficult to beat.
Google for SHMEM measurements; there aren't many out there.
The fact that so few have standardized on or rewritten their
floating-point software for GPUs already says enough about all the
legacy codes in the HPC world :)
When, some years ago, I had a working 2-node cluster here with
QM500-A cards in 32-bit, 33 MHz PCI slots (the long ones), I saw a
blocked-read latency of under 3 us on my screen. Admittedly I had no
switch in between; it was a direct connection between the two Elan4s.
I'm not sure what PCI-X adds to that when clocked at 133 MHz, but it
won't be a big difference compared with PCI-e.
PCI-e probably only has more bandwidth, doesn't it?
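
If I do the back-of-the-envelope bus math (peak numbers only; this
says nothing about per-transaction latency, which is the part that
matters for blocked reads), a quick sketch in Python:

    # Rough peak-bandwidth comparison of PCI-X vs. PCIe gen1; real
    # throughput is lower because of protocol overhead.

    PCIX_BUS_BITS = 64      # PCI-X is a 64-bit parallel bus
    PCIX_CLOCK_HZ = 133e6   # the 133 MHz flavour
    pcix_peak = PCIX_BUS_BITS / 8 * PCIX_CLOCK_HZ  # bytes/s, shared, half duplex

    PCIE_LANES = 8          # a typical HCA slot: PCIe gen1 x8
    PCIE_GT_PER_S = 2.5e9   # 2.5 GT/s per lane
    ENCODING = 8 / 10       # 8b/10b line coding
    pcie_peak = PCIE_LANES * PCIE_GT_PER_S * ENCODING / 8  # bytes/s, per direction

    print(f"PCI-X 64-bit/133 MHz: {pcix_peak / 1e9:.2f} GB/s shared, half duplex")
    print(f"PCIe gen1 x8        : {pcie_peak / 1e9:.2f} GB/s per direction, full duplex")

So on paper PCIe x8 roughly doubles the 133 MHz PCI-X bus and is full
duplex on top of that; the cost of crossing the bus is a separate story.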
Beating such hardware second-hand is difficult: $30 on eBay and I can
install 4 rails or so.
Haven't found the cables yet though...
So I don't see how to outdo that with old InfiniBand cards, which are
$130 and upwards for the ConnectX, say $150 soon, and which would
allow only a single rail or at best 2 rails. So far I haven't heard
from anyone who runs more than single-rail IB.
Is it possible to install 2 rails with IB?
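
From what I understand it is at least supported on the software side:
Open MPI's openib BTL can stripe large messages across more than one
port/HCA if you tell it which interfaces to use. A rough sketch only
(mlx4_0 and mlx4_1 are placeholder device names for whatever the two
cards show up as):

    # Hypothetical dual-rail run: list both HCAs/ports as rails for
    # the openib BTL.
    mpirun --mca btl openib,sm,self \
           --mca btl_openib_if_include mlx4_0,mlx4_1 \
           -np 16 ./pingpong_benchmark

Whether that actually helps the blocked-read cost depends on the code
keeping both rails busy, of course.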
So if I use your number pessimistically, meaning there is some
overhead from PCI-X, then ConnectX-type IB can theoretically do
1 million blocked reads per second with 2 rails. That's $300 or so,
cables not counted.
Quadrics QM500 is around 2 million blocked reads per second for
4 rails at $120, cables not counted.
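
Making that arithmetic explicit (a sketch only: the 1.91 us ConnectX
RDMA-read figure is from this thread, the ~2 us Quadrics number is
what my 2-million-for-4-rails figure implies, and I assume one
outstanding blocked read per rail at a time):

    # Back-of-the-envelope blocked-read rate: with one outstanding
    # blocked (round-trip) read per rail, rate per rail ~= 1 / latency.

    def blocked_reads_per_sec(latency_us, rails):
        return rails * 1e6 / latency_us

    ib_2rail = blocked_reads_per_sec(latency_us=1.91, rails=2)  # ConnectX, ~$300 in cards
    qm500_4r = blocked_reads_per_sec(latency_us=2.0, rails=4)   # QM500,    ~$120 in cards

    print(f"ConnectX IB, 2 rails: {ib_2rail / 1e6:.2f} M blocked reads/s")
    print(f"QM500,      4 rails: {qm500_4r / 1e6:.2f} M blocked reads/s")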
Copper cables cost around 100 ns per 10 meters if I use 1/3 of light
speed for signal propagation in copper, so that cost is also kept low
with short cables.
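
That cable term is just length divided by propagation speed; checking
the 100 ns figure quickly (1/3 c is my pessimistic assumption; typical
copper cables propagate closer to 0.6-0.7 c, which would roughly halve
it):

    # Signal propagation delay in a cable: length / (velocity_factor * c).
    C = 3.0e8  # speed of light in vacuum, m/s

    def cable_delay_ns(length_m, velocity_factor):
        return length_m / (velocity_factor * C) * 1e9

    print(cable_delay_ns(10, 1 / 3))  # ~100 ns, the pessimistic 1/3 c figure
    print(cable_delay_ns(10, 0.66))   # ~51 ns at a more typical velocity factor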
On Nov 7, 2011, at 11:07 PM, Gilad Shainer wrote:
> RDMA read is a round-trip operation and it is measured from host
> memory to host memory. I doubt Quadrics had half of that for
> round-trip operations measured from host memory to host memory. The
> PCI-X memory-to-card hop was around 0.7 us by itself (one way)....
>
> Gilad
>
>
> -----Original Message-----
> From: beowulf-bounces at beowulf.org [mailto:beowulf-
> bounces at beowulf.org] On Behalf Of Vincent Diepeveen
> Sent: Monday, November 07, 2011 12:33 PM
> To: Greg Keller
> Cc: beowulf at beowulf.org
> Subject: Re: [Beowulf] building Infiniband 4x cluster questions
>
> hi Greg,
>
> Very useful info! I was already wondering about the different
> timings I see for InfiniBand, but indeed it's the ConnectX that
> scores better in latency.
>
> $289 on eBay, but then that's QDR right away.
>
> "ConnectX-2 Dual-Port VPI QDR Infiniband Mezzanine I/O Card for
> Dell PowerEdge M1000e-Series Blade Servers"
>
> This 1.91 microseconds for an RDMA read is for a ConnectX. Not bad
> for InfiniBand.
> Only 50% slower in latency than Quadrics, which is PCI-X of course.
>
> Now all that's needed is a cheap price for 'em :)
>
> Indeed, it seems all the 'cheap' offers are the InfiniHost III DDR
> versions.
>
> Regards,
> Vincent
>
> On Nov 7, 2011, at 9:21 PM, Greg Keller wrote:
>
>>
>>> Date: Mon, 07 Nov 2011 13:16:00 -0500
>>> From: Prentice Bisbal<prentice at ias.edu>
>>> Subject: Re: [Beowulf] building Infiniband 4x cluster questions
>>> Cc: Beowulf Mailing List<beowulf at beowulf.org>
>>> Message-ID:<4EB82060.3050300 at ias.edu>
>>> Content-Type: text/plain; charset=ISO-8859-1
>>>
>>> Vincent,
>>>
>>> Don't forget that between SDR and QDR, there is DDR. If SDR is too
>>> slow, and QDR is too expensive, DDR might be just right.
>> And for DDR a key thing is that, when latency matters, "ConnectX"
>> DDR is much better than the earlier "InfiniHost III" DDR cards. We
>> have hundreds of each, and the ConnectX cards make a large impact
>> for some codes. Although nearly antique now, we actually have plans
>> for the ConnectX cards in yet another round of updated systems. This
>> is the third generation of system I have been able to re-use the
>> cards in (Harpertown, Nehalem, and now single-socket Sandy Bridge),
>> which makes me very happy. A great investment that will likely live
>> until PCIe Gen3 slots are the norm.
>> --
>> Da Bears?!
>>
>>> --
>>> Goldilocks
>>>
>>>
>>> On 11/07/2011 11:58 AM, Vincent Diepeveen wrote:
>>>>> hi Prentice,
>>>>>
>>>>> I had noticed the difference between SDR and QDR; the SDR cards
>>>>> are affordable, the QDR ones aren't.
>>>>>
>>>>> The SDRs are all $50-$75 on eBay now. For the QDRs I haven't
>>>>> found cheap prices in that range yet.
>>>>>
>>>>> If I wanted to build a low-latency network and had a budget of
>>>>> $800 or so per node, of course I would build a Dolphin SCI
>>>>> network, as that's probably the lowest-latency card sold, at
>>>>> $675 or so a piece.
>>>>>
>>>>> I don't really see a rival to Dolphin there latency-wise. I bet
>>>>> most manufacturers selling clusters don't use it, as they can
>>>>> make $100 or so more profit selling other networking gear, and
>>>>> universities usually swallow that.
>>>>>
>>>>> So price totally dominates the network. As it looks now,
>>>>> InfiniBand 4x is not going to offer enough performance.
>>>>> The one-way ping-pong latencies over a switch that I see for it
>>>>> are not very convincing. I see remote writes to RAM of nearly
>>>>> 10 microseconds for 4x InfiniBand, and that card is the only
>>>>> affordable one.
>>>>>
>>>>> The old QM400s I have here are 2.1 us or so one-way ping-pong,
>>>>> and QM500-Bs are plentiful on the net (big disadvantage, of
>>>>> course: they need PCI-X); those do 1.3 us or so and have SHMEM.
>>>>> Not seeing a cheap switch for the QM500s though, nor cables.
>>>>>
>>>>> You see, price really dominates everything here. You cannot
>>>>> build small cheap nodes if the per-port price more than doubles
>>>>> thanks to an expensive network card.
>>>>>
>>>>> Power is not the real concern for now - if a factory already
>>>>> burns a couple of hundred megawatts, a small cluster somewhere in
>>>>> the attic eating a few kilowatts is not really a problem :)
>>>>>
>>
>