[Beowulf] building Infiniband 4x cluster questions
Vincent Diepeveen
diep at xs4all.nl
Mon Nov 7 12:33:52 PST 2011
Hi Greg,

Very useful info! I was already wondering about the different timings
I see for InfiniBand, but indeed it's the ConnectX that scores better
in latency. It's $289 on eBay, but then that's directly QDR:
"ConnectX-2 Dual-Port VPI QDR Infiniband Mezzanine I/O Card for Dell
PowerEdge M1000e-Series Blade Servers"
This 1.91 microseconds for an RDMA read is for a ConnectX. Not bad
for InfiniBand. Only 50% higher latency than Quadrics, which is PCI-X
of course.
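
For anyone wanting to reproduce such a number: RDMA read latency is
usually measured with ib_read_lat from the OFED perftest suite. A
minimal sketch (the device name mlx4_0 is my assumption for a
ConnectX-family card; check what ibv_devinfo reports on your system):

    # on the server node
    ib_read_lat -d mlx4_0
    # on the client node, pointing at the server
    ib_read_lat -d mlx4_0 server-hostname

It prints minimum, maximum and typical latencies over many iterations.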

Now all that's needed is a cheap price for 'em :)
It seems indeed that all the 'cheap' offers are the InfiniHost III DDR
versions.
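
By the way, the one-way pingpong latencies quoted here and in the
message below come from the usual trick: time a round trip and halve
it. A minimal sketch in C, assuming any MPI implementation (nothing
in it is specific to one vendor or card):

    /* pingpong.c - one-way latency estimate between ranks 0 and 1 */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, i;
        char buf = 0;
        const int iters = 10000;
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < iters; i++) {
            if (rank == 0) {
                /* send one byte, wait for the echo */
                MPI_Send(&buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                /* echo it straight back */
                MPI_Recv(&buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        /* round-trip time / 2 = one-way latency */
        if (rank == 0)
            printf("one-way latency: %.2f us\n",
                   (t1 - t0) / (2.0 * iters) * 1e6);

        MPI_Finalize();
        return 0;
    }

Run it with one rank on each of two nodes and the printed number is
directly comparable to the figures quoted below.
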
Regards,
Vincent
On Nov 7, 2011, at 9:21 PM, Greg Keller wrote:
>
>> Date: Mon, 07 Nov 2011 13:16:00 -0500
>> From: Prentice Bisbal <prentice at ias.edu>
>> Subject: Re: [Beowulf] building Infiniband 4x cluster questions
>> Cc: Beowulf Mailing List <beowulf at beowulf.org>
>> Message-ID: <4EB82060.3050300 at ias.edu>
>> Content-Type: text/plain; charset=ISO-8859-1
>>
>> Vincent,
>>
>> Don't forget that between SDR and QDR, there is DDR. If SDR is too
>> slow, and QDR is too expensive, DDR might be just right.
> And for DDR a key thing is, when latency matters, "ConnectX" DDR is
> much better than the earlier "InfiniHost III" DDR cards. We have
> hundreds of each, and the ConnectX cards make a large impact for some
> codes. Although nearly antique now, we actually have plans for the
> ConnectX cards in yet another round of updated systems. This is the
> third generation of system I have been able to re-use the cards in
> (Harpertown, Nehalem, and now single-socket Sandy Bridge), which
> makes me very happy. A great investment that will likely live until
> PCIe Gen3 slots are the norm.
> --
> Da Bears?!
>
>> --
>> Goldilocks
>>
>>
>> On 11/07/2011 11:58 AM, Vincent Diepeveen wrote:
>>>> Hi Prentice,
>>>>
>>>> I had noticed the difference between SDR and QDR;
>>>> the SDR cards are affordable, the QDR ones aren't.
>>>>
>>>> The SDRs are all $50-$75 on eBay now. For the QDRs I haven't
>>>> found prices in that range yet.
>>>>
>>>> If I wanted to build a low-latency network and had a budget of
>>>> $800 or so a node, of course I would build a Dolphin SCI network,
>>>> as that's probably the lowest-latency card sold, at $675 or so
>>>> apiece.
>>>>
>>>> I do not really see a rival to Dolphin latency-wise there. I bet
>>>> most manufacturers selling clusters don't use it, as they can
>>>> make $100 or so more profit selling other networking gear, and
>>>> universities usually swallow that.
>>>>
>>>> So price totally dominates the network. As it seems now,
>>>> InfiniBand 4x is not going to offer enough performance. The
>>>> one-way pingpong latencies over a switch that I see for it are
>>>> not very convincing: I see remote writes to RAM taking nearly 10
>>>> microseconds for 4x InfiniBand, and that card is the only
>>>> affordable one.
>>>>
>>>> The old QM400's I have here are 2.1 us or so one-way pingpong,
>>>> and QM500-B's are plentiful on the net (big disadvantage, of
>>>> course: they need PCI-X); those are 1.3 us or so and have SHMEM.
>>>> I'm not seeing a cheap switch for the QM500's though, nor cables.
>>>>
>>>> You see, price really dominates everything here. You cannot
>>>> build small cheap nodes if the port price, thanks to an expensive
>>>> network card, more than doubles.
>>>>
>>>> Power is not the real concern for now - if a factory already
>>>> burns a couple of hundred megawatts, a small cluster somewhere in
>>>> the attic eating a few kilowatts is not really a problem :)
>>>>
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf