[Beowulf] building Infiniband 4x cluster questions

Joseph Han jhh3851 at yahoo.com
Mon Nov 7 15:44:41 PST 2011


To further complicate the issue: if latency is the key driving factor for older hardware, I think the chips with the InfiniPath/PathScale lineage tend to have lower latencies than the Mellanox InfiniHost line.
In the DDR time frame, I measured InfiniPath ping-pong latencies 3-4x better than those of DDR Mellanox silicon.  Of course, the InfiniPath silicon requires different kernel drivers than the Mellanox parts (ipath versus mthca).  These were QLogic-specific HCAs, not the rebranded SilverStorm HCAs sold by QLogic.  (Confused yet?)  I believe the model numbers were QLogic 7240 for the DDR version and QLogic 7140 for the SDR one.
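For anyone who wants to reproduce that kind of number, the measurement is just a small-message MPI ping-pong loop along these lines (a minimal sketch, not the exact benchmark behind the figures above); the one-way latency is the round-trip time divided by two.

/* ping-pong latency sketch: run with exactly 2 ranks, one per node */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000;
    char buf[8] = {0};            /* small message, so we measure latency, not bandwidth */
    int rank, size, i;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "run with 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)   /* each iteration is a round trip, so divide by 2*iters */
        printf("one-way latency: %.2f us\n", (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}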
Joseph


Message: 2
Date: Mon, 07 Nov 2011 14:21:51 -0600
From: Greg Keller <Greg at Keller.net>
Subject: Re: [Beowulf] building Infiniband 4x cluster questions
To: beowulf at beowulf.org
Message-ID: <4EB83DDF.5020902 at Keller.net>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed


> Date: Mon, 07 Nov 2011 13:16:00 -0500
> From: Prentice Bisbal<prentice at ias.edu>
> Subject: Re: [Beowulf] building Infiniband 4x cluster questions
> Cc: Beowulf Mailing List<beowulf at beowulf.org>
> Message-ID:<4EB82060.3050300 at ias.edu>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Vincent,
>
> Don't forget that between SDR and QDR, there is DDR.  If SDR is too
> slow, and QDR is too expensive, DDR might be just right.
And for DDR a key thing is, when latency matters, "ConnectX" DDR is much 
better than the earlier "InfiniHost III" DDR cards.  We have hundreds of 
each, and the ConnectX cards make a large difference for some codes.  Although 
nearly antique now, we actually have plans for the ConnectX cards in yet 
another round of updated systems.  This is the third generation of system I 
have been able to re-use the cards in (Harpertown, Nehalem, and now 
single-socket Sandy Bridge), which makes me very happy.  A great 
investment that will likely live on until PCIe Gen3 slots are the norm.
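
As an aside, a quick way to check which silicon (and driver) a node actually has is to list the verbs devices; the device name gives it away (mthca* for InfiniHost, mlx4* for ConnectX, ipath*/qib* for the QLogic HCAs).  A minimal libibverbs sketch, offered as an illustration rather than a polished tool:

#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int n = 0, i;
    struct ibv_device **list = ibv_get_device_list(&n);  /* all HCAs visible to verbs */
    if (!list) {
        perror("ibv_get_device_list");
        return 1;
    }
    for (i = 0; i < n; i++)
        printf("%s\n", ibv_get_device_name(list[i]));    /* e.g. mthca0 vs mlx4_0 */
    ibv_free_device_list(list);
    return 0;
}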
--
Da Bears?!

> --
> Goldilocks
>
>
> On 11/07/2011 11:58 AM, Vincent Diepeveen wrote:
>> >  Hi Prentice,
>> >
>> >  I had noticed the difference between SDR and QDR;
>> >  the SDR cards are affordable, the QDR ones aren't.
>> >
>> >  The SDRs are all $50-$75 on eBay now. I haven't found the QDRs at
>> >  cheap prices in that range yet.
>> >
>> >  If I wanted to build a low-latency network and had a budget
>> >  of $800 or so per node, of course I would
>> >  build a Dolphin SCI network, as that's probably the lowest-latency
>> >  card sold, at $675 or so apiece.
>> >
>> >  I do not really see a rival to Dolphin latency-wise. I bet most
>> >  manufacturers selling clusters don't use
>> >  it because they can make $100 or so more profit selling other networking
>> >  stuff, and universities usually swallow that.
>> >
>> >  So price totally dominates the network. As it seems now, InfiniBand 4x is
>> >  not going to offer enough performance.
>> >  The one-way ping-pong latencies over a switch that I have seen for it are not
>> >  very convincing. I see remote writes to RAM
>> >  at nearly 10 microseconds for 4x InfiniBand, and that card is the
>> >  only affordable one.
>> >
>> >  The old QM400s I have here are at 2.1 us or so one-way ping-pong, and
>> >  QM500-Bs are plentiful on the net (big disadvantage, of course: they need
>> >  PCI-X).
>> >  Those are at 1.3 us or so and have SHMEM. I am not seeing a cheap
>> >  switch for the QM500s, though, nor cables.
>> >
>> >  You see, price really dominates everything here. You cannot build small,
>> >  cheap nodes if the port price, thanks to an expensive network card,
>> >  more than doubles.
>> >
>> >  Power is not the real concern for now - if a factory already burns a
>> >  couple of hundred megawatts, a small cluster somewhere in the
>> >  attic eating
>> >  a few kilowatts is not really a problem :)
>> >
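
On the remote writes and SHMEM mentioned in the quoted text: the number being compared there is essentially the latency of a one-sided put.  A sketch of that measurement using OpenSHMEM-style calls (the Quadrics-era shmem API differed in detail, so treat this purely as an illustration; build with something like oshcc and run with 2 PEs):

#include <shmem.h>
#include <stdio.h>
#include <time.h>

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void)
{
    const int iters = 10000;
    static char target[8];        /* static => symmetric, remotely writable */
    char src[8] = {0};
    double t0, t1;
    int i, me;

    shmem_init();
    me = shmem_my_pe();
    if (shmem_n_pes() < 2) {
        if (me == 0) fprintf(stderr, "run with at least 2 PEs\n");
        shmem_finalize();
        return 1;
    }

    shmem_barrier_all();
    t0 = now();
    if (me == 0) {
        for (i = 0; i < iters; i++) {
            shmem_putmem(target, src, sizeof src, 1);  /* write into PE 1's memory */
            shmem_quiet();                             /* wait until the put has completed */
        }
    }
    t1 = now();
    shmem_barrier_all();

    if (me == 0)
        printf("avg remote write latency: %.2f us\n", (t1 - t0) / iters * 1e6);

    shmem_finalize();
    return 0;
}

The MPI ping-pong earlier measures two-sided send/receive latency; this one measures the one-sided write path that the Quadrics SHMEM numbers refer to.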


