[Beowulf] Re: switching capacity terminology confusion

Gerry Creager gerry.creager at tamu.edu
Thu Sep 17 20:58:52 PDT 2009


Rahul Nabar wrote:
> On Wed, Sep 16, 2009 at 11:28 AM, Gerry Creager <gerry.creager at tamu.edu> wrote:
>> silicon, if I recall correctly.  I've several S50s in my data center, hammer
>> the fool out of them, and am happy.
> 
> Thanks Gerry! I have been hearing a lot of great reviews of Force10.
> Maybe I will seriously consider them.
> 
>> Prior to them, we used Foundry
>> EdgeIron1G switches for our gigabit-connected clusters.  They worked well.
>>  For our newer gigabit-connected cluster we went with the HP 5412zl, and
>> have been happy.
>>
>> I'd not recommend cheap switches: go too cheap and they'll bite you
>> with poor MPI and I/O performance.
> 
> On the other end of the spectrum is Cisco. Their gear carries a huge $$
> premium over the other vendors, and when I ask why, the best answer I
> get is "Cisco is the market leader in switches". They won't show me
> which parameters actually make a Cisco switch better than the rest.

With the POSSIBLE exception of the newer Nexus line from Cisco, I can't 
think of a reason I'd put a Cisco-labeled switch in my data center... 
except for a Linksys for non-critical applications.
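
If you want numbers rather than vendor claims, a two-node MPI ping-pong
through the candidate switch is usually enough to expose a latency or
bandwidth problem.  Something along these lines (an untested sketch; the
launch line assumes Open MPI, and the message sizes and iteration count
are arbitrary):

/* pingpong.c -- minimal MPI ping-pong between two ranks on two nodes.
 * Build and run (Open MPI syntax assumed for the launcher):
 *   mpicc -O2 pingpong.c -o pingpong
 *   mpirun -np 2 --map-by node ./pingpong
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "Run with exactly 2 ranks.\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    const int iters = 200;                        /* round trips per size */
    for (int bytes = 1; bytes <= (1 << 20); bytes <<= 4) {
        char *buf = malloc(bytes);
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else {
                MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();
        if (rank == 0) {
            double rtt_us = (t1 - t0) / iters * 1e6;       /* round trip */
            double mbps   = 2.0 * bytes * iters / (t1 - t0) / 1e6;
            printf("%8d bytes  %10.2f us RTT  %10.2f MB/s\n",
                   bytes, rtt_us, mbps);
        }
        free(buf);
    }
    MPI_Finalize();
    return 0;
}

Small-message round-trip time is usually where a marginal switch shows
itself first; the large-message numbers tell you whether it can sustain
anything close to wire rate.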
-- 
Gerry Creager -- gerry.creager at tamu.edu
Texas Mesonet -- AATLT, Texas A&M University	
Cell: 979.229.5301 Office: 979.458.4020 FAX: 979.862.3983
Office: 1700 Research Parkway Ste 160, TAMU, College Station, TX 77843


