[Beowulf] how large can we go with 1GB Ethernet? / Re: how large of an installation have people used NFS, with?

Jaime Requinton Jaime at servepath.com
Wed Sep 9 19:01:00 PDT 2009


Can you use this switch?  You won't lose a port for uplink since it has fiber and/or copper uplink ports.

Just my 10 cents...

Forgot to paste the link:  http://www.bestbuy.com/site/olspage.jsp?skuId=8891915&type=product&id=1212192931527&ref=06&loc=01&ci_src=14110944&ci_sku=8891915



-----Original Message-----
From: beowulf-bounces at beowulf.org [mailto:beowulf-bounces at beowulf.org] On Behalf Of Jaime Requinton
Sent: Wednesday, September 09, 2009 3:12 PM
To: Mike Davis; psc
Cc: beowulf at beowulf.org
Subject: RE: [Beowulf] how large can we go with 1GB Ethernet? / Re: how large of an installation have people used NFS, with?

Can you use this switch?  You won't lose a port for uplink since it has fiber and/or copper uplink ports.

Just my 10 cents...


-----Original Message-----
From: beowulf-bounces at beowulf.org [mailto:beowulf-bounces at beowulf.org] On Behalf Of Mike Davis
Sent: Wednesday, September 09, 2009 2:10 PM
To: psc
Cc: beowulf at beowulf.org
Subject: Re: [Beowulf] how large can we go with 1GB Ethernet? / Re: how large of an installation have people used NFS, with?

psc wrote:
> I wonder what the biggest sensible cluster based on a 1 Gb Ethernet
> network would be, and especially how you would connect those gigabit
> switches together. Right now, on one of our four clusters, we have two
> 48-port gigabit switches connected together with 6 patch cables, and I
> just ran out of ports for expansion. I wonder where to go from here, as
> we already have four clusters and it would be great to stop adding
> clusters and start expanding them beyond the number of ports on the
> switches. NFS and 1 Gb Ethernet work great for us and we want to stick
> with them, but we would love to find a way to overcome the current
> "switch limitation". I have heard that there are some "stackable
> switches"... In any case, any idea or suggestion will be appreciated.
>
> Thanks!
> psc
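Just to put numbers on the two-switch setup above: with two 48-port switches joined by a 6-cable trunk, every cross-switch conversation shares those 6 links. A minimal back-of-envelope sketch in Python (assuming 6 ports per switch go to the trunk and the rest to nodes; the figures are illustrative, not measured):

# Back-of-envelope numbers for two 48-port GigE switches
# joined by a 6-cable trunk (figures assumed, not measured).
PORTS_PER_SWITCH = 48
TRUNK_LINKS = 6                   # patch cables between the switches
LINK_GBPS = 1                     # 1 Gb Ethernet per port

nodes_per_switch = PORTS_PER_SWITCH - TRUNK_LINKS   # 42 node ports left
trunk_gbps = TRUNK_LINKS * LINK_GBPS                # 6 Gb/s between switches
demand_gbps = nodes_per_switch * LINK_GBPS          # worst case: all nodes talk across

print(f"{nodes_per_switch} nodes per switch, {trunk_gbps} Gb/s trunk")
print(f"worst-case cross-switch oversubscription: {demand_gbps / trunk_gbps:.0f}:1")
# -> 42 nodes per switch, 6 Gb/s trunk
# -> worst-case cross-switch oversubscription: 7:1

A single large switch makes that ratio 1:1 for every pair of nodes, which is the point of the flat model described below.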
When we started running clusters in 2000, we made the decision to use a
flat networking model and a single switch wherever possible. We use 144-
and 160-port GigE switches for two of our clusters. The overall
performance is better and the routing is less complex. Larger switches
are available as well.

We try to go with a flat model for InfiniBand as well. Right now we are
using a 96-port InfiniBand switch. When we add nodes to that cluster we
will move up to either a 144- or a 288-port chassis. Running the numbers,
I found the cost of the large chassis to be on par with that of the extra
switches required to build the same network from 24- or 36-port switches.
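To see where that parity comes from, you can count the boxes and trunk cables a non-blocking two-level fat tree needs when it is built from small switches. A rough sketch (the 288-port target and the clean one-uplink-per-spine wiring are assumptions for illustration; vendor port counts and prices vary):

# Boxes and inter-switch cables for a non-blocking two-level fat tree
# of k-port switches: each leaf uses k/2 ports for nodes and sends one
# uplink to each of k/2 spine switches.
def fat_tree_parts(end_ports, k):
    down = k // 2                    # node-facing ports per leaf
    leaves = -(-end_ports // down)   # ceil(end_ports / down)
    spines = down                    # one spine per leaf uplink
    cables = leaves * down           # every uplink is a trunk cable
    return leaves + spines, cables

for k in (24, 36):
    switches, cables = fat_tree_parts(288, k)
    print(f"288 ports from {k}-port switches: {switches} boxes, {cables} trunk cables")
# -> 288 ports from 24-port switches: 36 boxes, 288 trunk cables
# -> 288 ports from 36-port switches: 34 boxes, 288 trunk cables

Thirty-odd managed switches plus a few hundred trunk cables is why the single large chassis can come out roughly even on cost, and it is far simpler to cable and manage.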


-- 
Mike Davis			Technical Director
(804) 828-3885			Center for High Performance Computing
jmdavis1 at vcu.edu		Virginia Commonwealth University

"Never tell people how to do things. Tell them what to do and they will surprise you with their ingenuity."  George S. Patton

_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
