[Beowulf] Upgrading to gigabit
Timo Mechler
mechti01 at luther.edu
Fri Apr 22 13:52:12 PDT 2005
Hi Jeff,
We are planning on running some physics simulations on these for now,
but the rest is still somewhat open-ended. Would a task like this
warrant the use of gigabit? Given the price, it might almost be worth it
either way. Thanks again.
-Timo
At 10:19 AM 4/22/2005, Jeffrey B. Layton wrote:
>Timo Mechler wrote:
>
>>Hi all,
>>
>>I will be setting up a small cluster using some older servers very
>soon. They are IBM Netfinity 4000R's with 2x PIII 750 MHz, 1024 MB RAM,
>2x 9.1 GB SCSI HDD, and 2x 10/100 Mbit onboard LAN. Would we see a
>>performance increase by upgrading these to gigabit? Or would it be a
>>waste of time and resources? Thanks in advance for your help.
>
>
>Hah! I'm going to beat rgb to the punch, even though I'll be writing
>considerably less :)
>
>What application(s) do you currently run on the cluster? This
>is the proverbial $64 question. If the applications are bottlenecked
>by the network, then you could see a huge leap in performance.
>If the apps are bottlenecked by CPU performance, then you
>likely will not see a big boost (although NFS performance
>would still improve greatly with GigE).
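>A quick way to find out which side you're on is a simple
>ping-pong test between two nodes. A rough sketch (the 1 MB
>message size and 100 repetitions here are arbitrary picks,
>not magic numbers):
>
>/* pingpong.c - crude bandwidth probe between MPI ranks 0 and 1.
> * Compile with mpicc, then run across two nodes, e.g.:
> *   mpirun -np 2 ./pingpong
> */
>#include <mpi.h>
>#include <stdio.h>
>#include <stdlib.h>
>
>#define NBYTES (1 << 20)  /* 1 MB per message (arbitrary) */
>#define REPS   100        /* round trips (arbitrary) */
>
>int main(int argc, char **argv)
>{
>    int rank, i;
>    char *buf;
>    MPI_Status status;
>    double t0, t1;
>
>    MPI_Init(&argc, &argv);
>    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>
>    buf = malloc(NBYTES);  /* contents don't matter for a probe */
>    if (buf == NULL)
>        MPI_Abort(MPI_COMM_WORLD, 1);
>
>    t0 = MPI_Wtime();
>    for (i = 0; i < REPS; i++) {
>        if (rank == 0) {
>            MPI_Send(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
>            MPI_Recv(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
>                     &status);
>        } else if (rank == 1) {
>            MPI_Recv(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
>                     &status);
>            MPI_Send(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
>        }
>    }
>    t1 = MPI_Wtime();
>
>    if (rank == 0)
>        printf("effective bandwidth: %.1f MB/s\n",
>               2.0 * REPS * NBYTES / (t1 - t0) / 1e6);
>
>    free(buf);
>    MPI_Finalize();
>    return 0;
>}
>
>If that number comes out close to wire speed but your real codes
>still crawl, the network probably isn't your bottleneck.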
> How many servers do you have? The 32-bit, PCI, Intel GigE
>NICs are pretty good and inexpensive (less than $30). Plus you
>could use something like an SMC 8505T (5-port) or 8508T
>(8-port) GigE switch, which gives you jumbo frames at a
>pretty low cost (the 8508T is about $100).
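>One caveat on jumbo frames: a switch that supports them isn't
>enough, you also have to raise the MTU on each node's NIC
>(normally a one-line ifconfig change). If you'd rather do it
>from a program, here's a rough sketch using the Linux
>SIOCSIFMTU ioctl. The "eth0" name and 9000-byte MTU are just
>example values, it needs root, and not every NIC/driver will
>accept jumbo frames:
>
>/* setmtu.c - set an interface's MTU, e.g. for jumbo frames. */
>#include <stdio.h>
>#include <stdlib.h>
>#include <string.h>
>#include <unistd.h>
>#include <sys/ioctl.h>
>#include <sys/socket.h>
>#include <net/if.h>
>
>int main(int argc, char **argv)
>{
>    const char *ifname = (argc > 1) ? argv[1] : "eth0"; /* example */
>    int mtu = (argc > 2) ? atoi(argv[2]) : 9000;        /* jumbo */
>    struct ifreq ifr;
>    int fd = socket(AF_INET, SOCK_DGRAM, 0);
>
>    if (fd < 0) { perror("socket"); return 1; }
>    memset(&ifr, 0, sizeof(ifr));
>    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
>    ifr.ifr_mtu = mtu;
>    if (ioctl(fd, SIOCSIFMTU, &ifr) < 0) { /* fails without root */
>        perror("SIOCSIFMTU");
>        close(fd);
>        return 1;
>    }
>    close(fd);
>    printf("%s MTU set to %d\n", ifname, mtu);
>    return 0;
>}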
> There are other options as well. Since you have 2 FastE
>ports, you could play with an FNN (Flat Neighborhood Network;
>aggregate.org/FNN/). Or you could try channel bonding them.
> GigE is cheap enough that you could channel bond GigE
>and use something like MP_Lite
>(www.scl.ameslab.gov/Projects/MP_Lite/) with your MPI codes
>to take advantage of the bonded interface (caution: not all
>MPIs can use channel-bonded GigE, but MP_Lite claims that
>it can. MP_Lite implements only a subset of MPI, but it
>captures the primary functions that most MPI applications use).
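>To give you a feel for what "subset of MPI" means, here's a
>sketch that sticks to the core calls most applications lean on
>(Init, Comm_rank, Comm_size, Send, Recv, Finalize). I haven't
>checked MP_Lite's exact coverage, so verify against its docs
>before counting on anything beyond these:
>
>/* ring.c - pass a token around all ranks using core MPI calls
> * only. Compile with mpicc; run with mpirun -np <N> ./ring
> */
>#include <mpi.h>
>#include <stdio.h>
>
>int main(int argc, char **argv)
>{
>    int rank, size, token;
>    MPI_Status status;
>
>    MPI_Init(&argc, &argv);
>    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>    MPI_Comm_size(MPI_COMM_WORLD, &size);
>
>    if (size < 2) {  /* need at least two ranks for a ring */
>        MPI_Finalize();
>        return 0;
>    }
>
>    if (rank == 0) {
>        token = 42;  /* arbitrary payload */
>        MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
>        MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD,
>                 &status);
>        printf("token made it around %d ranks\n", size);
>    } else {
>        MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
>                 &status);
>        MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0,
>                 MPI_COMM_WORLD);
>    }
>
>    MPI_Finalize();
>    return 0;
>}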
>
>Enjoy! (and don't be afraid to ask lots of questions here).
>
>Jeff