[Beowulf] anyone using 10gbaseT?
Thomas H Dr Pierce
TPierce at rohmhaas.com
Wed Feb 21 07:45:03 PST 2007
Dear Mark and the List,
The head node has about a terabyte of RAID 10 storage, with home directories and
application directories NFS-mounted to the cluster. I am still tuning
NFS (16 daemons now) and, of course, the head node has a 1Gb link to my
intranet for remote cluster access.
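For reference, raising the NFS daemon count on a Linux server of that era typically looked something like this (a sketch assuming a Red Hat-style layout; the file location and variable name vary by distribution):

```shell
# Set the number of kernel NFS server threads at runtime
# (takes effect immediately; not persistent across reboots).
rpc.nfsd 16

# Verify the current thread count.
cat /proc/fs/nfsd/threads

# To make it persist on Red Hat-style systems, edit /etc/sysconfig/nfs:
#   RPCNFSDCOUNT=16
```

The right count is workload-dependent; watching `/proc/net/rpc/nfsd` for threads that are all busy is the usual way to decide whether to add more.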
The 10Gb link to the switch uses cx4 cable. It did not cost too much and
I only needed two meters of it.
10 Gb is very nice and makes me lust for inexpensive low latency 10Gb
switches... but I'll wait for the marketplace to develop.
Engineering calculations (Fluent, Abaqus) can fill the 10 Gb link for 5 to
15 minutes. But that is better than the 20-40 minutes they used to take. I
suspect they are checkpointing and restarting their MPI iterations.
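Some back-of-the-envelope arithmetic (my own illustration, not from the post) shows why checkpoint bursts of that length are plausible against a terabyte head node: a saturated 10 Gb/s link moves roughly 1.25 GB/s.

```python
# How much data a saturated 10 Gb/s link moves in 5-15 minutes.
# Durations are taken from the post; the line-rate assumption is mine.
link_gbps = 10
bytes_per_sec = link_gbps * 1e9 / 8  # 1.25e9 bytes/s

for minutes in (5, 15):
    gigabytes = bytes_per_sec * minutes * 60 / 1e9
    print(f"{minutes} min at line rate ~ {gigabytes:.0f} GB")
```

That works out to roughly 375 GB to 1.1 TB per burst, which is on the order of the head node's total RAID capacity, so checkpoint traffic filling the link for minutes at a time is quite believable.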
------
Sincerely,
Tom Pierce
Mark Hahn <hahn at mcmaster.ca>
02/21/2007 09:48 AM
To
Thomas H Dr Pierce <TPierce at rohmhaas.com>
cc
Beowulf Mailing List <beowulf at beowulf.org>
Subject
Re: [Beowulf] anyone using 10gbaseT?
> I have been using the Myricom 10Gb card in my NFS server (head node) for
> the Beowulf cluster, and it works well. I have an inexpensive 3Com switch
> (3870) with 48 1Gb ports that has a 10Gb port in it, and I connect the
> NFS server to that port. The switch does have small fans in it.
that sounds like a smart, strategic use. cx4, I guess? is the head
node configured with a pretty hefty raid (not that saturating a single
Gb link is that hard...)?
thanks, mark hahn.