Dear Mark and the List,

The head node has about a terabyte of RAID10 storage, with home directories and application directories NFS-mounted to the cluster. I am still tuning NFS (16 daemons now) and, of course, the head node has a 1 Gb link to my intranet for remote cluster access.
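
For the curious, the daemon count is just the knfsd thread count. A minimal sketch of the knobs, assuming a Red Hat-style server (Debian keeps the setting in /etc/default/nfs-kernel-server instead):

    # /etc/sysconfig/nfs -- picked up at service start
    RPCNFSDCOUNT=16

    # or resize the thread pool on a live server
    rpc.nfsd 16

    # the "th" line histograms how busy the thread pool is;
    # if the high buckets keep growing, add more daemons
    cat /proc/net/rpc/nfsd

That histogram is what tells you whether 16 is enough.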

The 10 Gb link to the switch uses CX4 cable. It did not cost too much, and I only needed two meters of it.

10 Gb is very nice and makes me lust for inexpensive, low-latency 10 Gb switches... but I'll wait for the marketplace to develop.

Engineering calculations (Fluent, Abaqus) can fill the 10 Gb link for 5 to 15 minutes, but that is better than the 20-40 minutes they used to take. I suspect they are checkpointing and restarting their MPI iterations.
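
If anyone wants to watch a burst like that, sar (from the sysstat package) samples the per-interface byte counters; the interface name is whatever the Myricom card shows up as, eth2 here being only a guess:

    # report network throughput every 5 seconds; look for the
    # 10 Gb interface pegged during a checkpoint burst
    sar -n DEV 5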
------
Sincerely,

Tom Pierce

Mark Hahn <hahn@mcmaster.ca> wrote on 02/21/2007 09:48 AM:
To: Thomas H Dr Pierce <TPierce@rohmhaas.com>
cc: Beowulf Mailing List <beowulf@beowulf.org>
Subject: Re: [Beowulf] anyone using 10gbaseT?

> I have been using the MYRICOM 10Gb card in my NFS server (head node) for
> the Beowulf cluster. And it works well. I have a inexpensive 3Com switch
> (3870) with 48 1Gb ports that has a 10Gb port in it and I connect the
> NFS server to that port. The switch does have small fans in it.

that sounds like a smart, strategic use. cx4, I guess. is the head
node configured with a pretty hefty raid (not that saturating a single
GB link is that hard...)

thanks, mark hahn.