[Beowulf] copying data between clusters
Joe Landman
landman at scalableinformatics.com
Fri Mar 5 08:27:22 PST 2010
kyron wrote:
> Given that I haven't seen single 20TB drives out there yet, I doubt that's
> the case. I wouldn't throw in NFS as a limiting factor (just yet), as I have
I was commenting on the 30 MB/s figure, not on whether or not he had 20TB
attached to it (though if he did ... that would be painful).
> been able to sustain 250 MB/s data transfer rates (2x GigE using
> channel bonding). And this figure is without jumbo frames, so I do have some
> protocol overhead loss. The sending server is a PERC 5/i RAID with
> 4*300GB*15kRPM drives, while the receiving end, well... was loading onto RAM ;)
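For what it's worth, here is a rough ceiling for bonded 2x GigE with and
without jumbo frames, using typical Ethernet/IPv4/TCP header sizes (a
back-of-the-envelope Python sketch, not measurements from that setup):

  ETH_OVERHEAD = 38      # preamble 8 + header 14 + FCS 4 + inter-frame gap 12
  IP_TCP_HEADERS = 52    # 20 (IPv4) + 32 (TCP w/ timestamps) -- assumed values

  def payload_rate(link_gbps, mtu):
      wire = link_gbps * 1e9 / 8     # bytes/s on the wire per link
      return wire * (mtu - IP_TCP_HEADERS) / (mtu + ETH_OVERHEAD) / 1e6  # MB/s

  for mtu in (1500, 9000):
      print("2x GigE, MTU %d: ~%.0f MB/s TCP payload" % (mtu, 2 * payload_rate(1.0, mtu)))
  # MTU 1500 -> ~235 MB/s, MTU 9000 -> ~248 MB/s (theoretical ceilings)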
We are getting sustained 1+ GB/s over 10GbE with NFS on a per-unit basis.
For IB it's somewhat faster. The backing store is able to handle this
easily.
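As a quick sanity check on that figure (assuming "1+ GB/s" means roughly
1000 MB/s):

  line_rate = 10e9 / 8 / 1e6    # 10GbE raw line rate: 1250 MB/s
  sustained = 1000.0            # the 1+ GB/s figure above, taken as 1000 MB/s
  print("1 GB/s over 10GbE is ~%.0f%% of raw line rate" % (100 * sustained / line_rate))
  # -> roughly 80%, i.e. not far off wire speed once protocol overhead is counted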
I think Michael may be thinking about the performance of a single node's
GbE or IDE link rather than the read/write performance necessary to
populate 20+ TB of data and move it around.
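To put numbers on why the slower rates would be painful, here is a small
sketch of how long 20 TB takes at the rates mentioned in this thread
(decimal TB assumed):

  def hours_to_move(tb, mb_per_s):
      # decimal units: 1 TB = 1e12 bytes, rates in MB/s
      return tb * 1e12 / (mb_per_s * 1e6) / 3600.0

  for rate in (30, 250, 1000):  # MB/s: the GbE/IDE, bonded 2x GigE, and 10GbE NFS figures
      print("20 TB at %4d MB/s: ~%.1f hours" % (rate, hours_to_move(20, rate)))
  # 30 MB/s -> ~185 hours (almost 8 days); 1000 MB/s -> ~5.6 hours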
> Eric Thibodeau
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics, Inc.
email: landman at scalableinformatics.com
web : http://scalableinformatics.com
http://scalableinformatics.com/jackrabbit
phone: +1 734 786 8423 x121
fax : +1 866 888 3112
cell : +1 734 612 4615