[Beowulf] scaling / file serving
Lombard, David N
david.n.lombard at intel.com
Thu Jun 10 15:56:05 PDT 2004
From: Joe Landman
>
>On Wed, 2004-06-09 at 20:02, Patrice Seyed wrote:
>> Hi,
>>
>> A current cluster with 15 nodes / 30 processors mainly used for batch
>> computing has one head/management node that maintains scheduling
>> services as well as home directories for users. The cluster is due for
>> an upgrade that will increase the number of compute nodes to about 100.
>> I'm considering breaking out one of the compute nodes, adding disks,
>> and making it a storage/file server node.
>
[deletia]
>
>The other issue is local disk. There are some folks absolutely
>horrified at the prospect of a cluster node having a local disk. Makes
>management harder. Then again, for each IDE channel and reasonably
>modern disk, you can get 40-50 MB/s of read and about 33 MB/s write
>performance. So if you have a nice RAID0 stripe across 2 different IDE
>channels (remember, cluster vendors: *different* channels), you can pull
>80+ MB/s reads and 60+ MB/s writes per node (one recent IBM 325 Opteron
>based system I put together hit 120 MB/s sustained reads on a large
>Abaqus job, and about 90 MB/s sustained writes). So if you can set your
>scratch to run off of the local RAID0, you can get some serious
>performance versus a network based file system. Of course some folks
>would prefer to spend an extra $500 per node on 15k RPM SCSI to get
...
>60 MB/s on writes and 80 MB/s on reads.
Some apps do quite well with {P,S}ATA RAID; others choke on it. As
always, it depends. Also, I haven't seen 15k RPM drives provide better
bandwidth than 10k RPM drives, but...
Also, sustained (file size >> memory size) SCSI or FC numbers 3x to 6x or
more above those are reasonable expectations for RAID0 with the right
(application-dependent) I/O size, filesystem config, and hardware. This
isn't cheap, but sometimes a requirement is a requirement.
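For what it's worth, a crude way to sanity-check numbers like those on a
given node is to stream a file well above physical RAM through the
filesystem and vary the block size. A minimal sketch in Python follows;
the scratch path, file size, and block sizes are placeholders rather than
anything measured here, and it's no substitute for running the actual
application:

# iotest.py -- minimal sequential I/O sketch. All paths and sizes are
# placeholders; use a file well above physical RAM so the page cache
# doesn't flatter the results, and try several block sizes, since the
# numbers are quite sensitive to the I/O size the application uses.
import os, sys, time

def stream(path, total_bytes, block_bytes):
    buf = b'\0' * block_bytes
    # sequential write, fsync'd so the timing includes getting data to disk
    t0 = time.time()
    with open(path, 'wb') as f:
        written = 0
        while written < total_bytes:
            f.write(buf)
            written += block_bytes
        f.flush()
        os.fsync(f.fileno())
    write_mbps = (total_bytes / 1e6) / (time.time() - t0)
    # sequential read back; with a file well above RAM most of it comes
    # off the disk rather than the page cache
    t0 = time.time()
    with open(path, 'rb') as f:
        while f.read(block_bytes):
            pass
    read_mbps = (total_bytes / 1e6) / (time.time() - t0)
    os.unlink(path)
    return write_mbps, read_mbps

if __name__ == '__main__':
    target = sys.argv[1] if len(sys.argv) > 1 else '/scratch/iotest.dat'
    for block in (64 * 1024, 1024 * 1024, 8 * 1024 * 1024):
        w, r = stream(target, 4 * 1024**3, block)
        print('%8d-byte blocks: write %6.1f MB/s  read %6.1f MB/s'
              % (block, w, r))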
Relative to the original request about NFS: I was surprised by this, but
I've listened to sysadmins rail against ATA and NFS, claiming poor
performance. I have no actual experience supporting that claim, nor were
they able to provide specifics, so it's only hearsay. But, once again:
test, don't assume.
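In that spirit, rather than arguing about ATA and NFS in the abstract,
point the same sort of streaming test (or, better, the application
itself) at both the local scratch area and an NFS mount on the same node
and compare. A trivial sketch, again with made-up mount points and
assuming the stream() helper above:

# Compare local RAID0 scratch against an NFS-mounted home directory.
# Paths are examples only; assumes the stream() helper sketched above.
for label, path in (('local scratch', '/scratch/iotest.dat'),
                    ('NFS home',      '/home/someuser/iotest.dat')):
    w, r = stream(path, 4 * 1024**3, 1024 * 1024)
    print('%-13s write %6.1f MB/s  read %6.1f MB/s' % (label, w, r))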
--
David N. Lombard
My comments represent my opinions, not those of Intel Corporation.