[Beowulf] Re: High performance storage with GbE?
Guy Coates
gmpc at sanger.ac.uk
Fri Dec 15 02:36:29 PST 2006
Steve Cousins wrote:
> Again, if anyone can provide a real-world layout of what they are using,
> along with real-world speeds at the node, that would help out tremendously.
I can give you the details of our current Lustre setup (HP SFS v2.1.1).
We have 10 Lustre servers (2 MDS servers and 8 OSTs). Each server is a dual
3.20 GHz Xeon with 4 GB RAM, and has a single SFS20 SCSI storage array
attached to it. Each array has 12 SAS disks in a RAID6 configuration.
(The arrays are actually dual-homed, so if a server fails, its storage and
OST/MDS service can be failed over to another server.)
We have 560 clients. The interconnect is GigE at the edge (single GigE to the
client) and 2x10GigE at the core.
Large file read/write from a single client can fill the GigE pipe quite happily.
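(If you want to reproduce that kind of single-stream test on your own nodes,
something like the sketch below will do. This is just an illustration, not the
tool we used, and the /lustre/scratch path is made up.)

    import os, time

    PATH = "/lustre/scratch/throughput.tmp"  # hypothetical Lustre mount point
    BLOCK = 1024 * 1024                      # 1 Mbyte per write
    COUNT = 4096                             # 4 Gbytes total, enough to defeat client caching

    buf = b"x" * BLOCK
    t0 = time.time()
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    for _ in range(COUNT):
        os.write(fd, buf)
    os.fsync(fd)                             # force data to the servers, not just page cache
    os.close(fd)
    elapsed = time.time() - t0
    os.unlink(PATH)
    print("%.1f Mbytes/s" % (COUNT * BLOCK / elapsed / 1e6))

A saturated GigE link should show up as roughly 110-115 Mbytes/s.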
Aggregate performance is also excellent. We have achieved 1.5 Gbytes/s (12
Gbits/s) in production with real code. The limiting factor appears to be the
SCSI controllers, which max out at ~170 Mbytes/second.
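(Back of the envelope, those numbers hang together: 8 OSTs x ~170 Mbytes/s per
controller is ~1.4 Gbytes/s, which is in the same ballpark as the 1.5 Gbytes/s
we measured.)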
As has previously been mentioned, small-file/metadata performance is not
great. A single client can do ~500 file creates per second, ~1000 deletes per
second and ~1000 stats per second. Performance does at least scale as you add
clients: the MDS itself can handle ~60,000 stats per second with multiple
clients running in parallel.
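(Again, if you want numbers for your own filesystem, a trivial metadata
microbenchmark along these lines will get you close. The test directory is
made up, and you would want to run it from several clients at once to see the
aggregate MDS rates rather than the single-client ones.)

    import os, time

    DIR = "/lustre/scratch/mdtest"           # hypothetical test directory
    N = 10000                                # files per pass

    os.mkdir(DIR)

    t0 = time.time()
    for i in range(N):
        open("%s/f%d" % (DIR, i), "w").close()
    print("creates/s: %d" % (N / (time.time() - t0)))

    t0 = time.time()
    for i in range(N):
        os.stat("%s/f%d" % (DIR, i))
    print("stats/s:   %d" % (N / (time.time() - t0)))

    t0 = time.time()
    for i in range(N):
        os.unlink("%s/f%d" % (DIR, i))
    print("deletes/s: %d" % (N / (time.time() - t0)))

    os.rmdir(DIR)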
More gory detail here:
http://www.sanger.ac.uk/Users/gmpc/presentations/SC06-lustre-BOF.pdf
Cheers,
Guy
--
Dr. Guy Coates, Informatics System Group
The Wellcome Trust Sanger Institute, Hinxton, Cambridge, CB10 1HH, UK
Tel: +44 (0)1223 834244 x 6925
Fax: +44 (0)1223 496802