[Beowulf] RE: IOZONE

Imran Khan Imran at workstationsuk.co.uk
Tue Jan 17 01:45:43 PST 2006


I thought you might be interested in these IOZONE numbers for TerraGrid.

We have recently been asked by a global investment bank to run some
tests over Infiniband to compare performance with the Texas Memory
Systems fibrechannel-connected Solid State Disc. They wanted to use the
device for end-of-month trade consolidation and needed a sustained
10,000 IOPS.  The TMS box sustained 5,000 IOPS.

I have included below the output of IOZONE running on 2 x TerraGrid
bricks over Infiniband.

Two TerraGrid storage bricks writing to magnetic disc (not even the
SSD) sustain 68,000 IOPS.
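The post does not give the exact IOZONE command line, so the sketch below is illustrative only; the flags shown are real iozone options, but the record size, file size, and mount path are assumptions:

```shell
# Hedged sketch of the kind of IOZONE run behind numbers like these.
# All flags are real iozone options:
#   -I        open files with O_DIRECT (bypass the client page cache)
#   -i 0 -i 2 run the write/rewrite and random read/write tests
#   -r 4k     4 KB record size
#   -s 1g     1 GB test file
#   -O        report results in operations per second (IOPS)
# The mount path is hypothetical.
IOZONE_CMD="iozone -I -i 0 -i 2 -r 4k -s 1g -O -f /mnt/terragrid/testfile"

# Run "vmstat 1" alongside to watch CPU utilization, as in the
# posted results:
MONITOR_CMD="vmstat 1"

echo "$IOZONE_CMD"
echo "$MONITOR_CMD"
```

Swap the `echo`s for direct invocations to reproduce a run like this on your own mount point.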

10Gig Ethernet has similar performance to Infiniband.

Below are the latest performance numbers collected using SDP over IB.
The file also contains the output of "vmstat 1", which shows that CPU
utilization stays around 25% even while the extremely high IOP rate
(68K) is being maintained. Please note the following:

a) The test configuration consisted of:

1 x Client:		Dual Opterons, 2GB RAM, IB HCA
2 x Servers:	Dual Opterons, 8GB RAM, IB HCA, 3 x LSI RAID cards
		(each with 4 disks)

b) Note that we did *NOT* actually set up a RAMdisk on the servers,
choosing instead to let the server-side cache handle the requests. This
means that the configuration tested is non-volatile (the data is disk
resident) and will persist across reboots, power-cycling, etc.

c) A key benefit is that you can add as many servers as you wish, and
have multiple clients operating on the same "global RAMdisk" while
scaling up system-wide aggregate IOP rates. This will allow you to bring
many CPUs to bear on your pool of storage via a simple POSIX-compliant
file system interface.

d) There is a study done by Sandia National Laboratories showing that
10-GigE has even better latency and throughput characteristics than
SDP over IB, so there is room for further improvement beyond the
numbers presented here.
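Point c) above is the key one: clients see the pooled storage through nothing more exotic than ordinary POSIX file calls. A minimal sketch of what "measuring IOPS through the POSIX interface" means, using a local scratch file in place of a TerraGrid mount (file size and read count are illustrative, not the benchmark's settings):

```python
import os
import random
import tempfile
import time

# Measure 4 KB random-read IOPS through the ordinary POSIX file
# interface -- the same interface a networked mount would expose.
RECORD = 4096                        # 4 KB record size, as in IOZONE runs
FILE_SIZE = 4 * 1024 * 1024          # 4 MB scratch file (illustrative)
READS = 1000                         # number of random reads (illustrative)

path = os.path.join(tempfile.mkdtemp(), "scratch")
with open(path, "wb") as f:
    f.write(os.urandom(FILE_SIZE))

fd = os.open(path, os.O_RDONLY)      # plain POSIX open
start = time.perf_counter()
for _ in range(READS):
    offset = random.randrange(FILE_SIZE // RECORD) * RECORD
    os.pread(fd, RECORD, offset)     # one 4 KB random read
elapsed = time.perf_counter() - start
os.close(fd)

iops = READS / elapsed
print(f"{iops:.0f} random-read IOPS")
```

On a local file this mostly exercises the page cache, so the number it prints is not comparable to the networked figures above; the point is only that no special API is needed to drive such a store.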

