[Beowulf] file IO benchmark
Chris Samuel
csamuel at vpac.org
Thu Nov 24 15:23:37 PST 2005
On Friday 25 November 2005 00:49, Joachim Worringen wrote:
> Well, the problem is that this is single-process, single-file benchmark,
> which is often very different from the I/O that a parallel application
> performs.
Correct, but IMHO you'll need to get those sorts of baseline figures from
Bonnie++ before going to parallel benchmarks in order to know whether any
problems you may see from the parallel tests are the result of something
unique to parallel access or something more fundamental to the storage
architecture.
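For reference, the sequential-block phase of what Bonnie++ measures amounts to something like the following toy Python sketch (not a substitute for Bonnie++, which also tests per-character I/O, rewrites, seeks and file-creation rates, and which insists on a file at least twice RAM so the page cache can't hide the disk; the sizes here are purely illustrative):

```python
import os
import tempfile
import time

def sequential_throughput(size_mb=64, block_kb=1024):
    """Rough single-stream sequential write/read throughput in MB/s.

    Toy stand-in for Bonnie++'s block-I/O phases. For a meaningful
    baseline, size_mb should be at least 2x physical RAM.
    """
    block = b"\0" * (block_kb * 1024)
    nblocks = (size_mb * 1024) // block_kb

    fd, path = tempfile.mkstemp()
    try:
        # Sequential block write, forced to stable storage at the end.
        t0 = time.time()
        with os.fdopen(fd, "wb") as f:
            for _ in range(nblocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())
        write_mbps = size_mb / (time.time() - t0)

        # Sequential block read of the same file.
        t0 = time.time()
        with open(path, "rb") as f:
            while f.read(block_kb * 1024):
                pass
        read_mbps = size_mb / (time.time() - t0)

        return write_mbps, read_mbps
    finally:
        os.unlink(path)
```

Numbers like these from each node, and from the storage server locally, are the baseline against which any parallel-I/O result can be sanity-checked.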
It's kind of like debugging network protocols: start at the bottom layer and work up.
For instance, when we had to move from RH7.3 to RHEL3 for our storage server
(some time ago) we found we were getting all sorts of odd NFS problems:
people getting stale NFS file handles, not being able to check code out
through Subversion, etc. Bonnie++ helped us track that down to the fact that
RHEL3 had really, really bad ext3 I/O performance [1] and all our NFS daemons
were getting stuck in device waits; raising the number of nfsd daemons to 70
or more just resulted in 70 or so stuck with one or two free, and an
impressive load average.
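For anyone wanting to check for the same symptom, the knfsd thread count and the stuck-daemon state are both easy to inspect (paths and variable names here are the usual ones on RHEL-era systems; check your own distribution):

```shell
# On RHEL-style systems the knfsd thread count is typically set in
# /etc/sysconfig/nfs, read by the nfs init script at startup:
RPCNFSDCOUNT=64

# It can also be changed on a running server:
#   rpc.nfsd 64
#
# Daemons stuck in uninterruptible (device) wait show state "D" in ps:
#   ps axo stat,comm | grep nfsd
```

If most of the nfsd processes sit in "D" state regardless of how many you start, the bottleneck is below NFS, in the filesystem or block layer, which is exactly what the Bonnie++ numbers confirmed for us.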
When we switched to using Fedora Core 3 and XFS our NFS problems evaporated
and we had great I/O performance again.
[1] - This appeared to be due to Red Hat's backporting of 2.6 features, as
people on Red Hat's Bugzilla reported that their problems went away when they
booted with a Red Hat 9 kernel instead of a RHEL3 kernel (unfortunately that
wasn't an option for us at the time).
cheers!
Chris
--
Christopher Samuel - (03)9925 4751 - VPAC Deputy Systems Manager
Victorian Partnership for Advanced Computing http://www.vpac.org/
Bldg 91, 110 Victoria Street, Carlton South, VIC 3053, Australia