Need comments about cluster file systems
Jim Lux
James.P.Lux at jpl.nasa.gov
Fri Nov 15 09:55:19 PST 2002
While the referenced writeup does provide some quantitative data (all too
hard to come by) and describes the test methodology (even harder to come
by), it doesn't really offer any summary of "what does this all mean?"
beyond a comment that the system is still being tuned, etc.
Especially with all the pretty graphs, it would be nice to provide an
interpretation. Some of the performance graphs show a maximum, and other
than providing a calculation of metrics scaled to "per node", no comment is
made as to why the maximum might exist, what might shift the peak, etc., or
how one could generalize these results (bearing in mind that YMMV for all
benchmark'y kinds of things). For instance, is the maximum in performance a
manifestation of some optimal I/O:compute node ratio, depending on the
relative performances (i.e., the number of disk drives) or on interconnect
latency/bandwidth?
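
By way of a back-of-the-envelope sketch (my own toy model, not anything
from the writeup, and all the numbers below are made up for illustration):
if you treat the aggregate bandwidth as the minimum of the disk-side and
network-side capacities, and charge a small per-client coordination
overhead, aggregate throughput naturally peaks at some client count and
then falls off:

    # Toy bottleneck model (illustrative numbers only, not measured data).
    # Aggregate throughput = min(disk-side capacity, network-side capacity),
    # discounted by a per-client coordination overhead.

    def aggregate_mb_s(io_nodes, compute_nodes,
                       disk_mb_s=35.0,    # per-I/O-node disk bandwidth (assumed)
                       net_mb_s=100.0,    # per-link network bandwidth (assumed)
                       overhead=0.01):    # fractional cost per client (assumed)
        disk_side = io_nodes * disk_mb_s
        net_side = min(io_nodes, compute_nodes) * net_mb_s
        raw = min(disk_side, net_side)
        # Synchronization/metadata overhead grows with the client count.
        return raw / (1.0 + overhead * compute_nodes)

    if __name__ == "__main__":
        io_nodes = 8
        for clients in (2, 4, 8, 16, 32, 64):
            agg = aggregate_mb_s(io_nodes, clients)
            print(f"{clients:3d} clients: {agg:7.1f} MB/s aggregate, "
                  f"{agg / clients:6.2f} MB/s per node")

In this toy, the peak shifts right when you add disks (raise disk_mb_s) and
flattens when the overhead term dominates, which is exactly the sort of
interpretation the writeup could have offered alongside its graphs.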
A useful extension of this study (which, by the way, I DO find quite
interesting) would be to examine the effect of varying the interconnect
performance. There was a bit of a parametric study of changing disk
performance (multiple disks per I/O node, shown in Figure 7), but it
presented the fairly obvious conclusion that more disks are better and
didn't really address why there's a peak that shifts. Is there some
bottleneck that gets clogged (in an ALOHA/CSMA or virtual-memory
page-thrashing sort of way), and is that bottleneck the CPU, the I/O
interface to the disk, the internal overhead of keeping everything
synchronized, etc.?
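
To make the ALOHA analogy concrete (again, just my illustration, not the
writeup's analysis): in a slotted-ALOHA channel the throughput is
S = G * e^(-G), which rises with offered load G, peaks at G = 1
(S = 1/e, about 0.368), and then collapses as collisions dominate. A shared
I/O bottleneck that saturates under contention can behave the same way:

    import math

    # Slotted-ALOHA throughput: S = G * exp(-G), where G is the offered
    # load in frames per slot. Classic result; peak at G = 1, S = 1/e.
    for g in (0.25, 0.5, 1.0, 2.0, 4.0):
        s = g * math.exp(-g)
        print(f"offered load G = {g:4.2f} -> throughput S = {s:.3f}")

Plotting measured throughput against offered load (client count, or
requests in flight) would show whether the observed peak is this kind of
contention collapse or just a simple capacity ceiling.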
----- Original Message -----
From: "Kumaran Rajaram" <kums at CS.MsState.EDU>
To: "Jeff Layton" <jeffrey.b.layton at lmco.com>
Cc: <beowulf at beowulf.org>; "Philippe Blaise - GRENOBLE" <pblaise at cea.fr>
Sent: Friday, November 15, 2002 8:21 AM
Subject: Re: Need comments about cluster file systems
>
> More information regarding the PVFS performance can be obtained from the
> site below:
>
> http://www.dell.com/us/en/esg/topics/power_ps4q02-kashyap.htm
>
> It is interesting to note that the read/write performance scales well for
> 16 I/O nodes.