[Beowulf] GPFS on Linux (x86)
Kumaran Rajaram
krajaram at lnxi.com
Wed Sep 13 21:36:57 PDT 2006
>>> Craig Tierney <ctierney at hypermall.net> 9/13/2006 5:21 PM >>>
> I really don't see many people discussing the good and bad things
> about the current crop of distributed/shared filesystems. Do
> they sign a contract saying they can't disclose any information about
> their operation?
Well, there are a lot of metrics that go into the selection of a
file-system for a particular cluster environment. Metrics include
performance (data and metadata), scalability in terms of performance and
capacity, availability/redundancy, management/problem diagnostics, ease
of installation/upgrade, OS/interconnect/hardware/storage-device
support, price per GB, support structure, backup/HSM support, and
whether the file-system is open-source. File-system A might be better
than File-system B on a particular metric, but the decision depends on
the overall score (based on the weights assigned to each metric).
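To make the weighted-scoring idea concrete, here is a minimal sketch in
Python; the metric names, weights, and per-file-system scores are purely
hypothetical illustrations, not measurements of any real product:

    # Hypothetical weighted scoring of candidate file-systems.
    # Weights sum to 1.0; per-metric scores are on a 0-10 scale.
    weights = {
        "performance": 0.30,
        "scalability": 0.20,
        "availability": 0.20,
        "manageability": 0.15,
        "price_per_gb": 0.15,
    }

    # Made-up scores for two hypothetical candidates.
    scores = {
        "FS-A": {"performance": 9, "scalability": 8, "availability": 7,
                 "manageability": 6, "price_per_gb": 5},
        "FS-B": {"performance": 7, "scalability": 7, "availability": 8,
                 "manageability": 8, "price_per_gb": 8},
    }

    for fs, s in scores.items():
        total = sum(weights[m] * s[m] for m in weights)
        print(f"{fs}: weighted score = {total:.2f}")

Changing the weights (say, emphasizing price per GB over raw
performance) can flip which candidate comes out ahead, which is the
point: the "best" file-system depends on how a given site weights the
metrics.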
Also, based on day-to-day experience with a particular file-system
(after the initial selection), the overall score can change over time.
Cluster/parallel file-systems interact with a lot of components (the
client component on the compute nodes, the cluster interconnect, the
server component on the I/O nodes, the kernel, and the SAN or back-end
storage device). Faults or bottlenecks in these low-level components
get exposed at the file-system layer, which can mislead the user into
thinking the file-system is flaky when it is not. Every component in
the storage stack requires a careful selection process.
Cheers,
-Kums