[Beowulf] Putting /home on Lustre or GPFS
Christopher Samuel
samuel at unimelb.edu.au
Tue Dec 23 15:33:13 PST 2014
On 24/12/14 04:12, Prentice Bisbal wrote:
> I have limited experience managing parallel filesystems like GPFS or
> Lustre. I was discussing putting /home and /usr/local for my cluster on
> a GPFS or Lustre filesystem, in addition to using it just for /scratch.
We've been using GPFS for project space (which includes our home
directories) as well as our scratch and HSM filesystems since 2010 and
haven't had any major issues. We've done upgrades of GPFS over that
time; the ability to do rolling upgrades is really nice (plus we have
redundant pairs of NSD servers, so we can do hardware maintenance).
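For what it's worth, a rolling upgrade with redundant NSD server pairs
boils down to draining one server at a time, roughly like this (a
minimal sketch only; the node name nsd01 is made up):

  # stop GPFS on one NSD server of a redundant pair
  mmshutdown -N nsd01
  # ...upgrade the GPFS packages on nsd01, then bring it back...
  mmstartup -N nsd01
  # confirm the daemon is active again before touching its partner
  mmgetstate -N nsd01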
Basically:
Project space:
* Uses filesets with quotas to limit a project's overall usage
* Uses GPFS snapshots both for easy file recovery and as a target for
TSM backups (see the sketch after this list)
* Dedicated LUNs on IB connected DDN SFA10K with 1TB SATA drives
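Setting one of these up looks roughly like the following (a sketch only:
the filesystem name gpfs1, the fileset name proj42 and the quota values
are invented, and the exact mmsetquota syntax varies between GPFS
releases; older ones use the interactive mmedquota -j instead):

  # create a fileset for the project and link it into the namespace
  mmcrfileset gpfs1 proj42
  mmlinkfileset gpfs1 proj42 -J /gpfs1/projects/proj42
  # cap the project's overall usage with a fileset block quota
  mmsetquota gpfs1:proj42 --block 5T:5T
  # snapshot for easy file recovery and as a stable target for TSM
  mmcrsnapshot gpfs1 backup-20141223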
Scratch space:
* Any project that requests scratch space gets its own group-writable
fileset (without quotas) so we can easily track space usage (see below).
* All LUNs on IB connected DDN SFA10K with 900GB SAS drives
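Tracking usage without enforcing limits can be done per fileset, e.g.
(a sketch; "scratch" is an invented filesystem name, and mmrepquota
only reports numbers if quota accounting is enabled on the filesystem):

  # list the filesets and their junction paths
  mmlsfileset scratch -L
  # report per-fileset block usage (accounting on, limits left unset)
  mmrepquota -j scratch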
HSM space:
* Uses filesets without quotas, except when a project exceeds its
allocated amount of tape+disk, in which case we impose an immediate cap
until they tidy up (see the sketch after this list)
* Dedicated LUNs on IB connected DDN SFA10K with 1TB SATA drives (same
controllers as project space)
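Imposing and later lifting such a cap could look like this (again a
sketch, assuming GPFS 4.1-era mmsetquota syntax; the names and sizes
are made up):

  # clamp the fileset at its current allocation until they tidy up
  mmsetquota hsm:proj42 --block 20T:20T
  # remove the cap again afterwards (0 means no limit)
  mmsetquota hsm:proj42 --block 0:0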
We kept a few LUNs up our sleeves on the SATA SFA10K, just in case.
All the best,
Chris
--
Christopher Samuel Senior Systems Administrator
VLSCI - Victorian Life Sciences Computation Initiative
Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545
http://www.vlsci.org.au/ http://twitter.com/vlsci