[Beowulf] (no subject)
Mark Hahn
hahn at mcmaster.ca
Fri Feb 16 14:17:57 PST 2007
> not buy a tape drive for backups. Instead, I've got a jury-rigged backup
tapes suck. I acknowledge that this is partly a matter of taste,
experience and history, but they really do have some undesirable properties.
> scheme. The node that serves the home directories via NFS runs a nightly tar
> job (through cron),
> root at server> tar cf home_backup.tar ./home
> root at server> mv home_backup.tar /data/backups/
>
> where /data/backups is a folder that's shared (via NFS) across the cluster.
> The actual backup then occurs when the other machines in the cluster (via
> cron) copy home_backup.tar to a private (root-access-only) local directory.
>
> root at client> cp /mnt/server-data/backups/home_backup.tar /private_data/
>
> where "/mnt/server-data/backups/" is where the server's "/data/backups/" is
> mounted, and where /private_data/ is a folder on client's local disk.
did you consider just doing something like:
root at client> ssh -i backupkey server tar cf - /home | \
    gzip > /private_data/home_backup.`date +%a`.gz
I find that /home contents tend to be compressible, and I particularly
like fewer "moving parts". the date +%a suffix also buys you a rotating
seven-day window of backups. using single-use ssh keys is also a nice trick.
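
for the curious, a minimal sketch of the single-use key idea, using
OpenSSH's forced-command restriction (key name and comment are my own
placeholders, not anything from the setup above):

root at client> ssh-keygen -t rsa -N '' -f /root/.ssh/backupkey

# in /root/.ssh/authorized_keys on server, pin that key to one command,
# so it can only ever stream the backup, nothing else:
command="tar cf - /home",no-pty,no-port-forwarding ssh-rsa AAAA... backup

with that entry in place, whatever the client asks to run, the server
only ever executes the tar; the key is useless for interactive logins.
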
> large (~4GB). When I try the cp command on client, only 142MB of the 4.2GB
> is copied over (this is repeatable - not a random error, and always about
> 142MB).
might it actually be sizeof(tar)-2^32? that is, someone's using a u32
for a file size or offset? this sort of thing was pretty common years ago.
(isn't scientific linux extremely "stable" in the sense of "old versions"?)
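
a quick back-of-the-envelope check: 142MB is exactly what a u32 would
report if the real tar were 4238 MiB (a size I've assumed purely to make
the arithmetic land; the original post only said "~4GB"):

root at client> echo $(( (4238 << 20) % (1 << 32) ))
148897792
# 148897792 bytes == 142 MiB, i.e. sizeof(tar) mod 2^32
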
> only some of the file be copied over? Is there a limit on the size of files
> which can be transferred via NFS? There's certainly sufficient space on disk
it's certainly true that old enough NFS had 4GB problems (NFSv2's 32-bit
offsets capped files at 2GB; 64-bit sizes only arrived with v3), as did
user-space tools of similar vintage.
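
one easy way to narrow it down: push a known >4GB file across the same
mount with dd and see whether it survives (paths as in the setup quoted
above; 5000 MiB is arbitrary, just comfortably past 4GiB):

root at server> dd if=/dev/zero of=/data/backups/bigtest bs=1M count=5000
root at client> cp /mnt/server-data/backups/bigtest /private_data/
root at client> ls -l /private_data/bigtest

if bigtest comes through short, blame the NFS mount or cp; if all
5000 MiB arrive, go look at the tar job on the server instead.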