[Beowulf] first cluster
hahn at mcmaster.ca
Mon Jul 19 06:47:53 PDT 2010
> It's a very neat idea, but it has the disadvantage - unless I'm
>misunderstanding - that if the job fails, and leaves droppings in, say, /tmp
>on the cluster node, the user can't log in to diagnose things or clean up
my organization has ~4k users (~300-500 active at any time), and does not
attempt to prevent user access to compute nodes. it just doesn't
seem like a real, worth-solving problem. heck, we have more trouble
with users running jobs on _login_ nodes, rather than compute nodes.
(many of our systems came with a pam-slurm module which did this;
we remove it.)
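
for reference, that sort of pam-slurm module is usually wired into
sshd's PAM stack as a single account line; removing or commenting it
out is all "we remove it" amounts to. a sketch, assuming a typical
RHEL-style layout and the classic pam_slurm.so module name:

```
# /etc/pam.d/sshd -- the line a pam-slurm package typically adds.
# it denies ssh logins on nodes where the user has no running slurm job;
# commenting it out (as we do) re-opens compute nodes to everyone.
account    required     pam_slurm.so
```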
I don't think this is at all surprising. if a user groks clusters
at all, they'll know that cheating is not very effective (and not very
scalable) and stands a good chance of bringing trouble.
those who don't grok wind up running on the login nodes
(where we have fairly tight RLIMIT_AS and CPU...)
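
those login-node limits are commonly applied via pam_limits; a sketch
of the idea (values here are illustrative, not our actual settings):

```
# /etc/security/limits.conf -- hypothetical values, for illustration.
# cap address space (RLIMIT_AS, in KB) and cpu time (RLIMIT_CPU, in
# minutes) for ordinary users on the login nodes.
*    hard    as     4194304
*    hard    cpu    30
```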
regards, mark hahn.