[Beowulf] SSH without login in nodes
Robert G. Brown
rgb at phy.duke.edu
Sun May 6 08:40:04 PDT 2007
On Sun, 6 May 2007, Chris Samuel wrote:
> On Sun, 6 May 2007, Kilian CAVALOTTI wrote:
>
>> Not that ugly, actually. But what if users do a
>> ssh node -t "bash --noprofile"? ;)
>
> Then if any of the 500-odd tried we would spot them with some other scripts
> and chase them about it. We've not had to do that yet, though, fortunately!
Yes, this is the other solution. Do nothing fancy in script-land. Just
tell your user base "Do Not Log In To The Nodes Directly And Run Jobs".
Put up a TRIVIAL script that monitors for this and mails the admin if
someone does it anyway. Then keep a sucker rod handy to punish offenders
(with the direct support and authorization of the cluster's owner(s) to
chasten them).
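Something along these lines would do -- a minimal sketch only, where the node
names, the set of users allowed interactive logins, and the admin address are
all placeholders to be filled in for your site:

#!/usr/bin/env python
# Minimal sketch of a "who is logged in on my nodes" watchdog.
# NODES, ALLOWED and ADMIN are placeholders -- adjust for your site.

import smtplib
import subprocess
from email.mime.text import MIMEText

NODES = ["node%02d" % n for n in range(1, 17)]  # compute node hostnames
ALLOWED = {"root"}                              # users permitted to log in
ADMIN = "admin@example.org"                     # report recipient

def interactive_users(node):
    """Return the set of usernames with a login session on 'node'."""
    who = subprocess.run(["ssh", node, "who"], capture_output=True,
                         text=True, timeout=30, check=True)
    return {line.split()[0] for line in who.stdout.splitlines() if line.strip()}

def main():
    report = []
    for node in NODES:
        try:
            for user in sorted(interactive_users(node) - ALLOWED):
                report.append("%s has an interactive login on %s" % (user, node))
        except Exception as err:   # node down, ssh timeout, etc.
            report.append("could not check %s: %s" % (node, err))
    if report:
        msg = MIMEText("\n".join(report))
        msg["Subject"] = "cluster login watchdog report"
        msg["From"] = ADMIN
        msg["To"] = ADMIN
        with smtplib.SMTP("localhost") as mailer:
            mailer.send_message(msg)

if __name__ == "__main__":
    main()

Run it from cron on the head node every few minutes and that is about all the
"monitoring infrastructure" a small cluster ever needs.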
In most cases with a moderate-size user base, you'll have at most one or
two offenses; you'll whack the offenders upside the head while mouthing
phrases like "loss of privileges to use the cluster at all", word will get
out, and things will be just fine. If you organize the cluster on an
isolated network so that the nodes are only visible "through" the head
node, most users will never even bother to work out "how" they can log in
to nodes directly, especially if you tell them that You Will Be Watching
and They'd Better Not If They Know What Is Good For Them.
This MIGHT not work for a cluster with a very large, very dynamic user
base -- a Grid-like environment or a large public cluster with 1000
potential users. I would bet that one could make it work even then with
minimal effort, but there is no doubt that you'd be bopping folks more
often, as a large population is bound to have a wise-ass would-be hacker
in it. Find them, bop them, offer them a job.
rgb
>
>> To handle SSH-based MPI launchers, we've disabled user logins from our
>> frontend node to the compute nodes, but allowed them between compute
>> nodes. That way the scheduler takes care of dispatching the initial process
>> on the first node (no SSH involved), and SSH connections can then be used to
>> dispatch the MPI daemons on the other nodes from the initial one.
>
> Now that there's the Torque PAM module (pam_pbssimpleauth) that Garrick wrote,
> I'm tempted to set that up, but given our current system works I haven't
> dared break it. :-)
>
> cheers!
> Chris
>
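As I understand it (a sketch, not a tested recipe), pam_pbssimpleauth closes
exactly the hole being discussed: it makes sshd's account check on a compute
node succeed only for users who currently have a Torque job running there, so
node-to-node MPI launches of the sort Kilian describes keep working while
casual interactive logins bounce. The wiring is supposed to be roughly a
one-line addition to the PAM stack on each node -- module path, stack order,
and whether root is exempted vary by distribution, so leave yourself a console
or a pam_access escape hatch while you test:

    # /etc/pam.d/sshd on each compute node (excerpt)
    account    required     pam_pbssimpleauth.so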
--
Robert G. Brown http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567 Fax: 919-660-2525 email: rgb at phy.duke.edu