managing user accounts without NIS
Donald Becker
becker at scyld.com
Thu May 25 10:27:39 PDT 2000
On Thu, 25 May 2000, Robert G. Brown wrote:
> On Wed, 24 May 2000, Donald Becker wrote:
>
> > This execution time includes shipping an arbitrary binary over to the remote
> > node, rather than loading it from the local disk. For small processes where
> > the executable is not already in the buffer cache this option is faster than
> > loading it from the disk. I presume the 'rsh' tests are done with a
> > preloaded buffer cache?
>
> Effectively. After the first iteration it would be anyway -- I run 100
..
> Can you still push the binary into a local cache to avoid the network
> hit on the 2nd-Nth invocations (of a binary you execute repeatedly,
> since none of this matters for binaries that are executed a few times a
> day)? I'll have to grab bproc and play with it...sounds fun.
No, with bproc the controlling process explicitly decides whether it will
ship the binary and/or the libraries over to the remote machine. A memory
segment that is moved this way is not saved in the buffer cache for a
subsequent run; it is immediately flushed.
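To make that concrete, here is a rough sketch of what the controlling
process does. The call name and header follow the bproc library, but treat
the exact signatures here as assumptions rather than the documented API:

    /* Sketch: the controlling process explicitly forks a child onto a
     * remote node, shipping its memory image (including the binary text)
     * over the network.  Names/signatures assumed; check bproc.h. */
    #include <stdio.h>
    #include <sys/bproc.h>      /* assumed location of the bproc header */

    int main(void)
    {
        int node = 3;                    /* target cluster node */
        int pid = bproc_rfork(node);     /* explicit decision to ship */

        if (pid < 0) {
            perror("bproc_rfork");
            return 1;
        }
        if (pid == 0) {
            /* Now running on the remote node.  Shared libraries are
             * resolved against the node's local copies. */
            printf("hello from node %d\n", node);
            return 0;
        }
        return 0;                        /* parent continues locally */
    }

Nothing implicit happens here: the migration is an explicit call, and as
noted above the shipped segment is not retained for a later run.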
Generally the libraries reside on the cluster client nodes, and are not
moved with the application binary. An important detail is that the libraries
must be identical on all machines, not just similar.
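If you want to verify that, comparing a checksum of each library across
the nodes is enough. A minimal sketch in C (illustrative only; in practice
something like md5sum run over rsh does the same job):

    /* Crude whole-file checksum, for comparing e.g. /lib/libc.so.6
     * across nodes.  Illustrative only; use a real digest in practice. */
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        FILE *f;
        unsigned long sum = 0, len = 0;
        int c;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <library>\n", argv[0]);
            return 1;
        }
        f = fopen(argv[1], "rb");
        if (!f) {
            perror(argv[1]);
            return 1;
        }
        while ((c = getc(f)) != EOF) {
            sum = sum * 31 + (unsigned char)c;  /* simple rolling hash */
            len++;
        }
        fclose(f);
        printf("%s: %lu bytes, sum %08lx\n", argv[1], len, sum);
        return 0;
    }

Two nodes printing different sums for the same library path means the
application binary cannot safely migrate between them.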
> I'll change rshbench to use /usr/bin/uptime instead of /bin/date (which
> I chose for identical reasons -- small binary, cheap call).
Yes, 'uptime' takes a little more time to execute, but is much more
interesting.
> It sounds like you have designed bproc to operate in rootspace and avoid
> (most of?) the overhead of starting a shell at all. I think that I'll
The communication (and all policy) is done by a user-level daemon.
The VM area mechanism and process control are handled deep within the
kernel. It would be conceptually cleaner to have it all done within the
kernel, but opening files and sockets inside the kernel is complicated. A
user-level program can do a much better job of recovering from errors and
failures. There is little overhead in partitioning the functionality
between the two.
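In outline the userspace half looks something like this. The device path
and ioctl number are made up for illustration; the shape is what matters:
the daemon owns the socket and the policy, then hands the established
connection down to the kernel mechanism:

    /* Sketch of the user-level daemon half of the split.  The device
     * path and ioctl number are hypothetical; the real interface is
     * whatever bproc's kernel side exports. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>

    #define BPROC_IOC_DELIVER 0x4201    /* made-up request number */

    int main(void)
    {
        int kfd, s;

        kfd = open("/dev/bproc", O_RDWR);   /* hypothetical device */
        s = socket(AF_INET, SOCK_STREAM, 0);
        if (kfd < 0 || s < 0) {
            perror("setup");                /* easy to recover here */
            return 1;
        }
        /* ... bind/listen/accept and speak the migration protocol;
         * policy decisions (which node, what to ship) live here ... */
        if (ioctl(kfd, BPROC_IOC_DELIVER, s) < 0)
            perror("handing socket to kernel");
        close(s);
        close(kfd);
        return 0;
    }

A failed open() or socket() up here is an ordinary error return; the same
failure inside the kernel would be far messier to unwind.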
Donald Becker becker at scyld.com
Scyld Computing Corporation http://www.scyld.com
410 Severn Ave. Suite 210 Annapolis MD 21403