managing user accounts without NIS
Robert G. Brown
rgb at phy.duke.edu
Wed May 24 13:11:55 PDT 2000
On Wed, 24 May 2000, Robert G. Brown wrote:
(A bunch of good measurements and some very BAD arithmetic:-)
Let's try again (for the record). To do this "properly" I just spent a
large part of the day really packaging up a reusable remote shell
benchmark program. It is called rshbench, is GPL (obviously), and can
be obtained from
http://www.phy.duke.edu/brahma/ (look for links)
or directly from (current release via symlinks)
http://www.phy.duke.edu/brahma/rshbench.tgz
http://www.phy.duke.edu/brahma/rshbench.rpm
http://www.phy.duke.edu/brahma/rshbench.src.rpm
I'd advise getting the src.rpm or tarball just to have the full sources
to play with (as well as the README and so forth). It's not quite
self-documenting -- it assumes you know how to set up rsh and/or ssh for
password-free access (at least for testing purposes).
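For anyone who hasn't set that up before, here is a minimal sketch of the ssh side (commands assume a reasonably current OpenSSH; the key filename is my own choice, and the target hostname "lucifer" is just taken from the runs below):

```shell
# Hedged sketch: generate a passphrase-less keypair for benchmarking.
# (Passphrase-less keys are a security tradeoff; acceptable on a test
# cluster, think twice elsewhere.)
mkdir -p "$HOME/.ssh"
ssh-keygen -t rsa -N "" -f "$HOME/.ssh/rshbench_key"

# Then install the public key on the target host so that ssh/scp stop
# prompting for a password, e.g.:
#   cat ~/.ssh/rshbench_key.pub | ssh lucifer 'cat >> ~/.ssh/authorized_keys'
```

The rsh side is the usual .rhosts/hosts.equiv business and is site-specific, so I won't sketch it here.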
The package includes a little binary called "microtime" that is a
wrapper for gettimeofday. This probably won't allow microsecond
resolution of /bin/sh timings, but it goes way beyond what is possible
with date +%s. Feel free to reuse this in your own benchmark scripts if
you like. I also worked out the appropriate (reusable) awk incantations
for doing the timing delta arithmetic and have a little sed scriptlet
that extracts the CPU/MHz of the source and target.
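The delta arithmetic itself is simple once you have second/microsecond pairs. A sketch of the sort of awk incantation involved (microtime, the C gettimeofday wrapper, is not reproduced here; the two timestamps below are stand-ins for its output):

```shell
# Hedged sketch of the timing-delta arithmetic.  "microtime" prints a
# seconds and a microseconds field; here we fake two such readings and
# let awk compute the elapsed time and the per-loop average.
t0="959189610 123456"   # stand-in for: t0=$(./microtime)
t1="959189622 000000"   # stand-in for: t1=$(./microtime)
nloops=100

echo "$t0 $t1 $nloops" | awk '{
    delta = ($3 - $1) + ($4 - $2) / 1000000   # elapsed seconds
    printf "TOTAL %f AVERAGE %f\n", delta, delta / $5
}'
# prints: TOTAL 11.876544 AVERAGE 0.118765
```

Note the microseconds difference can be negative (as above); adding it as a signed fraction of a second handles the carry for free.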
It would be nice to have something that would add in base network speed
and latency, but that will probably have to wait until I merge this with
e.g. lmbench. Or until somebody else contributes it back. I don't know
of any way to pull it from /proc or elsewhere (although possibly dmesg
output from net initialization might do it).
Playing with this has already taught me a great deal. The most
important lesson is that if ssh is used with pam and with
/etc/ssh/ssh_known_hosts NFS mounted, the overhead goes through the roof
-- it seems to add roughly 1.4 seconds PER CALL. I can only conclude
that whatever additional authentication pam does, it is slow as molasses
(NFS shouldn't be that expensive because it caches). I haven't tried
a comparable rsh-with-pam run because the department systems person would
probably get very annoyed if I enabled rsh on any pair of hosts, at
least without telling him. Maybe tomorrow.
Results from the runs are included below. ganesh to brahma includes a
pam layer as well as ssh. All networks are switched 100BT.
In a nutshell:
ssh adds roughly .17 seconds per call over the (roughly .1 second) rsh
baseline.
scp adds the same .17 seconds per call PLUS roughly .36 seconds per
megabyte, using the default IDEA encryption.
using pam on top of ssh with the following /etc/pam.d/ssh file:
#%PAM-1.0
auth required /lib/security/pam_pwdb.so shadow
auth required /lib/security/pam_nologin.so
account required /lib/security/pam_pwdb.so
password required /lib/security/pam_cracklib.so
password required /lib/security/pam_pwdb.so shadow nullok use_authtok
session required /lib/security/pam_pwdb.so
adds roughly 1.3-1.4 seconds per call on top of the .17 seconds (where
I'm being sloppy because the CPUs tested are somewhat slower and it
apparently matters). That is assuming the additional slowdown is due to
pam, which will take me some time to verify for sure.
Hope somebody else finds this useful. I was startled to see how
expensive pam is, and whether or not one uses ssh or rsh in a beowulf
(which I'd still say is moot) one should definitely think twice about
layering (probably either one) with pam!
I'd actually appreciate it if somebody using rsh and pam (with similar
authentication modules) would run rshbench and post the results. If
anyone else wants to validate the results posted below in their own
environments and post them back that would be great as well. I'll
collect all the postings and add them to the rshbench README.
BTW, I'm not planning on distributing this myself for very long. I'm
hoping to talk to Larry McVoy about adding this (and some
other stuff I'm working on) to lmbench. I think it would be very useful
to have a full suite of automated tools to build a microbenchmark
profile of all sorts of system parameters (somewhat like what lmbench
attempts to provide) both for beowulf engineering and for LAN
engineering purposes.
rgb
Robert G. Brown http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567 Fax: 919-660-2525 email: rgb at phy.duke.edu
#========================================================================
# rshbench from eve to lucifer at Wed May 24 15:13:30 EDT 2000
# eve CPU is Celeron (Mendocino) at 400.913445 MHz
# lucifer CPU is Celeron (Mendocino) at 467.733014 MHz
# All averages over 100 loops
#========================================================================
# 100 empty loops...done
TOTAL time for 100 empty loops: 0.821775
# 100 rsh lucifer date > /dev/null's...done
AVERAGE time for rsh lucifer date > /dev/null : 0.115664
# 100 1K rcp's...done
AVERAGE time for rcp /tmp/1kfile lucifer:/tmp/1kfile > /dev/null : 0.126040
# 100 1M rcp's....done
AVERAGE time for rcp /tmp/1Mfile lucifer:/tmp/1Mfile > /dev/null : 0.223558
#========================================================================
# rshbench from eve to lucifer at Wed May 24 15:07:42 EDT 2000
# eve CPU is Celeron (Mendocino) at 400.913445 MHz
# lucifer CPU is Celeron (Mendocino) at 467.733014 MHz
# All averages over 100 loops
#========================================================================
# 100 empty loops...done
TOTAL time for 100 empty loops: 0.819606
# 100 ssh lucifer date > /dev/null's...done
AVERAGE time for ssh lucifer date > /dev/null : 0.280453
# 100 1K scp's...done
AVERAGE time for scp /tmp/1kfile lucifer:/tmp/1kfile > /dev/null : 0.304643
# 100 1M scp's....done
AVERAGE time for scp /tmp/1Mfile lucifer:/tmp/1Mfile > /dev/null : 0.748540
#========================================================================
# rshbench from lucifer to eve at Wed May 24 15:21:06 EDT 2000
# lucifer CPU is Celeron (Mendocino) at 467.733014 MHz
# eve CPU is Celeron (Mendocino) at 400.913445 MHz
# All averages over 100 loops
#========================================================================
# 100 empty loops...done
TOTAL time for 100 empty loops: 0.755916
# 100 rsh eve date > /dev/null's...done
AVERAGE time for rsh eve date > /dev/null : 0.123906
# 100 1K rcp's...done
AVERAGE time for rcp /tmp/1kfile eve:/tmp/1kfile > /dev/null : 0.133654
# 100 1M rcp's....done
AVERAGE time for rcp /tmp/1Mfile eve:/tmp/1Mfile > /dev/null : 0.244019
#========================================================================
# rshbench from lucifer to eve at Wed May 24 15:17:48 EDT 2000
# lucifer CPU is Celeron (Mendocino) at 467.733014 MHz
# eve CPU is Celeron (Mendocino) at 400.913445 MHz
# All averages over 100 loops
#========================================================================
# 100 empty loops...done
TOTAL time for 100 empty loops: 0.779981
# 100 ssh eve date > /dev/null's...done
AVERAGE time for ssh eve date > /dev/null : 0.295183
# 100 1K scp's...done
AVERAGE time for scp /tmp/1kfile eve:/tmp/1kfile > /dev/null : 0.312095
# 100 1M scp's....done
AVERAGE time for scp /tmp/1Mfile eve:/tmp/1Mfile > /dev/null : 0.794646
#========================================================================
# rshbench from ganesh to brahma at Wed May 24 15:29:42 EDT 2000
# ganesh CPU is Pentium II (Klamath) at 300.687645 MHz
# brahma CPU is Pentium II (Deschutes) at 397.333643 MHz
# All averages over 100 loops
#========================================================================
# 100 empty loops...done
TOTAL time for 100 empty loops: 1.058280
# 100 ssh brahma date > /dev/null's...done
AVERAGE time for ssh brahma date > /dev/null : 1.867719
# 100 1K scp's...done
AVERAGE time for scp /tmp/1kfile brahma:/tmp/1kfile > /dev/null : 1.779430
# 100 1M scp's....done
AVERAGE time for scp /tmp/1Mfile brahma:/tmp/1Mfile > /dev/null : 2.131818