Creating user accounts....

Robert G. Brown rgb at
Fri Feb 14 07:09:53 PST 2003

On Thu, 13 Feb 2003, Brian D. Ropers-Huilman wrote:

> My understanding is that NIS can be very network intensive and there are 
> limits to the sizes of the maps unless you go to NIS+. Anytime a permission 
> needs to be checked, such as when a user accesses a file, an NIS call is made 
> to see what UID the user has. This seems a little ridiculous to me, but that is 
> what I recall from discussions on this list in the past (you could search the 
> archives--this has been discussed at length before).

I think that you're right on the money.  HOWEVER, two observations:

  a) IIRC, one can overcome at least the network hit by installing
cache-only servers on each node.  That way, once the maps are pushed to
the nodes one time, thereafter they tend to be answered from the local
memory cache and hence are very fast and not horribly high overhead.
I've never actually done this, though, and could be mistaken in my
recollection of others who have.
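On a typical Linux node the name service cache daemon (nscd) provides
roughly this kind of local caching without a full slave server per node;
a sketch, assuming a 2003-era SysV-init distribution (TTL values are
arbitrary examples, not recommendations):

```shell
# Relevant /etc/nscd.conf directives (standard nscd.conf syntax;
# the 600-second TTLs below are illustrative, not tuned values):
#   enable-cache           passwd  yes
#   positive-time-to-live  passwd  600
#   enable-cache           group   yes
#   positive-time-to-live  group   600

# Start the cache daemon (SysV-style init script path assumed):
/etc/init.d/nscd start

# After the first lookup goes out over NIS, repeats are answered
# from the local cache instead of generating network traffic:
getent passwd someuser   # may hit the NIS server
getent passwd someuser   # answered from nscd's cache
```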

  b) Just because NIS is "very network intensive" on the basis you
outline above doesn't mean that it will be a significant bottleneck in
all parallel computations or cluster configurations.  It very much
depends on what you are doing.  For a coarse grained or embarrassingly
parallel computation that is CPU and/or memory bound (not heavily I/O
intensive at any phase but perhaps the beginning and end of a long
computation), NIS may be inefficient per lookup and yet utterly
negligible as overhead goes.
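One way to sanity-check this on a given cluster is simply to time the
lookups themselves (a rough measurement sketch; the iteration count is
arbitrary, and `root` stands in for any user in the passwd map):

```shell
# Time 1000 passwd-map lookups; divide the wall time by 1000 for the
# per-lookup cost, then compare that against your job's total runtime.
time sh -c 'i=0; while [ $i -lt 1000 ]; do
    getent passwd root > /dev/null
    i=$((i + 1))
done'
```

If the total comes to a fraction of a second against a job that runs for
hours, NIS lookup cost is in the noise.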

We actually use NIS for account authentication on our cluster because
our internal network is mostly flat (entirely switched, so traffic
remains isolated, but with no routers/gateways between workstations,
servers, and nodes).  Authentication is thus equally flat.  Since a typical usage
pattern on our nodes is to start a job and run it for hours to days with
little or no I/O, NIS load isn't an issue, and using something else
would be much more painful.

> Also, my understanding is that with plain NIS the largest entry in a map is 
> 1024 characters. If you have a lot of graduate students in a single group, you 
> can quickly reach this limit. NIS+ overcomes this but I don't know if it has a 
> Linux port or not.

See the NIS HOWTO, which addresses all this.  The solution to this
particular problem is to break up a long group into several "sub" groups
with the same gid.  The gid is all that matters; the first entry with a
given gid is the true "name" of the group.
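For instance, a hypothetical "grad" group too long for one entry might
be split across several entries sharing gid 500 (group names, gid, and
members invented for illustration):

```
grad:*:500:alice,bob,carol
grad1:*:500:dave,erin,frank
grad2:*:500:grace,heidi
```

Files owned by gid 500 show up as group "grad" (the first entry), and
members of any of the three entries get that group's access.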


> Finally, yes, you would do well to wrap your useradd command with a script 
> that added the user and then pushed out all relevant files (/etc/passwd, 
> /etc/shadow, /etc/group, etc.).
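A minimal sketch of such a wrapper, assuming rsync over ssh and a
hypothetical hardcoded node list (adapt the list and any useradd options
to your own setup):

```shell
#!/bin/sh
# adduser-cluster: add an account on the master, then push the
# authentication files to every node.  NODES is a hypothetical
# space-separated list of node hostnames.
NODES="node01 node02 node03"

user="$1"
[ -n "$user" ] || { echo "usage: $0 username" >&2; exit 1; }

# Create the account locally on the master.
useradd "$user" || exit 1

# Push the relevant files out to each node (rsync over ssh assumed,
# with root keys set up so the copy is non-interactive).
for n in $NODES; do
    rsync -a /etc/passwd /etc/shadow /etc/group "$n":/etc/
done
```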
> On Thu, 13 Feb 2003, Srihari Angaluri wrote:
> > Is there any serious performance/scalability issue to using NIS, as
> > opposed to copying the individual files to each and every node on the
> > cluster? Is this even a desirable option for large clusters, for
> > example? What if I need to add more accounts? I have to copy the files
> > all over again, right? Of course I can write scripts to automate the
> > whole process, but why not maintain a central user account database
> > using NIS? Can someone please elaborate on what the side effects are to
> > using NIS?
> > 
> > Srihari
> --  
> Brian D. Ropers-Huilman                        (225) 578-0461 (V)
> Systems Administrator                 AIX      (225) 578-6400 (F)
> Office of Computing Services       GNU Linux   brian at
> High Performance Computing            .^.
> Fred Frey Building, Rm. 201, E-1Q     /V\                          \o/
> Louisiana State University           (/ \)           --  __o   /    |
> Baton Rouge, LA 70803-1900           (   )          --- `\<,  /    `\\,
>                                      ^^-^^              O/ O /     O/ O

Robert G. Brown	             
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at
