[Beowulf] RedHat Satellite Server as a cluster management tool.

Robert G. Brown rgb at phy.duke.edu
Thu Oct 14 10:39:55 PDT 2004

On Wed, 13 Oct 2004, Michael T. Halligan wrote:

> Has anybody used (or tried to use) the RHN system as a HPC management 
> tool. I've implemented this
> successfully in a 100 host environment for a customer of mine, and am in 
> the process of
> re-architecting an infrastructure with about 150 nodes.. That's about as 
> far as I've gotten
> with it. Once I get past the cost, the poor documentation, and "OK" 
> support, I'm finding
> that it's actually a great (though slightly immature) piece of software 
> for the enterprise.  The ease of keeping
> an infrastructure in sync, and the lowered workload for sysadmins

<nuke warning="alert"> 

I can only say "why bother".  Everything it does can be done more easily,
faster, and better with PXE/kickstart for the base install followed by
yum for fine tuning the install, updates and maintenance (all totally
automagical).  Yum is in RHEL, is fully GPL, is well documented, has a
mailing list providing the active support of LOTS of users as well as
the developers/maintainers, and is free as in air.  Oh, and it works
EQUALLY well with Centos, SuSE, Fedora Core 2, and other RPM-based
distros, and is in wide use in clusters (and LANs) across the country.

With PXE/kickstart/yum, you just build and test a kickstart file for the
basic node install (necessary in any event), bootstrap the install over
the net via PXE, and then forget the node altogether.  yum automagically
handles updates, and can also manage things like distributed installs
and locking a node to a common specified set of packages.  It manages
all dependencies for you so that things work properly.
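As a sketch of what that looks like in practice, the kickstart file plus a
one-line cron job is essentially the entire per-node configuration.  The
server names, URLs, and paths below are illustrative assumptions, not an
actual site configuration:

```shell
# ks.cfg -- hypothetical kickstart file served over HTTP by the install server
install
url --url http://install.example.edu/rhel3/
rootpw --iscrypted $1$examplehash
autopart
reboot

%packages
@ Base
yum

%post
# point the node at the local campus repository instead of RHN
cat > /etc/yum.conf <<'EOF'
[main]
cachedir=/var/cache/yum

[campus]
name=Campus repository
baseurl=http://install.example.edu/yum/rhel3/
EOF
# nightly automatic update -- this is the "forget the node" part
echo '30 4 * * * root yum -y update' > /etc/cron.d/yum-update
```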

It takes me ten minutes to install ten nodes, mostly because I like to
watch the install start before moving on to handle the rare install that
is interrupted for some reason (e.g. a faulty network connection).  You
can do even more, much faster, by controlling the boot strictly from PXE,
so that you never need to interact with the node on the console at all.
How much better than that can you do?
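A minimal PXE boot entry that hands the node its kickstart file looks
roughly like the fragment below (the server address and paths are
assumptions for illustration):

```shell
# /tftpboot/pxelinux.cfg/default -- hypothetical TFTP-served boot config
default ks
prompt 0

label ks
    kernel vmlinuz
    # ks= points the installer at the kickstart file; no console interaction
    append initrd=initrd.img ks=http://install.example.edu/ks.cfg ksdevice=eth0
```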

Alternatively, there are things like warewulf and scyld where even
commercial solutions probably won't work out to be much more (if any
more) expensive.  Especially when you add in the cost of those two
"beefy boxes acting as RHN servers".  What a waste!  We use a single
repository to manage installs and updates for our entire campus (close
to 1000 systems just in clusters, plus that many more in LANs and on
personal desktops).  And the server isn't terribly beefy -- it is
actually a castoff desktop being pressed into extended service, although
we finally have plans to put a REAL server in pretty soon.

I mean, what kind of load does a cluster node generally PLACE on a
repository server after the original install?  Try "none" and you'd be
really close to the truth -- an average of a single package a week
updated is probably too high an estimate, and that consumes (let's see)
something like 1 network-second of capacity between server and node a
week with plain old 100BT.
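The back-of-the-envelope figure above checks out if you assume, say, one
~6 MB package per node per week over nominal 100 Mbit/s wire speed (the
6 MB figure is an assumption, not from the original post):

```shell
# rough estimate of per-node repository load after the initial install
awk 'BEGIN {
    mbytes = 6          # assumed average weekly update size per node
    mbits_per_s = 100   # 100BT nominal bandwidth
    printf "%.2f network-seconds per node per week\n", (mbytes * 8) / mbits_per_s
}'
```

So even a modest server can feed hundreds of nodes with essentially no
sustained load.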

There are solutions that are designed to be scalable and easy to
understand and maintain, and then there are solutions designed to be
top-down manageable with a nifty GUI (and sell a lot of totally unneeded
resources at the same time).  Guess which one RHN falls under.

  Flamingly yours (not at you, but at RHN)


> At 100 nodes, the pricing seems to be about $274/year per node including 
> licensing, entitlements, and the
> software cost of a RHN server (add another $5k-$7k for a pair of beefy 
> boxes to act as the
> RHN server.. though as far as I can tell, redhat's specs on the RHN 
> server are far exaggerated.. I
> could get by with $2500 worth of servers on that end for the 
> environments I've deployed on).  So, in the
> end, $28k/year for an enterprise of 100 servers, in one environment has 
> meant being able to shrink the
> next year staffing needs by 2 people, and in one by one person, it pays 
> for itself..
> We have a 512 node render farm project we're bidding on for a new 
> customer, and I'm wondering how those in the
> beowulf community who have used RHN satellite server perceive it. So far 
> we're considering LFS and Enfusion,
> which are both more HPC oriented, but I'm really enjoying RHN as a 
> management system.
> ----------------
> BitPusher, LLC
> http://www.bitpusher.com/
> 1.888.9PUSHER
> (415) 724.7998 - Mobile
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf

Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email: rgb at phy.duke.edu
