Installing Linux (without CD/floppies)

Ashley Pittman ashley at
Tue Feb 18 11:16:59 PST 2003

On Tue, 2003-02-18 at 17:36, Robert G. Brown wrote:
> > My reasoning was to get the node install time as *low* as possible and I
> > was assuming that kick-start wasted too many cycles/bandwidth on stuff
> I'm not certain about this.  A package is typically compressed (to be
> network efficient) as the network and disk write rates are likely to be
> the rate limiting steps, not the decompression.  The CPU required for
> decompression and installation %post configuration is "free" anyway.  A
> tgz image probably requires very similar amounts of network bandwidth
> and decompression CPU -- at most it saves some of the %post processing.

It's not an assumption I've ever tested, to be honest.  My time would
probably be better spent refining the list of packages that get
installed, and then wasted again in a few weeks' time when I realise
that I do in fact need a compiler on every node.  So I'm just going to
leave it as it is, and if I'm ever in a situation where I'm waiting for
a node re-install I'll use the time to drink more coffee.
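A quick back-of-envelope version of that test, for what it's worth.  Every
number below is an assumed, roughly 2003-era figure, not a measurement:

```shell
# Which step limits a package-based install: network, disk, or gunzip?
# All rates are assumptions for illustration only.
IMAGE_MB=700            # assumed size of the compressed install payload
NET_MBPS=100            # assumed Fast Ethernet link, megabits/s
DISK_MBS=30             # assumed sustained disk write rate, MB/s
CPU_MBS=50              # assumed gunzip decompression throughput, MB/s

net_s=$(( IMAGE_MB * 8 / NET_MBPS ))   # seconds spent on the wire
disk_s=$(( IMAGE_MB / DISK_MBS ))      # seconds writing to disk
cpu_s=$(( IMAGE_MB / CPU_MBS ))        # seconds decompressing

echo "network=${net_s}s disk=${disk_s}s decompress=${cpu_s}s"
```

With those assumed rates the wire dominates, which is the quoted point: the
CPU cost of decompression is effectively free next to the network transfer.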

> > that would just end up the same anyway. kick-start wouldn't allow you to
> > customise the image as much either.
> I'd have to disagree with this and would indeed roll the opposite way.
> kickstart installs already automagically probe for hardware and manage
> moderate differences between e.g. sound and video and network and disk
> configuration.  To accommodate these with diskless images, one has to
> build the diskless image for EACH supported hardware configuration
> separately.  This can be quite time consuming, because one WON'T
> generally have e.g. kudzu or some other probe/configuration tool to do
> most of the work for you, one has to do it by hand or by somehow running
> an "install" on a mounted template on an architypical node.
> One is always free to build as many kickstart images as you like or need
> for distinct configurations; customizing them is likely as simple as
> cp basicnode.ks customnode.ks
> emacs customnode.ks
>  (add or delete packages, change hard configuration parameters)
> emacs /etc/dhcpd.conf or whatever
>  (direct nodes to use customnode.ks on their standard install)
> and then boot the install.  Alternatively, for slightly different
> one-of-a-kind nodes (could we install the compiler group only on THIS
> node) that are standard plus a package or two, one can do a kickstart
> install to a basicnode.ks state, then yum install pkgname.
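The cloning recipe quoted above, as a concrete sketch.  The file names, the
toy kickstart contents, and the `@development-tools` group are all
illustrative, not taken from any real cluster config:

```shell
# Fake up a tiny basicnode.ks so the recipe is runnable end to end.
cat > basicnode.ks <<'EOF'
%packages
@base
%end
EOF

# Clone the base profile, then add a compiler group to the copy only --
# the "install the compiler group only on THIS node" case.
cp basicnode.ks customnode.ks
sed -i 's/@base/@base\n@development-tools/' customnode.ks

grep -c '^@' customnode.ks   # custom profile now lists two groups: prints 2
```

After that, pointing the one-off node at customnode.ks in dhcpd.conf (as in
the quoted steps) is the only remaining change.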

I'd forgotten about heterogeneous clusters, that does make things more
complicated.

You only need to do the hardware detection *once* though, and then cache
everything in a per-host configuration file/script.
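Something along these lines is what I have in mind.  The cache path and the
use of lspci as the probe are assumptions for the sketch, not what any
particular install tool actually does:

```shell
# Probe the hardware once per host and cache the result; later
# (re-)installs read the cache instead of probing again.
CACHE="/tmp/hwconfig.$(hostname -s)"   # illustrative per-host cache path

if [ ! -f "$CACHE" ]; then
    # First install only: do the expensive probe and record the result.
    lspci > "$CACHE" 2>/dev/null
    [ -s "$CACHE" ] || echo "probe-unavailable" > "$CACHE"
    echo "probed and cached hardware config in $CACHE"
else
    # Every subsequent install: skip the probe entirely.
    echo "reusing cached hardware config in $CACHE"
fi
```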

I would use "apt-get" rather than "yum" by choice but I don't want to go
anywhere near that argument :)

> > This would take some effort to setup the "install" script and possibly
> > wouldn't be worth it because it would only save a few minutes (in
> > parallel) of unattended time while the node installs itself.
> Yeah, that's the rub;-) Once things are bleeding edge efficient and
> scalable, it stops being worth it to screw around saving even
> significant fractions of the little time things take, especially
> unattended time.  Five minutes or ten minutes of network install time
> per node is pretty irrelevant, as long as I don't have to wait around
> for either one to complete and as long as both complete without incident
> on request.

It's only irrelevant sometimes; you say yourself that the COD project
is aiming for "minutes" to do an install.  It all depends on how often
you see yourself re-installing, which for most cluster people can
probably be classed as seldom.

Of course in the nfs-root world there is no such thing as "installing",
you just change the export/mount options (assuming you share the root fs
across machines).

There is also a distinction between diskless and nfs-root.  I chose to
use nfs-root on my home machines because it allows greater flexibility
in what software is running, but I still have swap, /tmp and /local
running off the local hard disk.  In the days before grub it was really
handy to get the kernel over the network too.
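For the curious, a node's /etc/fstab in that sort of setup might look
something like the following.  The server name, device names and mount
options are illustrative, not copied from my machines:

```
# shared, read-only NFS root; mutable bits live on the local disk
server:/export/nfsroot  /        nfs   ro,nolock   0 0
/dev/hda1               swap     swap  defaults    0 0
/dev/hda2               /tmp     ext3  defaults    0 0
/dev/hda3               /local   ext3  defaults    0 0
```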

> > However... Once you have got the node-install time low enough then you
> > have the possibility of requesting node configurations when you submit
> > your jobs, they could install with a different flavour of the os and
> different kernels depending on what requirements the job has.  I've not
> > heard of anybody doing this though so perhaps it isn't desirable but I
> > know I can think of uses for it.
> This is the goal of the "computing on demand" (COD) project at Duke.
> Justin Moore is working on The Ultimate Environment Loader.  With it,
> one will be able to select, in real time, to boot into the entire
> install image of your choice including OS, kernel, work environment.
> He's shooting for times of order minutes for the "boot/install" to
> complete.  With this nodes can be dynamically reconfigured to work on
> completely distinct projects with completely distinct operating systems
> and work environments and user account and security environments, giving
> one a whole new perspective on scheduling and resource sharing.  Need a
> freebsd node?  Boot it that way.  Need a linux node within your
> organizational domain?  Not a problem.  Plan to break the hell out of
> your node installation while doing research on e.g. operating systems,
> networks, kernels?  Fine, break away and reinstall the base image when
> you're done.  Even WinXX, in principle, loaded on demand and it just
> goes away when the demand ceases (in practice, of course, one has to
> manage all sorts of copyright issues that I don't want to even think
> about, or break the law:-).
> I think that this will be very, very cool and might even change the very
> way we think about compute resources from the desktop on down.

I agree, this is very cool and definitely the way forward, and not just
in clustering either.  LinuxBIOS will help a great deal too.
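A PXE boot menu is one plausible mechanism for the "boot it that way" step
described above.  This pxelinux.cfg fragment is purely illustrative and not
taken from the COD project; the labels, kernel names and paths are made up:

```
# /tftpboot/pxelinux.cfg/default -- illustrative image selection
DEFAULT linux-standard

LABEL linux-standard
  KERNEL vmlinuz-2.4.20
  APPEND root=/dev/nfs nfsroot=server:/export/nfsroot ip=dhcp

LABEL os-research
  KERNEL vmlinuz-experimental
  APPEND root=/dev/hda1
```

Rewriting (or regenerating) a node's per-MAC config file is then all it
takes to hand that node a completely different environment on next boot.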


More information about the Beowulf mailing list