HD cloning

Daniel Ridge newt at scyld.com
Tue Dec 5 09:36:55 PST 2000


Bruce,

On Wed, 6 Dec 2000, Bruce Janson wrote:

> Like you, installing makes me grumpy too, so I try not to do it
> more than once.  Ideally all of our compute servers would share
> the same (network) file system.  There are ways of doing this
> now (typically via NFS) but they tend to be hand-crafted and
> unpopular.
> In particular, I notice that the recent Scyld distribution
> assumes that files (libraries if I remember rightly) will be
> installed and available on the local computer.
> Why do people want to install locally?  (Scyld people in particular
> are encouraged to reply.)

While it is true that our (Scyld's) distribution places some files
on target nodes, the total volume is pretty tiny (a couple of tens of
megabytes for now, less in the future). These files, essentially
all shared libraries, are placed on the nodes just as a cache and
are not 'available' from most useful perspectives. They are 'available'
for a remote application to 'dlopen()' or for certain other dynamic
link operations.
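
To make that distinction concrete, here is a minimal C sketch (not
Scyld code; the library path and symbol name are just examples drawn
from the listing below) of the one sense in which a cached library is
'available': a process running on a node can bind to it at runtime.

----------------------------------------------------------------------------
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
	/* Bind at runtime to a shared library cached on the node. */
	void *handle = dlopen("/usr/lib/libmpi.so.1", RTLD_NOW);
	if (handle == NULL) {
		fprintf(stderr, "dlopen: %s\n", dlerror());
		return 1;
	}

	/* Resolve a symbol from the freshly loaded library. */
	void *sym = dlsym(handle, "MPI_Init");
	printf("MPI_Init is at %p\n", sym);

	dlclose(handle);
	return 0;
}
----------------------------------------------------------------------------

(compile with 'gcc -o demo demo.c -ldl')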

In addition to shared libraries, we also place a number of entries
for '/dev' on the nodes.

I have a couple of instances of VMware running on my laptop now, and
excluding '/dev' and '/proc', the nodes have the following:

----------------------------------------------------------------------------
/etc/mtab			/etc/localtime		/etc/ld.so.cache
/etc/nsswitch.conf		/tmp			/scratch
/usr/lib/libtk8.0.so		/usr/lib/libtixsam4.1.8.0.so
/usr/lib/libcrypto.so.0.9.5	/usr/lib/libstdc++.so.2.7.2.8
/usr/lib/libg++.so.2.7.2.8	/usr/lib/libstyle.so.1.0.3
/usr/lib/libsp.so.1.0.3		/usr/lib/libgtk.so.1.0.6
/usr/lib/libgtk-1.2.so.0.5.1	/usr/lib/libobgtk.so.1.2.1
/usr/lib/libgnomeui.so.32.10.3	/usr/lib/libstdc++-2-libc6.1-1-2.9.0.so
/usr/lib/libmpif.so		/usr/lib/libmpi.so
/usr/lib/libmpi.so.1		/usr/lib/libmpif.so.1
/usr/lib/libstdc++-libc6.1-1.so.2
/usr/lib/libgnomeui.so.32	/usr/lib/libobgtk.so.1
/usr/lib/libgtk-1.2.so.0	/usr/lib/libgtk.so.1	/usr/lib/libsp.so.1
/usr/lib/libstyle.so.1		/usr/lib/libg++.so.2.7.2
/usr/lib/libstdc++.so.2.7.2	/usr/lib/libcrypto.so.0
/lib/libm-2.1.3.so		/lib/libdb-2.1.3.so	/lib/libc-2.1.3.so
/lib/libc.so.6			/lib/libdb.so.3		/lib/libm.so.6
/lib/libnss_bproc.so.2
---------------------------------------------------------------------------

Most of the above are shared libraries. I try to keep the set small
since I run my VMware nodes ramdisk-rooted. In my case, I told the
beowulf setup script to transfer only libraries larger than 500K and
let the bproc system migrate the other, smaller libraries as I need
them.
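
For illustration, here is a hypothetical C sketch of that
size-threshold policy (this is not the actual beowulf setup script;
the 500K threshold and the '/usr/lib' path are just the values from
my setup above): pre-stage the big libraries and leave the small ones
for bproc to migrate on demand.

----------------------------------------------------------------------------
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

#define THRESHOLD	(500 * 1024)	/* pre-stage anything bigger */

int main(void)
{
	const char *dir = "/usr/lib";
	DIR *d = opendir(dir);
	struct dirent *e;

	if (d == NULL)
		return 1;
	while ((e = readdir(d)) != NULL) {
		char path[1024];
		struct stat st;

		/* Only consider shared libraries. */
		if (strstr(e->d_name, ".so") == NULL)
			continue;
		snprintf(path, sizeof(path), "%s/%s", dir, e->d_name);
		if (stat(path, &st) == 0 && st.st_size > THRESHOLD)
			printf("pre-stage: %s (%ld bytes)\n",
			       path, (long)st.st_size);
		/* anything smaller is left for bproc to fetch lazily */
	}
	closedir(d);
	return 0;
}
----------------------------------------------------------------------------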

In the default case, we transfer about 35M worth of shared libraries.

If you were setting up a specialized Scyld cluster for large runs
of a particular application, you might be able to ditch the libraries
entirely and get the memory footprint of our system down to a couple
of megabytes.

Our concept is that the best way to achieve a uniform system image
is to reduce the problem to a degenerate case: eliminate most of the
system image.

Regards,
	Dan Ridge
	Scyld Computing Corporation
