Questions and Sanity Check
Daniel Ridge
newt at scyld.com
Thu Mar 1 09:54:28 PST 2001
On Thu, 1 Mar 2001, Dan Yocum wrote:
> Daniel Ridge wrote:
> Since I haven't built/booted a Scyld cluster yet, and have only seen Don
> talk about it at Fermi, please excuse my potentially naive comments.
>
>
> > For people who are spending a lot of time booting their Scyld slave nodes
> > -- I would suggest trimming the library list.
> >
> > This is the list of shared libraries which the nodes cache for improved
> > runtime migration performance. These libraries are transferred over to
> > the nodes at node boot time.
>
>
> Hm. Wouldn't it be better (i.e., more efficient) to cache these libs on
> a small, dedicated partition on the worker node (provided you have a
> disk available, of course) and simply check that they're up-to-date each
> time you boot and only update them when they change, say, via rsync?
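The update-only-when-changed idea above can be sketched in a few lines. This is an illustrative sketch only, not how Scyld actually transfers libraries: it compares checksums of the master's staging copy against the node's cached copy and copies only the files that differ (the directory names are hypothetical).

```python
import hashlib
import os
import shutil

def _digest(path):
    """Checksum of a file's contents, used to detect stale cached copies."""
    with open(path, 'rb') as f:
        return hashlib.sha256(f.read()).hexdigest()

def sync_libs(master_dir, node_dir):
    """Copy shared libraries from the master's staging area to the node's
    cache partition, skipping any library whose contents are unchanged.
    Returns the list of libraries actually copied."""
    os.makedirs(node_dir, exist_ok=True)
    copied = []
    for name in sorted(os.listdir(master_dir)):
        src = os.path.join(master_dir, name)
        dst = os.path.join(node_dir, name)
        if os.path.exists(dst) and _digest(src) == _digest(dst):
            continue  # cached copy is already up to date
        shutil.copy2(src, dst)
        copied.append(name)
    return copied
```

In practice rsync does the same comparison (plus delta transfer) for you; the point of the sketch is just that a second sync after an unchanged boot copies nothing.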
Possibly. We're working on making available versions of our software that
simultaneously host multiple PID spaces from different frontends. In this
situation, you could wind up needing one magic partition per frontend -- as
each master could have its own set of shared libraries.
Also, I think Amdahl's law kicks in and tells us that the potential
speedup is small in most cases (with respect to my trimming comment
above) and that there might be other areas that are worth more attention
in lowering boot times. On my VMware slave nodes, it costs me 0.5 seconds
to transfer my libraries, but it still takes the better part of a minute to
get the damn BIOS out of the way.
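Plugging the numbers from the message into Amdahl's law makes the point concrete: if only 0.5 s of a roughly 60 s boot goes to library transfer, even eliminating the transfer entirely barely moves the total.

```python
def amdahl_speedup(fraction, factor):
    """Overall speedup when only `fraction` of the total time is
    accelerated by `factor` (Amdahl's law)."""
    return 1.0 / ((1.0 - fraction) + fraction / factor)

# Numbers from the message: ~0.5 s of library transfer out of
# ~60 s total boot time (mostly BIOS).
transfer, total = 0.5, 60.0

# Best case: the transfer cost goes to zero entirely.
best_case = amdahl_speedup(transfer / total, float('inf'))
print(round(best_case, 3))  # ~1.008, i.e. under a 1% faster boot
```

So even a perfect library cache buys less than 1% on total boot time, which is why the BIOS delay dominates.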
Regards,
Dan Ridge
Scyld Computing Corporation
More information about the Beowulf mailing list