[Beowulf] While the knives are out... Wulf Keepers
Robert G. Brown
rgb at phy.duke.edu
Mon Aug 21 12:40:19 PDT 2006
On Mon, 21 Aug 2006, Tony Travis wrote:
> Not 'everyone' like me is as stupid or naive as you imply. I have the support
I don't think he was implying that, really -- not worth flaming over,
for sure. Remember that cluster computing as we currently use it was as
often as not INVENTED by people just like you -- academic researchers
building clusters to support their own scientific research. At the very
least they have always been peer participants along with computer
science types or fully engaged cluster admin types, and I've never seen
any evidence of anything but respect (or much evidence of serious
differentiation) between these generic groups. They bring slightly
different skills to the table, but it is a big table ;-)
Remember, I'm just such a type as well. So are a whole lot of primary
contributors on this list. Building and USING a cluster to perform
actual work provides one with all sorts of real-world experience that
goes into building your next one, or into helping others to do so. Many
people -- e.g. Greg Lindahl or Joe L. or Jim L. -- seem to move between
these worlds and more, and use clusters to do research, engineer and
manage clusters, and do corporate stuff with or for clusters.
Rather what I think he's saying is that in a large cluster environment
where there are many and diverse user groups sharing an extended
resource, careless management can cost productivity -- which is
absolutely true. Examples of careless management certainly include
thoughtlessly updating some mission-critical library to solve a problem
for group A at the expense of breaking applications for groups B and C,
but this can actually be done just as easily by a professional
administrator as by a research group. The only difference is that a
"cluster administrator" is usually professionally charged with not being
so careless, and with having the overall view (and the time) to properly
test things and so on. A good cluster administrator takes this
responsibility seriously and may well seek to remain in firm control of
updates and so on in order to accomplish this.
As you observe, ultimately this comes down to good communications and
core competence among ALL people with root-level access for ANY LAN
operation (not just cluster computing -- you can do the exact same thing
in any old LAN). There are many ways to enforce this:

  - fascist topdown management by a competent central IT group that
    permits no direct user management of the cluster at all;

  - completely permissive management, where each group talks over any
    changes likely to affect others but retains root-level privileges on
    at least the machines that they "own" in a collective cluster (yes,
    this can work, and work well, and is in fact workING in certain
    environments right now);

  - something like COD, whereby any selected subcluster can be booted in
    real time into a user's own individually developed "cluster node
    image" via e.g. DHCP, so that while you're using the nodes you
    TOTALLY own them but cannot screw up access to those same nodes when
    OTHER people boot them into THEIR own image (a rough sketch of the
    mechanics is below);

  - and lots more besides, including topdown not-quite-so-conservative
    management (which is probably the norm).
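To make the COD-ish option concrete: the usual trick is that DHCP hands
every node the same PXE bootloader, and the per-node (or per-group)
choice of image lives in the bootloader's config on the TFTP server.
This is a rough sketch only -- I'm assuming ISC dhcpd and pxelinux here,
and the group name, addresses, and paths are all invented:

    # dhcpd.conf fragment (ISC dhcpd): every node chains to the same PXE
    # bootloader; WHICH image it then boots is decided per node below.
    subnet 10.0.1.0 netmask 255.255.255.0 {
        next-server 10.0.0.1;        # TFTP server holding the boot images
        filename "pxelinux.0";
        host node01 { hardware ethernet 00:16:3e:00:00:01; fixed-address 10.0.1.1; }
    }

    # On the TFTP server, each node's pxelinux config (named after the hex
    # of its IP address) is a symlink to whatever its current "owner" wants;
    # flip the symlink, reboot the node, and it comes up in the new image.
    #   /tftpboot/pxelinux.cfg/0A000101 -> groups/astro.cfg
    #
    # groups/astro.cfg -- that group's kernel, initrd and NFS root
    default astro
    label astro
        kernel vmlinuz-astro
        append initrd=initrd-astro.img root=/dev/nfs nfsroot=10.0.0.1:/images/astro ro

Whoever controls those symlinks (or the database that generates them,
which is more or less what COD automates) controls which image a node
comes up in, without anyone ever touching anyone else's image.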
At a guess, Really Big Clusters -- ones big enough to have a full-time
administrator or even an administrative group -- are going to strongly
favor topdown fascist administration, as there are clear lines of
responsibility and a high "cost" of downtime. For these to be
successful there have to be equally firm, open lines of communication, so
that researchers' work is (safely and competently) enabled regardless of
the administration skills of the members of any given group. Larger
shared corporate clusters are also likely to fall into this category,
although I'm sure there are also many exceptions at the workgroup level.
Small research-group-owned clusters are as likely as not to be locally
owned and operated even today. In between you're bound to see almost
anything.
> of an excellent IT department and an electronics workshop who talk to me and
> understand very well what I want to do with the Beowulf. We have about 400
> user accounts, which are registered and managed by IT centrally. I just
> enable NIS. The IT department also manage the central filers where precious
> data files are stored. I manage 3.2 TB of local RAID on the Beowulf. In my
> opinion this type of cooperation is a lot more effective than strict job
> demarcation...
In some environments, absolutely. However, remember that YMMV is a good
rule for ALL aspects of cluster design and management. If I were
running one of the really really big clusters at (say) Los Alamos or as
a part of Tier 1 ATLAS, I would quit my job altogether before letting
potentially hundreds of grid or cluster users or groups actually install
stuff on the nodes, especially if I were going to be held responsible
for downtime and loss of productivity when something broke because of
it. At this scale I'd want a formal request/testing process and a small
subcluster to do the testing on without question.
OTOH if I were in a shared cluster group with (say) Greg Lindahl and
Mark Hahn, or two or three of the cluster folks I know on campus here
(e.g. Justin, Josh, Bill) who were doing research on some particular
thing or the other, I wouldn't hesitate to rely on good communications
between us and would share management privileges with them. Those guys
are clearly competent and wouldn't casually break something, and if they
did by some accident they'd do it in a context where a) they'd fix it
again instead of making me do it; and b) they'd pay back any time lost
to the general pool if it were worth noticing in the first place. That
pool of systems might still consist of 100's to 1000's of CPUs, mind
you, and qualify as a "large" cluster -- the YMMV thing depends on MANY
variables such as the personalities and competence of the actual humans
involved, the kind of cluster, the kind of WORK done on the cluster
(vanilla computing clusters are nearly invulnerable to updates from the
normal distro stream, actually), how good communications are between the
participants (do they "know each other from a distance" or drink beer
together on Friday nights from time to time? :-), and much more.
>
>> For example, on friday, one of our applications analysts wanted to upgrade
>> a piece of software on one of the clusters. He didn't know what it would
>> affect (libraries, other installed software, users already using that
>> software). After a bit of investigation it turned out that the PI in
>> question could use the version already installed (which is about 6 months
>> old).
>
> Seems to me that it would be straight-forward to know this if you use a
> package management system like apt or rpm, which keeps track of what's
> installed and what the dependencies are. However, I also think that it's
> quite right that you should know more about this than him. In an ideal world,
> you should both make the decision about what to do on a rational basis. I
> doubt that he asked you to do it for no reason at all.
>
>> I guess that I'm rather "old school" but upgrades have to be for a reason
>> other than there's a new version. Maybe they are needed for features, or
>> security, or stability. But IMO, they are seldom needed because they are
>> new.
Even here I think YMMV is a safer thing to say. I generally think it is
a good thing to leave a LAN or cluster hooked into a distro's yum/apt
update stream if at all possible (and as the default behavior). Most
updates either fix a serious problem (like a real bug or open security
hole) or add features to an irrelevant GUI-level tool. You can choose
to be conservative or aggressive -- Centos/RHEL or FCX, for example,
even within this schema. Yum permits you to easily lock down any
particular libraries that you know are mission critical so that you can
test updates before releasing them; one can thus be BOTH conservative
where it matters and liberal where it doesn't (in the sense that it
won't affect cluster operations much if some update turns out to be
buggy).
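Concretely, the lock-down end of that is a one-liner in yum's config.
This is just a sketch -- the package globs below are examples of the
sort of thing one might protect, not a recommendation:

    # /etc/yum.conf (the relevant bit of [main]) -- keep the nightly
    # "yum update" away from the bits the cluster codes are actually
    # built against; push those by hand after testing on a scratch node.
    [main]
    exclude=kernel* glibc* openmpi* lam*

Everything else then rides the normal update stream. (If your yum is
new enough to have the versionlock plugin you can pin exact versions
rather than excluding whole globs, but exclude lines cover the common
case.)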
I personally do not advocate the fully conservative approach in most
cases, however. I've seen WAY too many problems arise from
over-conservative management of updates and upgrades -- that's how we
end up with operations that still use RH 7.3 as their base distro, which
would be funny if it weren't so very, very sad. Yet there is ALSO no
doubt that running the latest bleeding edge rawhide version of whatever
at all times brings with it a raft of problems. Somewhere in between
there is a happy medium -- one that keeps the software steadily
advancing, so that users can take advantage of security fixes and
library, compiler, application, and kernel/driver advances in a timely
way (at the expense, yes, of forcing people to periodically port their
code to e.g. a new libc or some other library instead of "freezing"
things back in the 90's somewhere, or worse) while still not changing
things so aggressively and needlessly that things are always partly
broken somewhere.
Most sites, I think, do this sort of balancing of costs and benefits
naturally and effortlessly, shaped by THEIR particular mix of people,
tasks, systems, and administrators. It's not that big a deal, as long as
people don't get too convinced that their way is the only way to do
things "right". This is especially true of REAL Administrators (big-A,
to connect to a different thread), who can all too easily do the
pointy-haired boss thing and dictate a fixed policy that is horribly
destructive in the long run.
> Most of the problems I've come across like this arise from a lack of
> communication. I believe it's quite important for you to know why he wanted
> to do the upgrade, and for you to inform him about any problems or conflicts
> of interest that would result from the upgrade. Presumably, that is exactly
> what you did. My only complaint here is the impression you give that
> scientists like me want to upgrade software just for the sake of doing it.
> Please ask yourself why did the upstream maintainers release a new version?
> Was it just for the sake of upgrading it?
>
> I keep our software up-to-date because I want to ensure that all known bugs
> fixes and security upgrades are applied. I don't do it just because they are
> new. I rely on the package repository maintainers to decide when software
> should be upgraded, but I also 'pin' critical packages that I know are
> required to be held at a particular revision locally for some reason. I do
> advocate upgrading unless there is a reason *not* to do it. You seem to
> recommend the opposite of not upgrading unless there *is* a reason to do it.
> I wonder which strategy results in less work?
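For concreteness, the "pinning" Tony describes is apt's preferences
mechanism -- the moral equivalent of yum's exclude lines. A sketch only
(the package name and version are made up): a stanza like this in
/etc/apt/preferences holds libc6 at the 2.3 series no matter what the
repository offers, while everything else upgrades normally:

    Package: libc6
    Pin: version 2.3.*
    Pin-Priority: 1001

or, more crudely, one can freeze the installed version outright with

    echo "libc6 hold" | dpkg --set-selections

and the rest of the update stream keeps flowing either way.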
Exactly. This is my style as well, but the world has many different
kinds of environments in it and one style does NOT fit all. To give one
concrete example, banks and medical practices these days have their IT
regulated out the wazoo. If you are running a software environment,
cluster or not, for a bank you CANNOT change anything at all anywhere
without an EXTENSIVE testing/validation process -- I mean EXPENSIVE
testing/validation process -- designed to absolutely guarantee that the
change doesn't enable some horrible outcome with the bank's FDIC insured
deposits that would end up costing the Federal Government mucho dinero.
Banks do Not Run Fedora Core Whatever and never will. Medical practices
are subject to HIPAA and have or will soon have federal regulations
regarding best IT practice to comply with, once anybody knows just what
best IT practice is (at the moment it appears to be "use due diligence"
without a lot of concrete advice or requirements). They may well run
Fedora Core or equivalent, but may need a relatively fascist process of
testing and approving updates for all of that, just as they will likely
NOT run Windows 9x or WinXX without due-diligence antiviral/antispyware
installed and all that.
Some places MUST be topdown fascist. Some places are just one human and
a cluster (where fascist means nothing). Some places are complex shared
environments with lots of humans that just have to get along and may
need a relatively democratic way of managing that. Still other places
NEED bleeding edge stuff -- they are always getting new bleeding edge
hardware and want the new kernels that support it weeks before those
kernels even EXIST, and may even participate in the development of that
support out of sheer need and desperation.
So I think in summary that you are generally "right" in that you are
describing a very common, reasonable, mainstream cluster management
style -- one that I myself prefer. However, it is important to realize
that there are many "right" styles and that the best style for a given
environment isn't even a static thing but can easily change in the
twinkling of an eye as many of us experienced firsthand when the Opteron
first appeared without "proper" support in linux -- if you wanted it,
you had to ride the raw buggy wave of X86_64 development (and reap the
benefits at the cost of many hassles) or do without. I'm STILL seeing
that with AMD64's -- I have boxes at home that will ONLY install with
FC5, where trying to do FC4 or Centos results in mid-install lockup,
period. If I built a cheap cluster using those AMD64's, I'd pretty much
have to use FC5 or equivalent, and if I did that I'd HAVE to ride the
update wave pretty aggressively as the odd FC's tend to be a bit flakier
than the evens for a variety of reasons.
The main thing is to not be closed-minded in ANY direction when
designing, managing, or using a cluster. There are lots of ways to make
things work; you just have to work at finding the one that works best
(most cost-efficiently, etc.) for you and your dynamic environment. This
list has always been and remains a lovely resource for discovering and
discussing those many alternatives.
rgb
>
> Best wishes,
>
> Tony.
>
--
Robert G. Brown http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567 Fax: 919-660-2525 email:rgb at phy.duke.edu