[Beowulf] Re: "hobbyists"es

Robert G. Brown rgb at phy.duke.edu
Sat Jun 21 14:09:37 PDT 2008


On Fri, 20 Jun 2008, Perry E. Metzger wrote:

>
> "Robert G. Brown" <rgb at phy.duke.edu> writes:
>> On Fri, 20 Jun 2008, Perry E. Metzger wrote:
>>> That limits the number of attempts that may be made against your
>>> particular machine. At the same time that they're attacking your
>>> machine, that one instance is attacking a vast number of other
>>> randomly selected boxes. There are also a vast number of the things
>>> running out there, so in the long run, they succeed quite a bit of the
>>> time.
>>
>> Yes, but only rarely, on a site that is actually registered with
>> nameservice,
>
> I don't understand what you mean by that...

A significant fraction of the sites that attack do not resolve as
hostnames.  For example:

rgb at lucifer|B:838#host 61.144.122.38
;; connection timed out; no servers could be reached

If I work very hard (WAY harder than it is worth -- every minute one
spends on this is "cost" in the security game) I can run traceroute,
whois, and so on and determine that this is PROBABLY an "unused" address
on a block of addresses in Beijing (61.144.122.109 resolves to
qinghuishiye.com in Beijing).  Perhaps it is an address belonging to
this company; perhaps it is an illegal tap, an address hand-set to
route by some enterprising young Chinese person.  Who can say?
Who really cares?
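
(For the record, the whole "hunt" is about two commands plus some
squinting -- a sketch only, since whois output fields vary from
registry to registry:

    whois 61.144.122.38 | grep -i -E 'country|netname|descr'
    traceroute 61.144.122.38   # eyeball the last few resolvable hops

and even that is more effort than it is usually worth.)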

>> From the evidence, they almost never succeed in the US,
>
> A few days ago I informed an ISP in Florida that one of their servers
> was running an ssh brute force agent, and I find that sort of thing
> often enough that I don't think you're correct.

I might be wrong, sure.  Anecdotal evidence, and truthfully it isn't
worth my time to SERIOUSLY analyze logs from enough machines for enough
time to come up with an authoritative or even statistically valid
answer.  On the one host I can easily check up on at this moment without
wasting still more time, making a single pass on the current
/var/log/secure, four out of six attacker addresses have no reverse
mapping (that is, either "unregistered" altogether or at the end of some
ISP's DHCP space where they don't bother establishing a reverse lookup).
One of these, as I establish above, is PROBABLY in Beijing judging by
the routing.  Two have addresses with a reverse lookup.  Of the
addresses with reverse maps established and a discoverable name and
whois record, one is in Ecuador and the other in Korea.  I would bet
five bucks, even money, that all six of today's attackers are outside
the US.
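
If anyone wants to make the same single pass over their own logs,
something like this quick sketch does it (log formats vary a bit
across distributions and sshd versions, so the grep pattern may need
adjusting):

    grep 'Failed password' /var/log/secure \
      | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' \
      | sort -u \
      | while read ip; do
          if host "$ip" >/dev/null 2>&1; then
            echo "$ip   resolves"
          else
            echo "$ip   no reverse map"
          fi
        done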

Still, that doesn't mean that they never succeed in the US -- that's
poorly put.  It means that -- assuming that the casual sample above is
reasonably extrapolable -- the odds are perhaps 10 to 1 (or 3 to 1, or
8 to 1, or 15 to 1 -- small sample, big expected variance) that any
given attacking host is outside the US.  That may still leave a large
NUMBER of attacking hosts inside the US, of course.  But given the
probable preponderance of computers inside the US relative to outside
-- especially relative to the comparatively small, comparatively poor
countries typically represented in my anecdotal attacker pool --
either the per-host probability of a successful compromise in the US
is in fact dramatically reduced (I expect that there may well be as
many computers on the Internet in the US and Europe as there are in
all the more prevalent attacker countries combined) or some assumption
made above is dramatically wrong.

For example (to do the Bayesian analysis in more detail): if we assume
that there are 3x as many internet-connected, reverse-lookup-resolvable
hosts in the US and Western Europe as there are in the complementary
set of countries (whether or not their addresses can be resolved), but
90% of all attacks come from the latter set, then if I'm doing my
arithmetic correctly -- always subject to doubt;-) -- it is 27x more
likely for a non-US-WEurope host to be attacking than it is for a
US-WEurope host.
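
Spelled out: with hosts split 3:1 in favor of US-WEurope and attacks
split 1:9 against it, the per-host attack rates compare as

    (0.90 / 1) : (0.10 / 3)  =  0.90 : 0.033...  =  27 : 1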

That doesn't mean that the e.g. Chinese or Korean hosts are compromised,
of course.  They could easily be primary -- people working this as a
job.  $5 for every host you crack in the US and successfully insert the
following spambots selling the following products.

The point is that, whatever the attackers' skills and determination,
every ISP in the US has to sign a whole set of AUAs (acceptable use
agreements) to join the Internet at all, and every one of those, from
the toplevel backbone sites on down, has explicit rules and sanctions
associated with spam and bots originating on hosts in their subnets.
Those rules, plus the accountability that whois provides, make it
"mandatory" to try to police your hosts, with the threat of removal
(effectively death, to an ISP) if you can't do an acceptable job.  And
people DO try to police their hosts, again within reason.  Sure, there
are networks that are bleeding wounds, but they fairly quickly end up
on mail blacklists, customers complain, the ISP is shocked into taking
more professional action, the problem clears up, and things work
again.
Problems in the US, with a modest amount of responsibility and
accountability, TEND to have a reasonably short lifetime, because
people like you and the many other sysadmins I know WILL take the
three minutes it takes to contact the whois person of the responsible
entity IF we can find that entity in no more than two minutes of
effort -- which is usually the case for reverse-resolvable names, and
is a complete waste of time to attempt if not (even if occasionally
you can run one down).

My own touchstone process isn't /var/log/secure, actually.  It is SPAM.
I have the "honor" of having had a spambot-grazable email address fairly
prominently represented on this very list for eleven or twelve years
now, and while I'm probably not a world record holder I get a shitcan
full of spam every day.  The tiny fraction that makes it past
spamassassin is still tens to low hundreds of messages a day.  My
highwater mark was something like 10 MB of spam a day (with days of 20
MB, just to little old me).  Since then, I've been rebuilding my
personal webpages with my email address obfuscated and Duke has been
prescanning and eliminating the worst of the blacklist spam before it
gets to SA, and I'm down to only a MB a day or thereabouts, plus
leakage.  To see where the viral spambots currently live, I just have to
toggle on headers for a second before killing a spam message, or filter
out my spam garbage can from the last month.

Here I could generate impeccable statistics, as I am a virtual
coal-mine canary (although at the moment my sample is unfortunately
biased, as I'm no longer getting 80% of the spam even to where I can
filter it), but it is still in the why-bother category.  22% of what
gets through probably has forged addresses.  31% is unknown (not
resolvable).  10% is from Russia (and registered).  7% is from Brazil.
3% from Argentina.  3% from India.  Tiny fractions come from China,
Korea, Taiwan (Ha! -- look at those unresolvable and forged
addresses...;-).  An amazing 12% comes from Italy, of all places, and
a solid 6% comes from Turkey.
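
If anyone wants to play the same game with their own spam can, a crude
first cut is below.  It assumes the spam lands in a single mbox file
(the path here is made up), and it counts every relay hop rather than
just the origin, but it's plenty good enough for ballpark numbers once
you run host on the survivors and eyeball the .it, .br, .kr and
friends:

    grep -h '^Received: from' ~/mail/spam-can \
      | grep -Eo '\[([0-9]{1,3}\.){3}[0-9]{1,3}\]' \
      | tr -d '[]' \
      | sort | uniq -c | sort -rn | head -20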

Just add up the fractions from THESE registered foreign addresses
(there are more) and we've got nearly all of the spam that COMES from a
registered address at all coming from a registered FOREIGN address.
This is no doubt a biased sample at this point because I don't know the
prefiltering parameters -- maybe they're removing all the US addresses
and letting through only unregistered, forged, and foreign addresses.
If, OTOH, the filter isn't biased w.r.t. country, I think that it is safe
to say that AT LEAST 90% of all viral spam originates outside of the US
and most of Western Europe (for shame, Italy!).  Given the probable bias
in the numbers of connected systems (surprisingly difficult to find
aggregate numbers, sorry -- Korea has the highest percentage of personal
ownership but the US has the raw numbers even before counting the
corporate machines) I think that it is fairly safe to conclude that --
unsurprisingly, really -- all of the problems associated with unmanaged
machines (virus infection, spambot infection, spyware, worms,
phishing, snooping, man-in-the-middle) are far, far worse in countries
other
than the US, Canada and Mexico (not a lot of Mexico, perhaps
surprisingly), most of Western Europe, Australia -- the "first world" of
yesteryear.

Personally, I wouldn't be surprised if Windows is sold pre-hacked in e.g.
China -- get your copy of stolen Windows with our own special spyware
and spambots preinstalled to use your network when you aren't, only a
dollar...;-)

The conclusion is that the antispam measures taken in this country are,
for the most part, actually working.  There are doubtless many
incidents, but if so they have a short lifetime and are relatively
quickly x'd out of the internet until they are resolved.  Antispyware
measures I can't address -- linux being mostly not vulnerable and easy
to audit, I've never had reason to think my machines are infected, and
Windows (even my own Windows systems) I truthfully don't give a rat's
ass about.  I'd believe any evil rumor until it was proven wrong by an
unbiased, expert, third party.  And hey, maybe it is true that there ARE
thirty million unwilling US machines in the Russian mafia
supercomputer that we kicked around (that's close to as many as there
are in all of Korea) just waiting to be turned to Evil Purposes, but
if true, I'll
bet that a tiny, tiny fraction of them run Linux.

>> when they do they almost NEVER succeed on a machine that is
>> professionally managed,
>
> The ISP seemed reasonably professional. Unfortunately they have to let
> their web hosting customers log in with passwords...

"Have to let"?

> If they can't use public key auth, give 'em secure ids or something
> similar. Works fine for such purposes. Passwords are dead.

Yeah, Bill Gates (among others) said something like that back in 2004.
I confess to being deeply skeptical.  Really.  The SecureID solution has
been around for a long time at this point.  It was a PITA a decade ago.
It is a PITA now.  Expensive, too.  And then, people have to
authenticate to so MANY things nowadays.  I have to authenticate to my
cell phone. To order pizza.  To do banking online.  To shop at X, Y or
Z.  Then there is logging onto systems I work on -- something that IS
possible for me without a password.  The problem there is that many of
the systems I'm logging in from are laptops (I have two personally,
about to make that three).  The laptops themselves then become a
security risk if they are stolen, so I tend to LEAVE passphrases on
the ssh keys for servers that are likely to have sensitive data on
them -- a cracker may get my keys, but they will likely not be able to
exploit them in the window before I change them, even if I don't
discover the theft for a day or a week.
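
In practice that just means never hitting return at the passphrase
prompt, and rotating on any scare -- something like:

    ssh-keygen -t rsa -b 2048         # pick a real passphrase when asked
    ssh-keygen -p -f ~/.ssh/id_rsa    # change the passphrase after a scare

(or just regenerate the pair and swap out authorized_keys on the
servers, which also covers the case where the thief got a copy before
you noticed).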

I personally have no good crystal ball feeling for where authentication
is going.  The password, flawed as it may be, has been flawed in
exactly the ways it is currently flawed for more or less forever --
the only difference is in how many characters you have to use to be
immune to brute force attacks, a number that Moore's law alas keeps
eating into.  All the
alternatives I've heard of, however, either a) don't scale, at least
without something really scary like an international personal keyserver
system -- oh, wait, that doesn't scale well and has its OWN set of
problems; or b) are horribly inconvenient and expensive; or both.  Like
SecureID.  Passwords scale well, give people immediate and direct
control, are cheap, are convenient.  I think that there will be
tremendous resistance to the point of just plain ignoring any attempt to
change, but I could be wrong.  We'll see.  Maybe something really new
will emerge.

If you think differently, please advise and explain.  Ideally with a
discussion of scaling and cost-benefit analysis.

>> Cracking happens.  Such is life.  Almost nothing you can do on an open
>> network with hundreds of users will completely prevent it, although if
>> you want to spend money like water you can significantly reduce it.
>
> You can make it rare enough not to worry much if you are willing to
> do fairly mundane things, but most people don't.

It IS rare enough not to worry much, except on consumer machines.
Humans are innately lazy, natural optimizing systems.  In any managed
network, the sysadmins expend precisely enough effort (time and/or
money, most of it other people's money when possible) to reduce the rate
of successful cracking to where a) it covers their ass with their
bosses; b) incidents are rare enough not to be a PAIN in their ass that
never stops; c) the effort to prevent still more successful attacks
starts to exceed the time saved by the prevented attacks.

Of course people don't usually think about it quite that way.  They just
do it.  Getting cracked all the time?  Boss getting mad?  Better up the
spending on security, learn how to stop the attacks, institute a rigid
policy against e.g. browsing the web from work or opening non-internal
attachments, invest in mail prescanning and filtering.  Maybe hire a
guru.  Never get attacked, everything fine (as far as you know)?  You
might be wrong, but as long as nobody complains and the work gets done
and as long as you can document that you're doing what you CAN do or
SHOULD be doing according to standard of practice, you don't really
care.  Or you do care, but don't think it is happening strongly enough
to spend a month of effort and lots of money to find out.

Cost-benefit is all that matters, not "security".  Probability of loss
times expected cost of loss, marginal cost of each preventive measure
balanced against it weighted by expected marginal savings from fewer
incidents.  There are nominal exceptions to this -- mostly legally
mandated ones -- but even there you simply add more costs (going to
jail, massive fines, lawsuits) to the same process.
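
In pseudo-actuarial terms (just a restatement of the paragraph above,
nothing more):

    expected annual loss = P(incident per year) x cost(incident)

    adopt a given measure only if

    marginal cost(measure) < reduction in P(incident) x cost(incident)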

I'm guessing that in professionally managed Unix/Linux operations --
like those maintained by most of the readers on this list -- most of the
managers DO a lot of those mundane things already.  Probably not
precisely the same mix that you use, but then there are many ways to
skin this particular cat, and variance is a good thing as it permits a
more dynamic adaptation to the targeted variations tried out by hackers
-- when one line of defense fails, there are others that the hacker may
not have managed to counter as effectively.  Linux "out of the box" is
far, far more secure these days than it used to be: closed by default
rather than open, selinux enabled, only sshd open if that, a nag to
install a prescreened root password and at least one prescreened,
password-protected account.  If one doesn't openly work to defeat its
default security (or try too much, too ignorantly), a "normal" user of
even an UNMANAGED (by a knowledgeable professional) box will not be
terribly easy to crack from outside, I think.  If they leave yum's
nightly update on (the default) for e.g. fedora or centos and the box is
left on, on a high speed internet connection (to maximally facilitate
access) and completely unattended I doubt that anybody could crack it
from the wire side in a year of trying.  It's not IMPOSSIBLE, of course
-- sshd can have its exploits too -- but again, the window would be open
for a very short time and it's a very boring box.
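
(For e.g. CentOS the "nightly update" is just the stock cron job;
IIRC turning it on where it isn't already on is something like the
following, though the details wander from release to release:

    chkconfig yum on    # gates the nightly /etc/cron.daily/yum.cron run
    service yum start

and Fedora has been moving the same function into yum-updatesd.)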

Not that I'd recommend this for a bank server or so on, where somebody
might be lurking and trying hard to open the door and just waiting for
an ssh exploit to give them the key -- of course not.  I'm just pointing
out that even a default numb-nut linux is pretty darn secure compared to
any other operating system that has ever existed to be installed by
individual users, and that in the hands of a professional the security
level is LIKELY to only go up.  Unless, of course, the professional
requirement is to install a wiki, a blog server, a mysql database, a
webserver, an NFS server, and six other open port applications on the
single server that (naturally) contains all the personal information of
all the participants, including their SSN and credit card numbers;-)

I >>know<< that there are people that are just this dumb out there.
The Duke Law School just permitted the SSNs of a bunch of law school
applicants to be stolen, not so much because of a failure in SYSTEM
security as because a staff person didn't realize that when you post
them to certain web-based group chat tools (shared by a committee)
they are publicly available and bound to be grazed, indexed, and
rendered instantly searchable overnight by webbots galore.  Even with
the best security model in the universe, it is hard to idiot-proof the
world (even by educating them so they are no longer idiots).

However, one has to think of it as an ecology, as you noted.  Evolution
in action.  Survival of the fittest, corollary death of the unfit.
Self-correcting system -- lots of negative feedback.  Even closing a
barn door after a horse gets away may well keep you from losing future
horses, but the loss of a horse doesn't always justify installing
GPS-trackers in all your horses and hiring personal nannies carrying
submachine guns to take them to water.  It MIGHT justify installing a
spring and latch on the door, and it definitely justifies publicly
buffeting the ears of the careless horse-barn-door-non-closer and
uttering in deep tones "Next time, close the damn door so the horse
doesn't get out!"

> It is fairly rare in the circles I travel in for people to use
> password based remote access. Hardware tokens and multi-factor auth
> took over years ago. I'm talking about systems with tens of thousands
> of users doing remote access, too.
>
>> I avoid passwords myself when I can and choose strong ones when I
>> can't and cross my fingers either way.  But a professional sysadmin
>> managing a corporate, university, private, public network almost
>> invariably has to support userid/password based access,
>
> Not really, no. Tokens are cheap for remote access.

I'll have to revisit this.  I do know that people use them.  There is
one in the house for a particular site.  My impression was a minimal
cost of tens of dollars per user on up, per site you want to access
this way, plus a whack for the server-side stuff.  Has that come down?
Or is that what you call "cheap"?

> Hardware tokens, and multi-factor auth. The tokens these days fit on a
> key ring. I know places with more users than you have and they're
> happy with the solution. It is reasonably economical. I also realize
> it won't happen on your network, but that's probably not because it is
> economically infeasible.

It hasn't HAPPENED on Duke's general network.  The last time I looked at
e.g. SecureID it was prohibitively expensive and a total PITA, but that
was years ago.  I know that they've reduced size and made the fobs fit
on keyrings, but it is still another piece of hardware to carry around
and track, times the number of places (distinct domains) you have to
access in this way.  Multifactor ties it to yet another piece of
hardware, doesn't it, e.g. a cell phone?  Both of these assume a strong,
centralized, organization wide authentication system, which is likely
reasonable for a corporation but rare at Universities.

Duke has kicked this around before (I helped do the kicking) but I don't
mess with enterprise security much anymore -- it is depressing and as
far as I'm concerned the enterprise solution starts by saying "use only
linux" anyway, and everybody else says NO, we have to use WINDOWS for
this application or that application, and the security game is more or
less lost before it properly begins.  Fortunately Duke doesn't MANDATE
the use of Microsoft products (even if Melinda Gates IS a Duke
graduate:-) and a lot of Duke -- nearly all of the sciences and
engineering, a lot of the students, a few other departments here and
there -- runs almost exclusively linux.

Which is why I keep saying -- we just don't have much of a problem here,
and it isn't because we are ignorant fools who are all cracked and don't
know it.  There are some very smart and extremely paranoid systems
people who work on campus, with enterprise-level tools looking for
problems.  Windows boxes get cracked all the time and become e.g.
suppurating wounds of warez and copyright violations.  They are
typically discovered, the systems admins responsible are informed (if
there are any -- Duke has a half-inside/half-outside wireless network
required to facilitate student access and lots and lots of students and
the vast majority of all campus incidents are in the dorms with student
run Windows boxes, with a lesser number coming out of departments with
minimal or shared windows management) and they are taken off the network
until they are fixed within hours of discovery.  Not quite minutes --
but then, we aren't really centrally managed, which gives us
considerable defensive robustness at the expense of less immediate
control.

It may be time to kick the fob idea around again.  I still think it
will end up costing Duke a million dollars a year, easy, to implement
it -- we're talking 30 to 50 thousand users of all classes, in a
multi-operating-system, distributed-management environment that LACKS
a flat userid space (scalability).  Issuing a fob per department one
has access rights in would be insane.  Duke DOES have a centralized
auth facility (used to authenticate a variety of access rights to
confidential material), and it would very likely be quite reasonable
to use it there, but I don't know if the linux folks would really
trust single-sign-on authentication for regular logins from this
facility, presuming the userid mapping problem could be or has been
solved.  IIRC linux used 16 bit uids until fairly recently, which is
an obvious and immediate problem without some degree of domain
segmentation and id translation.

>> IMO, we are quite possibly moving towards a "healthy world" on the
>> internet.  The problem we face is understandable, the linux solution is
>> remarkably robust (and could be and is being made even more so).
>
> I have my doubts. The problem appears to be getting much worse with
> time from where I stand. I probably see more horror on a regular basis
> than you do, though.

It sounds like it;-)

I hope you don't mind my debating with you and disagreeing on some of
the things you say, by the way.  I'm not trying to flame or fight a war
to prove I'm right, I'm picking your brains (in part, by seeing how you
refute some of the things I say, in part by just listening to them).
And I'm guessing others on the list are interested as well -- it may not
be "specifically" cluster oriented, but clusters are nearly invariably
openly accessible through at least one portal, and represent a
potentially valuable resource once one gets through the portal(s) even
if they don't contain valuable data per se (and sometimes they do).

You actually sound like precisely the kind of wild-eyed paranoid that
can be extremely valuable to any organization that is concerned about
enterprise level security.  It sounds like you have a security
consulting business.  Is that what you do?  I may have a potential
client for you if so, contact me offline.

The one thing I haven't heard you address is the cost-benefit associated
with any particular set of security measures, especially on a broad
basis.  For example, as I noted, Duke is quite heterogeneous, and I
personally would have to declare jihad on anyone that recommended that
we adopt a central management scheme as the first step towards
enterprise security for reasons too numerous to mention (but ones that
if you've been around a while I don't HAVE to mention as they are common
to many enterprises:-).

CBA requires numbers, numbers that justify your paranoia.  In a
"typical" university department (or corporate department, or small
business, or whatever) running linux, how many successful attacks are
there per registered IP number (one exposed to the whole internet) or
whatever other metric you like?  How many of the attacks succeed
because of e.g. open ports with buffer overflow attacks, or because of
sniffed/stolen passwords (in my anecdotal experience the number one
point of entry on linux boxes)?  How many root promotions succeed, how
many rootkits are detected on a boot from a USB key followed by a
security scan, etc.?  What is the
cost (average, estimated, whatever) of these incidents, including
detection and repair?  What is the cost (real, potential, whatever) of
not discovering them?  And so on.

So far as my own experience is concerned, I've seen a fair number of
attacks that succeeded over 20+ years through many channels, with stolen
passwords overwhelmingly dominating over that time (although the problem
was tremendously ameliorated when Duke went to ssh-only or ssl-only --
bidirectional encryption including the authentication sequence -- for
remote access across the entire campus as rigid policy).  That dropped
the problem from several a year to maybe one every few years.  The
worst incident I recall in our own department involved an exploit of the
portmap maybe a decade ago -- this was pre-yum.  Our sysadmin had quit
to take a new job, I had taken over (again) because I could while we
searched for a new admin (that would turn out to be Seth Vidal,
hooray:-), and he'd left me with four unpatched systems that got nailed
and rootkitted.

This worst-case crack -- for our scale of operation -- took me two days
to discover, deconstruct across the department, and clean up.  Maybe
another week of intense paranoia as I looked for leftovers, forced user
password changes, and multiply scanned the servers (which were not
cracked).  Root promotion on our clients didn't really get you much
unless it was on my system, and I may not sound like it but I'm a bit
paranoid too.  I've personally been cracked twice, lifetime, to my
knowledge -- the first time by the TRULY ancient emacs bug and it wasn't
my fault or on my system -- I actually caught the perp, tracked him back
home (he was at Duke), contacted his sysadmin (a friend of mine) and got
his wrist SOLIDLY whacked, by golly, but they wouldn't kick out a
computer science grad student even for nefarious evil, sigh.  The second
time was my own sloppiness on my home network -- I ran an externally
available webserver on my DSL line just for the fun of it, and failed to
update apache and got slammed.  But then yum came along and took the
guesswork out of staying patched.

Now, two days of my time -- or even a week -- cost Duke a truly pitiful
amount (I'm embarrassed to say:-).  Most cracks take even less time to
resolve.  A server crack might cost more, but we've never had one in our
department, and the ones I've helped deconstruct in other departments
WERE more expensive (in part because they spent as much as a week of my
time then, along with a couple of other people's time to boot:-) but
we're still talking a few thousand in opportunity cost time.  Damn near
"free" in real dollars, in other words.

The ante has gone up with the stricter security requirements and legal
liability issues of modern times, but at the department level we don't
have too much exposure, especially since we can easily demonstrate due
diligence and then some.  So the CBA as >>I<< see it is that using
SecureID for departmental access would cost some thousands of dollars
for the initial setup and hardware, some thousands of dollars of
"effort" to get it all going, and some (hundreds?) of dollars in ongoing
hardware replacement and opportunity-cost labor for maintenance a year.

This would save us (assuming it prevented 100% of all successful
cracks) a few hundred dollars in expended opportunity-cost labor a
year,
based on historical costs.  Plus, of course, an empirically small chance
of a much more expensive cracking incident that penetrated our servers
or caused real losses somehow (security rule number one being "keep good
backups", hopefully making that probability rather small).

So I'm still not seeing it.  Every user is also inconvenienced to some
degree by the system, having to carry their fob with them (and
inevitably losing or forgetting it or breaking it), which is a
nontrivial and poorly scaling cost right there.
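
To make the shape of it concrete with round numbers (invented, but of
the order suggested above):

    cost:   ~$5000 setup + hardware, one time
            ~$500/year in replacement fobs and opportunity-cost labor
    save:   ~$300/year in historical incident-cleanup labor, even
            crediting the fobs with stopping 100% of cracks

    net:    negative every single year, before counting the one-time
            setup or the lost-fob aggravation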

If empirically this changed -- if cracking rates due to stolen
passwords went up -- we wouldn't have to be TOLD to implement
something more expensive to reduce the rate.  We'd do it just because
at that time it would make sense to do it.  IMO, the appropriate level
and expenditure of security is particular to each individual network
and group of users; it isn't just "everybody should use SecureID" or
any other particular measure.

> For myself, I personally am too paranoid to use a keyboard I've left 
> out of my control for more than a trivial amount of time. I use ssh 
> with public key auth only.

> I'm a believer in a different kind of firewall -- the kind that blocks
> everything except the small number of things you know you need to let
> through. One wants a firewall, not a firesieve... :)

Amen, brother!  Give us an Amen, everybody!  Amen!

Drill one hole through on port 5002 or the like that leads straight to
the sshd and is otherwise used for a moribund or little used application
(or nothing at all).  Instant peace and quiet.  Block all access to
internal (LAN) ports like NFS from the outside, and keep a sucker rod
handy for anybody that messes with it on the inside.  Ditto for e.g.
webservers.  If possible, put them on the external network.  Otherwise,
drill a hole.  If possible, virtualize.  If not, be prepared to be
vigilant(er) and paranoid(er).  Add ports and measures and costs as
makes sense, with the maxim being less exposure is always better, and
ssh tunnels ports so why would anybody need anything more?  (Not quite
true or reasonable, of course, but true for experts, anyway...:-)
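
Concretely, the one-hole recipe is a couple of lines of sshd_config
and iptables -- a sketch only, since 5002 is just the example port
above and a real ruleset would want icmp and whatever else your LAN
actually needs:

    # /etc/ssh/sshd_config
    Port 5002
    PasswordAuthentication no    # if your users can live on keys alone

    # default-deny inbound, one hole
    iptables -P INPUT DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p tcp --dport 5002 -j ACCEPT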

>> I think that our problem is that I have been prepending the word LINUX 
>> mentally to our discussions.  LINUX networks are not so commonly set up 
>> by people who know nothing.

> Ubuntu is rapidly helping with that.  :) 
> 
> 
> Perry

I actually don't agree that linux would prove anywhere nearly as
vulnerable as Windows has been historically even if they switched market
share tomorrow, and Ubuntu was the only version of linux used.  After
all, as YOU pointed out, MS left Explorer unpatched for 9 months.  NINE
MONTHS!  Say what?

Find a similar exploit in (say) Firefox.  Now MAYBE it wouldn't, in
fact, be found on Wednesday and get patched everywhere by Friday.  It
MIGHT take to the following Monday, or even the following Wednesday.
But who seriously thinks that it would take nine months?

There are reasons why one would expect Linux to respond optimally
rapidly for pretty much any exploit -- literally as rapidly as it is
possible to respond.  A huge base of world class programmers who use
it, for example, many of whom would fix the damn bug themselves if
they relied on the tool or service and it didn't get fixed in a timely
way by others.  The fact that the code is right there and open for
scrutiny.  The fact that many of those world class programmers and
system and network administrators sleep with a pistol under their
pillows and sit with their backs to a wall in restaurants (and require
occasional medication) as an external expression of their paranoia,
ensuring that it is actually not that likely that a broad exploit
would remain undetected for long.  I'll bet that there are plenty of
places that maintain sentinel sites and externally monitor the hell
out of them, a thing that VMware makes possible with a SINGLE SYSTEM:
you can actually run a network server in one VM, run its traffic
monitor in another, and offer no ports at all in the toplevel OS.

Virtualization is going to change everything, by the way.  Very, very
soon.  I predict that the standard linux install will be:

Linux (VM manager, also monitor VM).  Think "this is my expert system
monitor" that does absolutely nothing but watch network traffic patterns
from slave VMs and manage access to resources as requested.
  |
  |- Linux (network services VM): completely disjoint from trusted
  |  space, chroot on serious steroids, quite possibly a read-only VM
  |  image, and certainly snapshotted for instant restore to a known
  |  clean state.
  |
  |- Linux (userspace VM): accesses network services as a trusted host
  |  on a completely imaginary internal network with one-way trust.
  |  No way back to this VM from the services VM, and lots of bells
  |  that ring on an attempt.
  |
  |- Linux (optional/additional VMs): even a Windows VM.

A Windows VM is the only way I run Windows these days, when >>I<< have
a choice.  It is lovely.  Boot by clicking a VMware button.  Do what I
need to do.  Shut
the sucker down.  Decide whether or not to snapshot it, or update it, or
just leave it frozen (annoying, actually, as it fails to update and
write back to the system so it always complains about being out of
date).  Use Explorer only if your life depends on it -- firefox or
galeon is Ctrl-Alt and a couple of clicks away.  In fact, use ANY
software under Windows only if life or money depend on it.  And who
cares -- if it is cracked as all hell, you can easily see it on the
toplevel/router system by tracking its connections, you can easily fix
it by backing up to the clean original image, and if you limit its
access to userspace to a tiny window on essential shared storage, you
leave virtually no probability of a windows exploit opening a path back
to your linux workspace.

If you are truly paranoid, I'd recommend giving this a try.  It's easy
to set up, costs you a tiny bit of performance (truly irrelevant on most
multicores), and lets you set up and run your very own Raving Paranoid
Gateway network traffic monitor on the toplevel linux install and
otherwise leave anything from absolutely nothing open to sshd on any
linux level or windows level you choose.

It's really, really hard to crack a site invisibly when every IP number
that talks to it or that it talks to is isolated in real time and
compared to a list you set up and control and sets off all sorts of
alarms if any sort of anomalous or unapproved pattern occurs.
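
Even without VMware, a few lines of shell make a poor man's version of
that alarm on the monitor host.  A sketch: /etc/approved-peers (one
address per line) is a made-up name, and a real version would want
rate limiting and a proper log watcher:

    while sleep 60; do
      netstat -ntu | awk 'NR > 2 { split($5, a, ":"); print a[1] }' \
        | sort -u \
        | grep -vxF -f /etc/approved-peers \
        | while read ip; do
            logger -p authpriv.alert "unapproved peer: $ip"
          done
    done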

In fact, this isn't a bad setup for a cluster gateway system.  There.
It's even OT...;-)

     rgb

-- 
Robert G. Brown                            Phone(cell): 1-919-280-8443
Duke University Physics Dept, Box 90305
Durham, N.C. 27708-0305
Web: http://www.phy.duke.edu/~rgb
Book of Lilith Website: http://www.phy.duke.edu/~rgb/Lilith/Lilith.php
Lulu Bookstore: http://stores.lulu.com/store.php?fAcctID=877977


