
Date: 	Wed, 28 Feb 2001 18:06:37 -0500 (EST)
From: Daniel Ridge <newt at>
X-Sender: newt at
To: beowulf at
cc: Linux Kernel Mailing List <linux-kernel at>
Subject: Re: Will Mosix go into the standard kernel?
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Precedence: bulk
X-Mailing-List: 	linux-kernel at
Sender: beowulf-admin at
Errors-To: beowulf-admin at
X-Mailman-Version: 1.1
List-Id: Discussion of topics related to Beowulf clusters
X-BeenThere: beowulf at

Fellow Beowulfers,

I have yet to hear a compelling argument about why any of these clustering
systems should go into the standard kernel -- let alone a particular one.
The Scyld system is based on BProc, which requires only a 1K patch to the
kernel. This patch adds 339 net lines to the kernel and changes 38 existing
lines. The Scyld 2-kernel-monte in-place kernel reboot facility is a
600-line module which doesn't require any patches whatsoever.

Compare this total volume to the thousands of lines of patches that RedHat
or VA add to their kernel RPMs before shipping. I just don't see the value
in fighting about what clustering should 'mean' or in picking winners when
it's just not a real problem.

Scyld is shipping a for-real commercial product based on BProc,
2-kernel-monte, and our better-than-stock implementation of LFS, and we're
not losing any sleep over this issue.
I think we should instead focus our collective will on removing things
from the kernel. For years, projects like ALSA, pcmcia-cs, and VMware have
done an outstanding job sans 'inclusion', and we should more frequently
have the courage to do the same. RedHat and other Linux vendors have ably
demonstrated that they know how to build and package systems that draw
together these components in an essentially reasonable way.

	Dan Ridge
	Scyld Computing Corporation

On Tue, 27 Feb 2001, Rik van Riel wrote:
> On Tue, 27 Feb 2001, David L. Nicol wrote:

> > I've thought that it would be good to break up the different
> > clustering frills -- node identification, process migration,
> > process hosting, distributed memory, yadda yadda blah, into
> > separate bite-sized portions.
> It would also be good to share parts of the infrastructure
> between the different clustering architectures ...
> > Is there a good list to discuss this on?  Is this the list?
> > Which pieces of clustering-scheme patches would be good to have?
> I know each of the cluster projects have mailing lists, but
> I've never heard of a list where the different projects come
> together to eg. find out which parts of the infrastructure
> they could share, or ...
> Since I agree with you that we need such a place, I've just
> created a mailing list:
> 	linux-cluster at
> To subscribe to the list, send an email with the text
> "subscribe linux-cluster" to:
> 	majordomo at
> I hope that we'll be able to split out some infrastructure
> stuff from the different cluster projects and we'll be able
> to put cluster support into the kernel in such a way that
> we won't have to make the choice which of the N+1 cluster
> projects should make it into the kernel...
> regards,
> Rik
> --
> Linux MM bugzilla:
> Virtual memory is like a game you can't win;
> However, without VM there's truly nothing to lose...
> _______________________________________________
> Beowulf mailing list, Beowulf at
> To change your subscription (digest mode or unsubscribe) 

To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo at
More majordomo info at
Please read the FAQ at
