Subject: RE: [Beowulf] openMosix ending

IMHO you don't need dynamic migration for embarrassingly parallel applications, as each job can
be launched directly on any available compute node and run there to completion. A simple queue
system / scheduler like Torque is enough to make sure no node runs more jobs than it has CPUs at
any one time, which is what you want for best throughput. Just throw your 100 parametrized runs
into the queue, and the head node / scheduler will keep all available nodes busy until all the
work is done.

The hierarchical approach of the classical Beowulf works just fine for that.
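
For example, a minimal submission sketch, assuming a Torque/PBS head node with qsub on the PATH;
the ./simulate binary and its --param flag are just placeholders for whatever your parametrized
run actually is:

#!/usr/bin/env python3
# Submit 100 parametrized runs to a Torque/PBS queue via qsub.
# Assumes qsub is installed and on the PATH; ./simulate --param N is a
# hypothetical stand-in for the real per-parameter job.
import subprocess

for param in range(100):
    # qsub reads the job script from stdin when no script file is given;
    # PBS starts each job in $HOME, so cd back to the submission directory.
    job = "cd $PBS_O_WORKDIR\n./simulate --param %d\n" % param
    subprocess.run(["qsub",
                    "-N", "run%03d" % param,   # job name shown in qstat
                    "-l", "nodes=1:ppn=1"],    # one CPU slot per run
                   input=job, text=True, check=True)

A shell loop around qsub does the same thing; either way it is the scheduler, not process
migration, that keeps the nodes busy.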

Michael Will
Sr. Cluster Engineer
Penguin Computing
-----Original Message-----
From: beowulf-bounces@beowulf.org on behalf of Tony Travis
Sent: Tue 7/17/2007 8:03 AM
To: beowulf@beowulf.org
Subject: Re: [Beowulf] openMosix ending

Robert G. Brown wrote:
> On Mon, 16 Jul 2007, Jeffrey B. Layton wrote:
>
>> Afternoon all,
>>
>> I don't know how many people this affects, but I thought it was
>> worth posting in case people are using openMosix. The
>> leader of openMosix, Moshe Bar, has announced that the
>> openMosix project is ending.
>>
>> http://sourceforge.net/forum/forum.php?forum_id=715406
>>
>> While I haven't used openMosix, I've seen it used and it is
>> pretty cool to see processes move around nodes.
>
> Yeah, but it has nearly always had a few tragic flaws. One was that it
> was always basically a hack of a specific kernel version and image,
> meaning that if you used it you were outside of a working kernel update
> stream. The second was that it lived in kernel space at all, where one
> really would prefer a tool that did the same thing outside of kernel
> space (like Condor, for example). It survived those flaws, of course --
> but it cannot survive the advent of virtualization, which will provide
> new pathways for this sort of thing to be done with far greater ease
> and stability.

Hello, Robert.

I've been using openMosix for a long time, and you're right about the
kernel 'trap' it puts you into. I recently 'ported' linux-2.4.26-om1 to
Ubuntu. Although I've succeeded in getting our 92-node Beowulf up and
running openMosix under Ubuntu 6.06.1 LTS, the end-of-life announcement
means I have to start thinking about replacing it.

Do you really think that Condor is an alternative to openMosix?

I don't know much about Condor, but I thought it was a DRM (Distributed
Resource Manager) like SGE. Is it more than that?

The great thing about openMosix is that most 'ordinary' programs
migrate. I've thought about using openSSI before: what's your opinion
of it for 'embarrassingly' parallel computation?

Best wishes,

Tony.
--
Dr. A.J.Travis,                  | mailto:ajt@rri.sari.ac.uk
Rowett Research Institute,       | http://www.rri.sari.ac.uk/~ajt
Greenburn Road, Bucksburn,       | phone: +44 (0)1224 712751
Aberdeen AB21 9SB, Scotland, UK. | fax:   +44 (0)1224 716687
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf