I know I'm taking my own thread off topic, but I just installed BOINC, and it is nice to see some of the projects using CUDA now :). Now I just have to deal with the heat that all this computation generates. I'm also impressed with the performance of my processor.<br>
<br><div class="gmail_quote">On Wed, Apr 22, 2009 at 12:36 AM, Gus Correa <span dir="ltr"><<a href="mailto:gus@ldeo.columbia.edu">gus@ldeo.columbia.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div><div></div><div class="h5">Glen Beane wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<br>
<br>
On 4/21/09 3:46 PM, "Jonathan Aquilina" <<a href="mailto:eagles051387@gmail.com" target="_blank">eagles051387@gmail.com</a>> wrote:<br>
<br>
is it possible to have a single multicored machine as a cluster?<br>
<br>
<br>
<br>
That wouldn’t exactly be a cluster, would it? But you can certainly run a lot of the software associated with Beowulf clusters: a batch system (TORQUE, SGE, etc.), MPI, ... so in practice you can use your 8-core workstation much as you would a cluster.<br>
<br>
<br>
-- <br>
Glen L. Beane<br>
Software Engineer<br>
The Jackson Laboratory<br>
Phone (207) 288-6153<br>
<br>
<br>
</blockquote>
<br></div></div>
Hi Jonathan, Glen, list<br>
<br>
Along the lines Glen pointed out,<br>
I set up a dual-socket, dual-core workstation here with OpenMPI<br>
and MPICH2, plus Torque, to run some of our atmosphere modeling code<br>
in batch mode.<br>
It is not really a cluster, but a workstation<br>
with some software characteristic of a cluster.<br>
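As an illustration, a single-node Torque job script for such a setup might look like the sketch below; the job name, core count, and "atmos_model" executable are all assumptions for illustration, not the actual configuration described here.

```shell
#!/bin/sh
#PBS -N atmos_year
#PBS -l nodes=1:ppn=4
#PBS -j oe
# Hypothetical Torque job script for a 4-core MPI run on one workstation.
# "atmos_model" is a stand-in name for the real model executable.

cd "$PBS_O_WORKDIR"
mpirun -np 4 ./atmos_model
```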
<br>
We tend to have long series of atmospheric model runs, where<br>
each one-year simulation restarts from the state saved by<br>
the previous run.<br>
Each run can take, say, half a day to complete,<br>
and the whole series may take a week to a month to finish.<br>
Queuing the jobs up on Torque/PBS,<br>
and forgetting about them until the whole series is done is<br>
very convenient.<br>
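One way to queue such a restart series is Torque's job-dependency mechanism (`qsub -W depend=afterok:<jobid>`), which starts a job only after the named job exits successfully. The sketch below only *builds* the qsub command lines rather than executing them, so the chaining pattern is visible without a live server; the script name and job IDs are hypothetical.

```shell
#!/bin/sh
# Sketch of chaining one-year runs so each starts only after the
# previous job exits cleanly (Torque/PBS "afterok" dependency).
# In real use you would run the command and capture the job ID
# that qsub prints, then pass it to the next submission.

submit_cmd() {
    # $1 = job script, $2 = previous job ID ("" for the first run)
    script=$1
    prev=$2
    if [ -z "$prev" ]; then
        echo "qsub $script"
    else
        echo "qsub -W depend=afterok:$prev $script"
    fi
}

# First run, then two dependent restarts (job IDs are illustrative).
submit_cmd run_year.pbs ""
submit_cmd run_year.pbs 101.server
submit_cmd run_year.pbs 102.server
```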
<br>
This setup works fine as long as the workstation is relatively idle.<br>
However, if/when the owner decides to run heavy data-analysis<br>
MATLAB scripts interactively while the MPI jobs are running,<br>
we run into memory contention, swapping, and all the other bad things<br>
that kill performance and may even break MPI jobs.<br>
This "time-shared" interactive activity, which is typically absent on cluster nodes, is inherent to workstations.<br>
<br>
Fortunately, I was able to convince the workstation owner (who also wants<br>
the output of the atmospheric model runs) to do heavy interactive<br>
work only when there are no jobs in the Torque queue,<br>
or to suspend the job queue, wait for running jobs to complete,<br>
work interactively, and then restart the queue.<br>
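The suspend/resume policy maps onto Torque's `qstop` and `qstart` queue controls (stop dispatching new jobs, let running ones drain, then resume). The sketch below is hypothetical: the queue name "batch" is an assumption, and with the default DRYRUN=1 it only prints what it would do, so it is safe to run without a Torque server.

```shell
#!/bin/sh
# Sketch of the "suspend, drain, work, resume" policy with Torque
# queue controls. QUEUE name "batch" is an assumption.
# DRYRUN=1 (the default) prints the commands instead of running them.

QUEUE=${QUEUE:-batch}
DRYRUN=${DRYRUN:-1}

run() {
    if [ "$DRYRUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# 1. Stop the queue: running jobs continue, but nothing new is dispatched.
run qstop "$QUEUE"
# 2. Wait for running jobs to drain (in real use, poll qstat).
# 3. ...do the heavy interactive work...
# 4. Restart the queue so batch jobs resume.
run qstart "$QUEUE"
```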
<br>
Other kinds of heavy interactive use (e.g. streaming video)<br>
can have the same negative impact on MPI jobs,<br>
so you may need to adopt a similar<br>
policy to avoid conflict between interactive and batch use<br>
on your workstation, if you set it up "as a cluster".<br>
<br>
My two cents.<br><font color="#888888">
<br>
Gus Correa<br>
---------------------------------------------------------------------<br>
Gustavo Correa<br>
Lamont-Doherty Earth Observatory - Columbia University<br>
Palisades, NY, 10964-8000 - USA<br>
---------------------------------------------------------------------</font><div><div></div><div class="h5"><br>
_______________________________________________<br>
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" target="_blank">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br>
To change your subscription (digest mode or unsubscribe) visit <a href="http://www.beowulf.org/mailman/listinfo/beowulf" target="_blank">http://www.beowulf.org/mailman/listinfo/beowulf</a><br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br>Jonathan Aquilina<br>