<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
Schoenefeld, Keith wrote:
<blockquote
cite="mid:5E0BB54BEC5EBA44B373175A080E64010A7948BF@ophelia.ad.utulsa.edu"
type="cite">
<pre wrap="">This definitely looked promising, but unfortunately it didn't work. I
both added the appropriate export lines to my qsub file, and then when
that didn't work I checked the mvapich.conf file and confirmed that the
processor affinity was disabled. I wonder if I can turn it on and make
it work, but unfortunately the cluster is full at the moment, so I can't
test it.
</pre>
</blockquote>
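<br>
(For reference, "export lines in a qsub file" would normally mean
something like the sketch below. This is only a sketch: the PE name
"mpi" and the $TMPDIR/machines host file are hypothetical placeholders
for whatever the local parallel environment provides; only the affinity
variables and the a.out/-hostfile usage come from this thread.)<br>
<br>
<pre wrap="">#!/bin/bash
# Hypothetical SGE job script fragment -- PE name and machine file are
# placeholders for the local setup.
#$ -pe mpi 12
export VIADEV_ENABLE_AFFINITY=0
export VIADEV_USE_AFFINITY=0
mpirun -np $NSLOTS -hostfile $TMPDIR/machines ./a.out
</pre>
Note that variables exported in the job script reach the shell that runs
mpirun, but may not automatically reach the MPI tasks started on the
other nodes.<br>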
<br>
You may want to verify that the environment variable was actually
passed down to the MPI task. To set environment variables for MPI
jobs, I usually either specify the variable on the mpirun command line
or set it in a wrapper script:<br>
<br>
mpirun -np 32 -hostfile nodes VIADEV_ENABLE_AFFINITY=0 a.out<br>
<br>
mpirun -np 32 -hostfile nodes run.sh a.out<br>
<br>
where run.sh sets up the local environment, including environment
variables, and then launches the program passed to it (a.out in this
example).<br>
<br>
The second method is more portable across different shells and MPI
versions.<br>
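<br>
For example, a minimal run.sh might look something like the sketch
below (only a sketch, assuming a bash environment and that
VIADEV_ENABLE_AFFINITY is the variable you need to set):<br>
<br>
<pre wrap="">#!/bin/bash
# run.sh -- set the per-task environment, then launch the real program
# that mpirun passed as arguments (a.out in the example above).
export VIADEV_ENABLE_AFFINITY=0

# Optional: print the value from each task so you can confirm the
# setting actually made it to the remote nodes.
echo "$(hostname): VIADEV_ENABLE_AFFINITY=$VIADEV_ENABLE_AFFINITY"

exec "$@"
</pre>
Because the script runs wherever a task starts, the echo line is also a
quick way to verify that the variable really was passed down to the MPI
tasks.<br>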
<br>
Shannon<br>
<br>
<blockquote
cite="mid:5E0BB54BEC5EBA44B373175A080E64010A7948BF@ophelia.ad.utulsa.edu"
type="cite">
<pre wrap="">
-- KS
-----Original Message-----
From: Shannon V. Davidson [<a class="moz-txt-link-freetext" href="mailto:svdavidson@charter.net">mailto:svdavidson@charter.net</a>]
Sent: Wednesday, July 23, 2008 4:02 PM
To: Schoenefeld, Keith
Cc: <a class="moz-txt-link-abbreviated" href="mailto:beowulf@beowulf.org">beowulf@beowulf.org</a>
Subject: Re: [Beowulf] Strange SGE scheduling problem
Schoenefeld, Keith wrote:
</pre>
<blockquote type="cite">
<pre wrap="">My cluster has 8 slots (cores)/node in the form of two quad-core
processors. Only recently we've started running jobs on it that
</pre>
</blockquote>
<pre wrap=""><!---->require
</pre>
<blockquote type="cite">
<pre wrap="">12 slots. We've noticed significant speed problems running multiple
</pre>
</blockquote>
<pre wrap=""><!---->12
</pre>
<blockquote type="cite">
<pre wrap="">slot jobs, and quickly discovered that the node that was running 4
</pre>
</blockquote>
<pre wrap=""><!---->slots
</pre>
<blockquote type="cite">
<pre wrap="">on one job and 4 slots on another job was running both jobs on the
</pre>
</blockquote>
<pre wrap=""><!---->same
</pre>
<blockquote type="cite">
<pre wrap="">processor cores (i.e. both job1 and job2 were running on CPU's #0-#3,
and the CPUs #4-#7 were left idling. The result is that the jobs were
competing for time on half the processors that were available.
In addition, a 4 slot job started well after the 12 slot job has
</pre>
</blockquote>
<pre wrap=""><!---->ramped
</pre>
<blockquote type="cite">
<pre wrap="">up results in the same problem (both the 12 slot job and the four slot
job get assigned to the same slots on a given node).
Any insight as to what is occurring here and how I could prevent it
</pre>
</blockquote>
<pre wrap=""><!---->from
</pre>
<blockquote type="cite">
<pre wrap="">happening? We were are using SGE + mvapich 1.0 and a PE that has the
$fill_up allocation rule.
I have also posted this question to the <a class="moz-txt-link-abbreviated" href="mailto:hpc_training-l@georgetown.edu">hpc_training-l@georgetown.edu</a>
mailing list, so my apologies for people who get this email multiple
times.
Any insight as to what is occurring here and how I could prevent it
</pre>
</blockquote>
<pre wrap=""><!---->from
</pre>
<blockquote type="cite">
<pre wrap="">happening? We were are using SGE + mvapich 1.0 and a PE that has the
$fill_up allocation rule.
</pre>
</blockquote>
<pre wrap=""><!---->
This sounds like MVAPICH is assigning your MPI tasks to your CPUs
starting with CPU#0. If you are going to run multiple MVAPICH jobs on
the same host, turn off CPU affinity by starting the MPI tasks with the
environment variables VIADEV_USE_AFFINITY=0 and VIADEV_ENABLE_AFFINITY=0.
Cheers,
Shannon
</pre>
<blockquote type="cite">
<pre wrap="">Any help is appreciated.
-- KS
_______________________________________________
Beowulf mailing list, <a class="moz-txt-link-abbreviated" href="mailto:Beowulf@beowulf.org">Beowulf@beowulf.org</a>
To change your subscription (digest mode or unsubscribe) visit
</pre>
</blockquote>
<pre wrap=""><!----><a class="moz-txt-link-freetext" href="http://www.beowulf.org/mailman/listinfo/beowulf">http://www.beowulf.org/mailman/listinfo/beowulf</a>
</pre>
<blockquote type="cite">
<pre wrap="">
</pre>
</blockquote>
<pre wrap=""><!---->
</pre>
</blockquote>
<br>
</body>
</html>