[Beowulf] Do these SGE features exist in Torque?
Glen Beane
glen.beane at jax.org
Mon May 12 10:45:09 PDT 2008
On May 12, 2008, at 1:11 PM, Reuti wrote:
> Am 12.05.2008 um 18:01 schrieb Craig Tierney:
>
>> Reuti wrote:
>>> Hiho,
>>> Am 12.05.2008 um 15:14 schrieb Prentice Bisbal:
>>>>>> It's still an RFE in SGE to request an arbitrary combination
>>>>>> of resources. E.g., if one job needs 1 host with big I/O, 2
>>>>>> with huge memory, and 3 "standard" nodes, in Torque you could
>>>>>> request:
>>> -l nodes=1:big_io+2:mem+3:standard
>>> (Although this syntax has its own pitfalls: -l nodes=4:ppn=1
>>> might still allocate 2 or more slots on a node AFAICS in my tests.)
>>
>> You mean the syntax has its pitfalls in Torque,
>
> How Torque implements it for now: with ppn=1 I request one core per
> node, but I might end up with a different allocation.
...
> But requesting "-l nodes=4:ppn=2" could end up with an allocation
> of 4+2+2.
With TORQUE this would be a Maui or Moab setting. By default these
schedulers will "reinterpret" a node request to try to improve
scheduling efficiency. It is possible to configure Maui and Moab to
match the node request exactly, rather than just find an equivalent
number of CPUs (in Maui, the JOBNODEMATCHPOLICY EXACTNODE setting).
What I would like to see added to TORQUE is a -l ncpus=X (or
ncores=X, which I guess would be more accurate now that most clusters
use multi-core CPUs), where you specify a number of CPUs and let the
scheduler decide where to get them, while still allowing
-l nodes=X:ppn=Y for users who want to control the exact number of
nodes their job runs on.
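As a sketch, the two request styles might look like this in a job
script. The nodes form is standard TORQUE syntax; the ncpus form is
the hypothetical addition wished for above, not something TORQUE
accepts today:

```shell
#!/bin/bash
# Standard TORQUE request: exactly 4 nodes with 2 processors each
# (subject to the Maui/Moab "reinterpretation" discussed above)
#PBS -l nodes=4:ppn=2

# Hypothetical alternative (NOT actual TORQUE syntax): ask for 8
# cores total and let the scheduler decide where to place them
##PBS -l ncpus=8

# $PBS_NODEFILE lists one hostname per allocated processor, so
# counting repeats per host shows how the request was really
# satisfied (e.g. whether ppn=2 became a 4+2+2 layout)
sort "$PBS_NODEFILE" | uniq -c
```

Checking `$PBS_NODEFILE` this way is a quick test of whether the
scheduler honored the node layout or only the total CPU count.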
--
Glen L. Beane
Software Engineer
The Jackson Laboratory
Phone (207) 288-6153