[Beowulf] Maui and PBS

Brian Smith brian at cypher.acomp.usf.edu
Mon Jun 7 07:50:23 PDT 2004


Hey guys,

I know that this is a question for the maui list, but no one there seems 
willing or able to answer it.  Here is the e-mail I sent to the maui list.  
Any help at all will be greatly appreciated.

Brian

########################

Hi all,

I received some help with this problem before, but it's been a while, so I 
will restate it.

I am trying to set up Maui alongside Torque so that it essentially divides 
the resources of the cluster between two research groups.  We have 42 nodes 
and 84 processors.  I'd like the first 21 nodes to go to the first group and 
the last 21 nodes to go to the second group.
Both groups should be able to run jobs on ALL nodes.  However, if the 
first group is running jobs on the second group's nodes and the second 
group needs to run a job on its own nodes, the second group's job 
should preempt the first group's job so that the second group has 
priority over those nodes, and vice versa.
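
To make that concrete, here is roughly the behavior I'm after, in terms of 
the 4-node test cluster configured below (the bsg group owns tsn001/tsn002 
and the acg group owns tsn003/tsn004); the job sizes are just illustrative:

  # an acg job fills the whole cluster, including bsg's nodes
  acg user:  qsub -l nodes=4,walltime=01:00:00 job.sh  -> runs on tsn001-tsn004
  # a bsg job then arrives that needs bsg's own nodes
  bsg user:  qsub -l nodes=2,walltime=01:00:00 job.sh  -> should preempt the acg
                                                          job on tsn001/tsn002
  # the preempted acg job should be requeued and restarted once processors
  # free up again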

The docs aren't all that helpful in dealing with situations like this.

Here is what I have so far (btw, this is the config for a 4-node test 
cluster):

/usr/spool/maui/maui.cfg
*******************
SERVERHOST            wyrd.acomp.usf.edu

ADMIN1                root dan brs
ADMIN2                alfredo bspace

RMCFG[base]           TYPE=PBS
RMPOLLINTERVAL        00:00:05

SERVERPORT            42559
SERVERMODE            NORMAL

LOGFILE               maui.log
LOGFILEMAXSIZE        10000000
LOGLEVEL              3

QUEUETIMEWEIGHT       1
BACKFILLPOLICY        FIRSTFIT
RESERVATIONPOLICY     CURRENTHIGHEST

GROUPCFG[bsg]         QDEF=bsqos
GROUPCFG[acg]         QDEF=acqos

SRCFG[bs]             OWNER=QOS:bsqos
SRCFG[bs]             FLAGS=OWNERPREEMPT
SRCFG[bs]             HOSTLIST=tsn001,tsn002
SRCFG[bs]             PERIOD=INFINITY
SRCFG[bs]             QOSLIST=bsqos,acqos-
SRCFG[bs]             GROUPLIST=bsg

SRCFG[ac]             OWNER=QOS:acqos
SRCFG[ac]             FLAGS=OWNERPREEMPT
SRCFG[ac]             HOSTLIST=tsn003,tsn004
SRCFG[ac]             PERIOD=INFINITY
SRCFG[ac]             QOSLIST=acqos,bsqos-
SRCFG[ac]             GROUPLIST=acg

QOSCFG[bsqos]         QFLAGS=PREEMPTOR
QOSCFG[bsqos]         QFLAGS=PREEMPTEE
QOSCFG[acqos]         QFLAGS=PREEMPTOR
QOSCFG[acqos]         QFLAGS=PREEMPTEE

PREEMPTPOLICY         REQUEUE

*************
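
After restarting maui with this config I have been trying to sanity-check it 
with the client commands (at least as far as I understand them, so correct me 
if these aren't the right things to look at):

  showres            # should list the two standing reservations covering
                     # tsn001/tsn002 and tsn003/tsn004
  diagnose -q        # should show bsqos and acqos with the PREEMPTOR and
                     # PREEMPTEE flags set
  checkjob <jobid>   # shows which QOS and reservation a job was mapped to

And here is the Torque side: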
[root@wyrd torque]# qmgr
Max open servers: 4
Qmgr: print server
#
# Create queues and set their attributes.
#
#
# Create and define queue router
#
create queue router
set queue router queue_type = Execution
set queue router acl_group_enable = False
set queue router enabled = True
set queue router started = True
#
# Create and define queue batch
#
create queue batch
set queue batch queue_type = Execution
set queue batch resources_default.nodes = 1
set queue batch resources_default.walltime = 01:00:00
set queue batch enabled = True
set queue batch started = True
#
# Set server attributes.
#
set server scheduling = True
set server managers = brs@wyrd.acomp.usf.edu
set server managers += dan@wyrd.acomp.usf.edu
set server managers += root@wyrd.acomp.usf.edu
set server operators = root@wyrd.acomp.usf.edu
set server default_queue = batch
set server log_events = 511
set server mail_from = adm
set server query_other_jobs = True
set server resources_available.mem = 514316kb
set server resources_default.ncpus = 1
set server scheduler_iteration = 600
set server node_ping_rate = 300
set server node_check_rate = 600
set server node_pack = False
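
For testing I have just been submitting plain jobs from accounts in each Unix 
group and letting maui assign the QOS through the GROUPCFG ... QDEF lines 
above (that is my understanding of how QDEF works, anyway):

  # from an account whose primary group is bsg; the job should land in the
  # default 'batch' queue and pick up the bsqos QOS
  echo "sleep 600" | qsub -l nodes=2,walltime=00:30:00
  showq              # the job list as maui sees it
  checkjob <jobid>   # should report QOS bsqos for the job above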

If there is anything else I need to include, please let me know.


-- 
Brian R Smith
University of South Florida
brian at cypher.acomp.usf.edu
