[scyld-users] how to let bbq migrate batch jobs to compute nodes?

Weirong Zhu wrzhu at etinternational.com
Mon Mar 13 22:23:04 PST 2006

We have just received our new Penguin Computing cluster.
Since one of our main purposes is to submit a lot of batch jobs to the
cluster, I have been trying to learn how to use the bbq batch system
provided by Scyld.

As a simple test,

(1) I wrote a C program containing a while(1) loop, and compiled it to
generate the binary a.out.
(2) I wrote a simple job file named "run", containing only the command
"./a.out".
(3) I submitted the job with "batch now -f run".
(4) I repeated step (3) many times.  (A rough sketch of the whole test
is shown after this list.)
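
Concretely, the test looks roughly like this (the file name loop.c and
the use of gcc are only examples; the job file and the submit command
are exactly what I used):

    # loop.c just spins forever
    cat > loop.c <<'EOF'
    int main(void) { while (1) ; return 0; }
    EOF
    gcc loop.c                # produces ./a.out

    # the job file "run" contains a single command
    cat > run <<'EOF'
    ./a.out
    EOF

    # submit the job, repeated many times
    batch now -f run
    batch now -f run
    batch now -f run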

Then, using the "bbq" command, I saw that a lot of jobs were listed, and
I assumed those jobs would be migrated to the compute nodes.

However, when I run "beostat -C", I find that all the compute nodes are
actually idle and that all of those instances are running on the master
node.

Did I do something wrong when submitting my simple batch jobs?
What is the correct way to do this?

Moreover, I tried to use "atrm" to delete my jobs from the queue.
Afterwards, the "bbq" command shows nothing in the queue; however, "top"
and "ps -fu myname" show that those jobs are still running on the master
node.  (The command sequence I used is sketched below.)
Did I do something wrong when deleting a batch job from the queue?
What is the correct way to do this?
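
The removal sequence was roughly the following (the job number 42 is
only a placeholder for whatever "bbq" reported):

    bbq                       # list the queued jobs
    atrm 42                   # remove a job by its number
    bbq                       # the queue is now empty
    ps -fu myname             # ...but the a.out processes are still running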

I am really confused by the bbq batch system, and it seems that there is
no PBS available on this cluster.

Any help and suggestions are welcome!

