[Beowulf] Dual head or service node related question ...

richard.walsh at comcast.net richard.walsh at comcast.net
Thu Dec 3 18:34:39 PST 2009


In a typical cluster with a single head node, that node provides 
login services, batch job submission, and often a shared file 
space exported via NFS from the head node to the compute nodes. 
This approach works reasonably well for small to mid-sized 
cluster systems. 

What is viewed as best practice (or what are people doing) on 
something like an SGI ICE system with multiple service or head nodes? 
Does one service node generally assume the same role as the 
head node above (serving NFS, logins, and running services like 
PBS Pro)? Or, if NFS is used, is it perhaps served from another 
service node and mounted on both the login node and the compute 
nodes? Read-only? Is it better to provide the shared file space via 
Lustre across all the nodes? 

The architecture chosen has implications. For instance, in the 
common case above, PBS Pro would be installed on the head 
node, perhaps in the shared space, and its server and scheduler 
would be run by /etc/init.d/pbs off of the shared partition. The 
bin and sbin commands would be shared by the compute nodes. 
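For concreteness, a minimal sketch of that single-head arrangement
(hostnames and paths here are hypothetical placeholders, not taken
from an actual install):

```
# On the head node: export the shared PBS install tree
# /etc/exports
/opt/pbs    *(rw,sync,no_root_squash)

# On each compute node: mount the shared tree from the head
# /etc/fstab
head:/opt/pbs   /opt/pbs   nfs   defaults   0 0

# Everywhere: /etc/init.d/pbs reads /etc/pbs.conf to decide which
# daemons to start on this host, e.g.
#   PBS_START_SERVER=1, PBS_START_SCHED=1 on the head node
#   PBS_START_MOM=1 on each compute node
```

The point is that one shared installation serves every node, with
per-host pbs.conf flags selecting which daemons actually run.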

In a case where the login service node and the NFS service node 
(providing the shared file space) are different machines, PBS 
must be installed on the NFS service node if the shared space is 
mounted read-only, and only the commands and man pages would 
be installed on the login node. What are the implications for 
other user applications that one would like to install in the 
shared space for use from the login nodes? Some might need to 
write into their installation directories. Does this indicate 
that the NFS partition should be mounted read-write on the login 
node, but read-only on the compute nodes? 
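One way to express that split is per-client options in the exports
on the file-serving node; the hostname and network below are
placeholders, not a recommendation:

```
# /etc/exports on the NFS service node (hypothetical hosts)
# Login node gets write access, e.g. for installing applications
# into the shared tree:
/shared    login01(rw,sync,no_root_squash)
# Compute nodes only need to read the shared tree:
/shared    10.1.0.0/16(ro,sync)
```

The same filesystem is then mounted read-write where software is
installed and read-only where it is merely executed.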

Comments and suggestions, particularly from those who have 
set things up on SGI ICE cluster systems, would be 
much appreciated. 

