[Beowulf] Configuring nodes on a scyld cluster

Michael Muratet mmuratet at hudsonalpha.org
Tue Aug 25 15:13:08 PDT 2009


On Aug 25, 2009, at 1:40 PM, Andre Kerstens wrote:

> Michael,
>
> On a cluster running Scyld Clusterware (are you running 4 or 5?)
> there is no need to install any Ganglia components on the compute
> nodes: the compute nodes communicate cluster information, incl.
> ganglia info, to the head node via the beostatus sendstats mechanism.
> If ganglia is not enabled yet on your cluster, you can do it as
> follows:
>
> Edit /etc/xinetd.d/beostat and change 'disable=yes' to 'disable=no'
> followed by:
>
> /sbin/chkconfig xinetd on
> /sbin/chkconfig httpd on
> /sbin/chkconfig gmetad on
>
> and
>
> service xinetd restart
> service httpd start
> service gmetad start
>
> Then point your web browser to http://localhost/ganglia and off you go.
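>
> (If you'd rather script that edit, something like
>
>   sed -i 's/disable[ \t]*=[ \t]*yes/disable = no/' /etc/xinetd.d/beostat
>
> should flip the flag, assuming the stock xinetd file layout. Check
> the file afterwards before restarting xinetd.)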
Andre,

Thanks for the info. Yes, I got that far. It is apparently also
necessary to reboot the head node, and we're waiting for a slack
moment to do that.

Cheers
Mike
>
>
> This information can be found in the release notes document of your
> Scyld cluster or in the Scyld admin guide.
>
> Cheers
> Andre
>
> Date: Mon, 24 Aug 2009 04:40:22 -0500
> From: Michael Muratet <mmuratet at hudsonalpha.org>
> Subject: [Beowulf] Configuring nodes on a scyld cluster
> To: ganglia-general at lists.sourceforge.net
> Cc: beowulf at beowulf.org
>
> Greetings
>
> I'm not sure if this is more appropriate for the beowulf or the
> ganglia list, so please forgive a cross-post. I have been trying to
> get ganglia (v3.0.7) to record info from the nodes of my scyld
> cluster. gmond was not installed on any of the compute nodes, nor was
> gmond.conf in /etc of any of the compute nodes, when we got it from
> the vendor. I didn't see much in the documentation about configuring
> nodes, but I did find a 'howto' at
> http://www.krazyworks.com/installing-and-configuring-ganglia/. I have
> been testing on one of the nodes as follows. I copied gmond from
> /usr/sbin on the head node to /usr/sbin on the subject compute node.
> I ran 'gmond --default_config', saved the output, and changed it thus:
>
> scyld:etc root$ bpsh 5 cat /etc/gmond.conf
> /* This configuration is as close to 2.5.x default behavior as possible.
>    The values closely match ./gmond/metric.h definitions in 2.5.x */
> globals {
>   daemonize = yes
>   setuid = yes
>   user = nobody
>   debug_level = 0
>   max_udp_msg_len = 1472
>   mute = no
>   deaf = no
>   host_dmax = 0 /* secs */
>   cleanup_threshold = 300 /* secs */
>   gexec = no
> }
>
> /* If a cluster attribute is specified, then all gmond hosts are
>    wrapped inside of a <CLUSTER> tag.  If you do not specify a
>    cluster tag, then all <HOSTS> will NOT be wrapped inside of a
>    <CLUSTER> tag. */
> cluster {
>   name = "mendel"
>   owner = "unspecified"
>   latlong = "unspecified"
>   url = "unspecified"
> }
>
> /* The host section describes attributes of the host, like the
>    location */
> host {
>   location = "unspecified"
> }
>
> /* Feel free to specify as many udp_send_channels as you like.  Gmond
>    used to only support having a single channel */
> udp_send_channel {
>   port = 8649
>   host = 10.54.50.150 /* head node's IP */
> }
>
> /* You can specify as many udp_recv_channels as you like as well. */
>
> /* You can specify as many tcp_accept_channels as you like to share
>    an xml description of the state of the cluster */
> tcp_accept_channel {
>   port = 8649
> }
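>
> In command form, the staging steps look roughly like this (using
> bpcp, the BProc copy utility, for illustration; any copy method
> should do; node 5, stock paths):
>
>   bpcp /usr/sbin/gmond 5:/usr/sbin/gmond
>   gmond --default_config > /tmp/gmond.conf
>   (edit /tmp/gmond.conf as shown above)
>   bpcp /tmp/gmond.conf 5:/etc/gmond.conf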
>
> I modified gmond.conf on the head node thus:
>
> /* This configuration is as close to 2.5.x default behavior as possible.
>    The values closely match ./gmond/metric.h definitions in 2.5.x */
> globals {
>   daemonize = yes
>   setuid = yes
>   user = nobody
>   debug_level = 0
>   max_udp_msg_len = 1472
>   mute = no
>   deaf = no
>   host_dmax = 0 /* secs */
>   cleanup_threshold = 300 /* secs */
>   gexec = no
> }
>
> /* If a cluster attribute is specified, then all gmond hosts are
>    wrapped inside of a <CLUSTER> tag.  If you do not specify a
>    cluster tag, then all <HOSTS> will NOT be wrapped inside of a
>    <CLUSTER> tag. */
> cluster {
>   name = "mendel"
>   owner = "unspecified"
>   latlong = "unspecified"
>   url = "unspecified"
> }
>
> /* The host section describes attributes of the host, like the
>    location */
> host {
>   location = "unspecified"
> }
>
> /* Feel free to specify as many udp_send_channels as you like.  Gmond
>    used to only support having a single channel */
>
> /* You can specify as many udp_recv_channels as you like as well. */
> udp_recv_channel {
>   port = 8649
> }
>
> /* You can specify as many tcp_accept_channels as you like to share
>    an xml description of the state of the cluster */
> tcp_accept_channel {
>   port = 8649
> }
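>
> A quick sanity check, assuming gmond is up and listening on 8649 as
> configured above: connecting to the tcp_accept_channel should dump
> the cluster state as XML, e.g.
>
>   telnet localhost 8649
>
> A healthy gmond answers with a <GANGLIA_XML> document listing the
> hosts it knows about.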
>
> I started gmond on the compute node with 'bpsh 5 gmond' and restarted
> gmond and gmetad. I don't see my node running gmond: 'ps -elf | grep
> gmond' on the compute node returns nothing. I tried to add gmond as a
> service on the compute node with the script at the krazyworks site,
> but I get:
>
> scyld:~ root$ bpsh 5 chkconfig --add gmond
> service gmond does not support chkconfig
>
> and
>
> scyld:~ root$ bpsh 5 service gmond start
> /sbin/service: line 3: /etc/init.d/functions: No such file or directory
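>
> (For what it's worth, running the daemon in the foreground with
> debugging turned up, e.g.
>
>   bpsh 5 /usr/sbin/gmond --debug=10
>
> is supposed to print startup errors to the terminal rather than
> failing silently.)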
>
> I am at a loss over what to try next; it seems this should work. Any
> and all suggestions will be appreciated.
>
> Thanks
>
> Mike
>

Michael Muratet, Ph.D.
Senior Scientist
HudsonAlpha Institute for Biotechnology
mmuratet at hudsonalpha.org
(256) 327-0473 (p)
(256) 327-0966 (f)

Room 4005
601 Genome Way
Huntsville, Alabama 35806
