[Beowulf] Cluster - MPICH - tstmachines - Help!
Ernesto Gamez
ernestogamez at gmail.com
Mon Jul 10 18:00:23 PDT 2006
Hi Reuti, and all,
Thanks for helping me.
On the master node and on each compute node I ran "ssh-keygen -t rsa" to
generate the key pair, and I enabled the SSH daemon by adding
sshd_enable="YES" to /etc/rc.conf.
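For reference, this is roughly the key setup I followed (a sketch only; the
node name "alpha" is just an example, use your own hostnames):

  # generate a key pair on the master (empty passphrase for passwordless login)
  ssh-keygen -t rsa

  # allow the master's key on the master itself and on each node
  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  scp ~/.ssh/id_rsa.pub alpha:master_key.pub
  ssh alpha 'mkdir -p ~/.ssh && cat master_key.pub >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'

  # this should now log in without a password prompt
  ssh alpha hostname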
I can log in to all the nodes with "ssh <ip address of node>", but I still
have problems with tstmachines, so I opened /etc/inetd.conf and enabled these
lines:
shell stream tcp nowait root /usr/libexec/rshd rshd -n -l -4
login stream tcp nowait root /usr/libexec/rlogind rlogin
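In case it matters (a sketch, assuming a stock FreeBSD setup): inetd itself
also has to be enabled and restarted before those lines take effect:

  # /etc/rc.conf
  inetd_enable="YES"

  # restart inetd so it rereads inetd.conf
  /etc/rc.d/inetd restart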
Later I edited /etc/hosts and /etc/hosts.equiv to add the IP addresses of all
my nodes, and I created a .rhosts file in my home directory with the same IPs.
I also changed the line to "PermitRootLogin yes" in /etc/ssh/sshd_config.
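For reference, roughly what those files look like here (the addresses and
hostnames below are just examples, not my real ones):

  # /etc/hosts - IP address and hostname of every node
  192.168.1.1   master
  192.168.1.2   alpha
  192.168.1.3   bravo

  # /etc/hosts.equiv and ~/.rhosts - one trusted host per line
  master
  alpha
  bravo

  # quick test: this should print the remote hostname without a password prompt
  rsh alpha hostname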
I installed MPICH (version 1.2.7) following this page:
http://blizzard.rwic.und.edu/~nordlie/miniwulf/
--------------------------------------------------------------------------------------------
Installing MPICH:
Now that the nodes are synched and talking to each other in a trusting
manner, it's time to actually install some message passing software. The
first package I installed was MPICH. I uncompressed and untared the package,
then ran the configure script with the prefix option to tell it where I
wanted the package installed:
./configure --prefix=/usr/local/mpich-1.2.4
The configure script does various things while building the makefile,
including testing the ssh and rsh capabilities of the master node. This is
why that must be running before installing MPICH, and also why you need to
be able to ssh from the master to itself without passwords. After configure
runs, it's time to run 'make' to actually build the package. Finally 'make
install' (run as root) puts the package in its final location. You then
need to tell MPICH what machines are available to run processes on. This is
accomplished by editing the machines.(os) file, in my case:
/usr/local/mpich-1.2.4/share/machines.freebsd. MPICH puts five copies of the
name of the master node in this file. Change it to a listing of all the
nodes, one per line (in this case, master, alpha, and bravo).
Now it's time to test the cluster to see if the nodes can talk to each other
via MPI. Run the tstmachines script in the sbin/ directory under the mpich
directory to verify this. It will help to use the -v option to get more
info. If this works, it's time to run a program on the cluster.
------------------------------------------------------------------------------------------------
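Condensed, I believe the steps from that guide look like this on my install
(a sketch; I substituted my 1.2.7 prefix, and the node names master/alpha/bravo
come from the guide's example):

  # configure, build, and install MPICH
  ./configure --prefix=/usr/local/mpich-1.2.7
  make
  make install            # as root

  # list every node, one per line, in the machines file:
  # /usr/local/mpich-1.2.7/share/machines.freebsd
  master
  alpha
  bravo

  # check that MPICH can reach every node
  cd /usr/local/mpich-1.2.7/sbin
  ./tstmachines -v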
I have not recompiled anything with "export P4_RSHCOMMAND=rsh"; I am reading
about that now. In /usr/local/mpich-1.2.7/bin I tried:
# export P4_RSHCOMMAND rsh
export: command not found
so I tried this line instead:
# setenv P4_RSHCOMMAND rsh
and that is OK, but I still have the same problems with tstmachines...
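If I understand it right (this is only a guess on my part), the "export:
command not found" message means the shell is csh/tcsh, the FreeBSD default
for root, which does not have the Bourne-shell export syntax. The equivalent
commands would be:

  # Bourne shell (sh/bash):
  export P4_RSHCOMMAND=rsh

  # C shell (csh/tcsh):
  setenv P4_RSHCOMMAND rsh

Either way the setting only applies to the current shell session, so it has
to be set in the same shell that later starts the MPICH programs.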
What do you think?
Please help.
2006/7/7, Reuti <reuti at staff.uni-marburg.de>:
>
> Hi,
>
> which rsh-command did you compile into the MPICH? The default ssh
> will also need a passwordless login via ssh for each user. What you
> can try:
>
> export P4_RSHCOMMAND=rsh
>
> to set it to the default rsh-login which you set up already.
>
> HTH - Reuti
>
>
>