help

korsedal at zaiqtech.com
Sat Dec 1 09:55:50 PST 2001


			Please unsubscribe korsedal at zaiqtech.com

-----Original Message-----
From: beowulf-request at beowulf.org [mailto:beowulf-request at beowulf.org]
Sent: Saturday, December 01, 2001 12:01 PM
To: beowulf at beowulf.org
Subject: Beowulf digest, Vol 1 #670 - 13 msgs


Send Beowulf mailing list submissions to
	beowulf at beowulf.org

To subscribe or unsubscribe via the World Wide Web, visit
	http://www.beowulf.org/mailman/listinfo/beowulf
or, via email, send a message with subject or body 'help' to
	beowulf-request at beowulf.org

You can reach the person managing the list at
	beowulf-admin at beowulf.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Beowulf digest..."


Today's Topics:

   1. Re: Xbox clusters? (W Bauske)
   2. Re: mpi-prog porting from lam -> scyld beowulf mpi difficulties (Sean Dilda)
   3. MPI I/O + nfs (Bill Broadley)
   4. Re: Xbox clusters? (Tim Wait)
   5. Portland High Performance Fortran pghpf on Scyld cluster (Hans Schwengeler)
   6. time command defaults changed in RedHat 7.2 vs RedHat 6.2? (Phillip D. Matz)
   7. Re: MPI I/O + nfs (Rob Latham)
   8. Scyld boot problem (L. Gritsenko)
   9. Re: Xbox clusters? (Velocet)
  10. Re: Xbox clusters? (W Bauske)
  11. Re: MPI I/O + nfs (Ron Chen)
  12. RE: GCC/Fortran 90/95 questions (Ron Chen)
  13. Process zombies in master node (with Scyld) (Carlos J. Garcia Orellana)

--__--__--

Message: 1
Date: Thu, 29 Nov 2001 15:45:31 -0600
From: "W Bauske" <wsb at paralleldata.com>
Organization: PDS Inc.
CC: beowulf at beowulf.org
Subject: Re: Xbox clusters?
To: beowulf at beowulf.org

Steve Gaudet wrote:
> 
> 
> > They buy from IBM/Compaq/HP or pick your favorite mainstream vendor.
> 
> If you find a Compaq GEM partner (we are), you fall into the Government,
> Educational, and Medical category, and you can't beat the deals Compaq is
> offering right now.  For New England they have an Evo D500, PIV 1.5GHz, 845,
> 20GB, 256MB, Win2000, CD for $667.00 up to December 12th. Moreover, if it's a
> quantity buy, they do even better on the price.
> 

Wonder why medical? That's big business.

I'm in business to make money with clusters, so I guess I wouldn't qualify
for that program. However, I can build an equivalent node for less than
$500 (skipping the CD and Win2000, which I have no use for):

d845wnl   $130
P4 1.5ghz $152
Case/PS    $30
20GB disk  $63
256MB dimm $30
AGP card   $20
==============
total     $425

Shipping would be around $35 delivered to your door. All you need is
a screwdriver to assemble...

The d845wnl has 10/100 built in and is PXE bootable.

If you like P4 1.9GHz systems, add $120 and you have a screaming
node for $545 (if you like P4s for your codes).

It's amazing how cheap nodes are now.


Wes

--__--__--

Message: 2
Date: Thu, 29 Nov 2001 18:46:10 -0500
From: Sean Dilda <agrajag at scyld.com>
To: Peter Beerli <beerli at genetics.washington.edu>
Cc: beowulf at beowulf.org
Subject: Re: mpi-prog porting from lam -> scyld beowulf mpi difficulties


On Wed, 28 Nov 2001, Peter Beerli wrote:

> (1) if I run "top" why do I see 6 processes running when I start
>     with mpirun -np 3 migrate-n ?

Two per node.  For every process you want running, it also runs another
one to take care of the MPI network I/O.  Our MPI is based off of
mpich, and this is how they have it set up.
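
A minimal sketch of the kind of program this applies to (my own
illustration, not code from Peter or Sean; the file name hello_sleep.c
and the 30-second sleep are arbitrary):

/* hello_sleep.c -- each rank announces itself and idles long enough
 * to be inspected in top */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("rank %d of %d alive\n", rank, size);
    sleep(30);   /* keep the job around long enough to watch it */

    MPI_Finalize();
    return 0;
}

Compiled with mpicc and started with "mpirun -np 3 hello_sleep", top
should show the six processes Peter saw: three application ranks plus
the three network I/O processes Sean describes.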


--__--__--

Message: 3
Date: Thu, 29 Nov 2001 20:34:02 -0800
From: Bill Broadley <bill at math.ucdavis.edu>
To: beowulf at beowulf.org
Subject: MPI I/O + nfs


I'm trying to get MPICH-1.2.2.3 MPI I/O + nfs working.

I read:
http://www-unix.mcs.anl.gov/mpi/mpich/docs/install/node31.htm 

Step 1:
~/private/io> /usr/sbin/rpcinfo  -p `hostname` | grep nfs 
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs

I'm using clients n1 and n2:
n2:~> mount | grep noac
master:/d0 on /d0 type nfs (rw,nfsvers=3,noac,addr=192.168.0.250)
n1:~> mount | grep noac
master:/d0 on /d0 type nfs (rw,nfsvers=3,noac,addr=192.168.0.250)

Just to make absolutely sure I'm using NFS v3, I ran nfsstat
on n1 and n2 (same result):
Client nfs v2:
null       getattr    setattr    root       lookup     readlink   
0       0% 0       0% 0       0% 0       0% 0       0% 0       0% 
read       wrcache    write      create     remove     rename     
0       0% 0       0% 0       0% 0       0% 0       0% 0       0% 
link       symlink    mkdir      rmdir      readdir    fsstat     
0       0% 0       0% 0       0% 0       0% 0       0% 0       0% 

Client nfs v3:
null       getattr    setattr    lookup     access     readlink   
0       0% 222540 54% 83      0% 10010   2% 52      0% 53      0% 
read       write      create     mkdir      symlink    mknod      
67772  16% 103571 25% 2070    0% 2       0% 0       0% 0       0% 
remove     rmdir      rename     link       readdir    readdirplus
2068    0% 2       0% 0       0% 0       0% 172     0% 0       0% 
fsstat     fsinfo     pathconf   commit     
356     0% 356     0% 0       0% 1372    0% 

When running a very simple MPI I/O example (see the sketch below) I still get:

File locking failed in ADIOI_Set_lock. If the file system is NFS, you
need to use NFS version 3 and mount the directory with the 'noac' option
(no attribute caching).
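
A minimal test of this sort (a sketch for illustration only, not the
original program; the output path under the noac-mounted /d0 export is a
placeholder) would be:

/* mpiio_nfs_test.c -- build with mpicc, run with "mpirun -np 2 mpiio_nfs_test" */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank;
    char line[64];
    MPI_File fh;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* fixed 64-byte record per rank, space-padded */
    memset(line, ' ', sizeof(line));
    snprintf(line, sizeof(line), "hello from rank %d", rank);
    line[strlen(line)] = '\n';

    /* the write path through ROMIO's NFS driver is what ends up in ADIOI_Set_lock */
    MPI_File_open(MPI_COMM_WORLD, "/d0/mpiio_test.out",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write_at(fh, (MPI_Offset)rank * sizeof(line),
                      line, (int)sizeof(line), MPI_CHAR, &status);
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}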

Anyone have any ideas?  Anyone know of an MPICH mailing list?

Additional info:
n1:~> uname -a 
Linux n1 2.4.9 #5 SMP Wed Sep 26 19:59:17 GMT-7 2001 i686 unknown
n2:~> uname -a
Linux n2 2.4.9 #5 SMP Wed Sep 26 19:59:17 GMT-7 2001 i686 unknown



-- 
Bill Broadley
Mathematics/Institute of Theoretical Dynamics
UC Davis

--__--__--

Message: 4
Date: Wed, 28 Nov 2001 15:07:47 -0500
From: Tim Wait <TIMOTHY.R.WAIT at saic.com>
To: beowulf at beowulf.org
Subject: Re: Xbox clusters?

> So, the question is, with these numbers, how do people end up spending
> $250K on 40 or even 60-CPU clusters?
> 

Um, high speed interconnect at $1500/box, quality components,
 >=512 MB per proc, rackmounts, big h/w raid storage, A/C...

tim


--__--__--

Message: 5
Date: Thu, 29 Nov 2001 14:24:30 +0100
From: Hans Schwengeler <schweng at master2.astro.unibas.ch>
To: beowulf at beowulf.org
Subject: Portland High Performance Fortran pghpf on Scyld cluster

Hello,

	I want to use pghpf on our new Scyld cluster (b27-8). pgf77 and pgf90
work OK, but pghpf appears to hang during execution of the resulting program.
My first attempt was to point /usr/local/mpi/lib at /usr/lib/; the second was
to build mpich-1.2.1 (from the Scyld ftp site, after applying the patches).
In both cases f77 and f90 work, but NOT pghpf.
I also tried the advice from the PGI FAQ and replaced mpi.o in
/usr/local/pgi/linux86/lib/libpghpf_mpi.a, but to no avail.
Test program is
/home/schweng/util/mpich-1.2.1-6.6.beo/mpich-1.2.1/installtest/pi3.f.
/usr/local/bin/mpirun -np 2 pi3
 Process             0  of             2  is alive
Enter the number of intervals: (0 quits)
<-- here it hangs, i.e. Process 1 never comes alive.


Yours, Hans Schwengeler.

--__--__--

Message: 6
From: "Phillip D. Matz" <matz at wsunix.wsu.edu>
To: <beowulf at beowulf.org>
Subject: time command defaults changed in RedHat 7.2 vs RedHat 6.2?
Date: Fri, 30 Nov 2001 10:59:14 -0800

I am used to keeping track of the actual (elapsed) time a job takes to
complete on my cluster with the "time" command in RedHat 6.2.

Recently I reinstalled with RedHat 7.2 and now the "time" command yields
different results (as if the portable option "-p" is always on).

The man pages only help to tell me why the output looks the way it does, but
they don't tell me how to change the default back to what it looks like in a
6.2 installation.

Does anyone know which file I need to modify to make the time command report
the total elapsed time and not have the output be in the portable format?

Thanks!

Phil Matz


--__--__--

Message: 7
Date: Fri, 30 Nov 2001 14:22:49 -0500
From: Rob Latham <rlatham at plogic.com>
To: Bill Broadley <bill at math.ucdavis.edu>
Cc: beowulf at beowulf.org
Subject: Re: MPI I/O + nfs

On Thu, Nov 29, 2001 at 08:34:02PM -0800, Bill Broadley wrote:
> 
> I'm trying to get MPICH-1.2.2.3 MPI I/O + nfs working.

If you want ROMIO (MPI I/O), I strongly suggest using PVFS as the
"back end" for your file system.  In the few cases I know of where a
customer used NFS as the back end, performance was downright poor (as
should be expected when you have to turn off all the caching).

start here: http://parlweb.parl.clemson.edu/pvfs/index.html
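
As a concrete illustration (my own sketch, not from Rob's message; it
assumes ROMIO was built with PVFS support and that the PVFS volume is
mounted at /pvfs), ROMIO can be pointed at PVFS explicitly by prefixing
the file name with the file-system type:

/* pvfs_check.c -- build with mpicc, run with "mpirun -np 2 pvfs_check" */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    int rc;

    MPI_Init(&argc, &argv);

    /* the "pvfs:" prefix selects ROMIO's PVFS driver for this file */
    rc = MPI_File_open(MPI_COMM_WORLD, "pvfs:/pvfs/romio_check",
                       MPI_MODE_CREATE | MPI_MODE_RDWR,
                       MPI_INFO_NULL, &fh);
    if (rc == MPI_SUCCESS)
        MPI_File_close(&fh);
    else
        fprintf(stderr, "MPI_File_open failed -- is PVFS mounted and "
                        "built into ROMIO?\n");

    MPI_Finalize();
    return 0;
}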

==rob

-- 
[ Rob Latham <rlatham at plogic.com>         Developer, Admin, Alchemist ]
[ Paralogic Inc. - www.plogic.com                                     ]
[                                                                     ]
[ EAE8 DE90 85BB 526F 3181                   1FCF 51C4 B6CB 08CC 0897 ]

--__--__--

Message: 8
Date: Fri, 30 Nov 2001 09:31:07 -0800 (PST)
From: "L. Gritsenko" <lmeerkat at yahoo.com>
Subject: Scyld boot problem 
To: beowulf at beowulf.org

Maybe this will be helpful:
http://www.beowulf.org/pipermail/beowulf/2001-August/001057.html


--__--__--

Message: 9
Date: Fri, 30 Nov 2001 17:37:53 -0500
From: Velocet <math at velocet.ca>
To: W Bauske <wsb at paralleldata.com>
Cc: beowulf at beowulf.org
Subject: Re: Xbox clusters?

On Thu, Nov 29, 2001 at 03:45:31PM -0600, W Bauske's all...
> Steve Gaudet wrote:
> > 
> > 
> > > They buy from IBM/Compaq/HP or pick your favorite mainstream vendor.
> > 
> > If you find a Compaq GEM partner (we are), you fall into the Government,
> > Educational, and Medical category, and you can't beat the deals Compaq is
> > offering right now.  For New England they have an Evo D500, PIV 1.5GHz, 845,
> > 20GB, 256MB, Win2000, CD for $667.00 up to December 12th. Moreover, if it's a
> > quantity buy, they do even better on the price.
> > 
> 
> Wonder why medical? That's big business.
> 
> I'm in business to make money with clusters, so I guess I wouldn't qualify
> for that program. However, I can build an equivalent node for less than
> $500 (skipping the CD and Win2000, which I have no use for):
> 
> d845wnl   $130
> P4 1.5ghz $152
> Case/PS    $30
> 20GB disk  $63
> 256MB dimm $30
> AGP card   $20
> ==============
> total     $425
> 
> Shipping would be around $35 delivered to your door. All you need is
> a screwdriver to assemble...
> 
> The d845wnl has 10/100 built in and is PXE bootable.

Any athlon boards with new chipsets that are PXE bootable?

The PcChips M817 MLR has that, but it's not a great board, and it has an
old chipset.

/kc

--__--__--

Message: 10
Date: Fri, 30 Nov 2001 17:47:14 -0600
From: "W Bauske" <wsb at paralleldata.com>
Organization: PDS Inc.
CC: beowulf at beowulf.org
Subject: Re: Xbox clusters?
To: beowulf at beowulf.org


I PXE boot my Tiger MPs (S2460) with Intel Pro/100 PCI adapters.
The adapters go for about $27, which I thought was fair to let me
boot/install without a floppy or CD; the floppy and CD combined
typically cost more than that.

The boards I've used that have built-in Ethernet for Athlon have
used some sort of Netware boot capability, which I know nothing
about. (K7S5A, I think.)

Wes

Velocet wrote:
> 
> On Thu, Nov 29, 2001 at 03:45:31PM -0600, W Bauske's all...
> > Steve Gaudet wrote:
> > >
> > >
> > > > They buy from IBM/Compaq/HP or pick your favorite mainstream vendor.
> > >
> > > If you find a Compaq GEM partner (we are), you fall into the Government,
> > > Educational, and Medical category, and you can't beat the deals Compaq is
> > > offering right now.  For New England they have an Evo D500, PIV 1.5GHz, 845,
> > > 20GB, 256MB, Win2000, CD for $667.00 up to December 12th. Moreover, if it's a
> > > quantity buy, they do even better on the price.
> > >
> >
> > Wonder why medical? That's big business.
> >
> > I'm in business to make money with clusters, so I guess I wouldn't qualify
> > for that program. However, I can build an equivalent node for less than
> > $500 (skipping the CD and Win2000, which I have no use for):
> >
> > d845wnl   $130
> > P4 1.5ghz $152
> > Case/PS    $30
> > 20GB disk  $63
> > 256MB dimm $30
> > AGP card   $20
> > ==============
> > total     $425
> >
> > Shipping would be around $35 delivered to your door. All you need is
> > a screwdriver to assemble...
> >
> > The d845wnl has 10/100 built in and is PXE bootable.
> 
> Any athlon boards with new chipsets that are PXE bootable?
> 
> The PcChips M817 MLR has that, but it's not a great board, and it has an
> old chipset.
> 
> /kc

--__--__--

Message: 11
Date: Fri, 30 Nov 2001 19:41:47 -0800 (PST)
From: Ron Chen <ron_chen_123 at yahoo.com>
Subject: Re: MPI I/O + nfs
To: Bill Broadley <bill at math.ucdavis.edu>, beowulf at beowulf.org

There is no MPICH mailing list. You can email the
MPICH developers directly.

On the other hand, you might check the LAM MPI mailing
list; maybe they have encountered similar problems
before:

http://www.lam-mpi.org/mailman/listinfo.cgi/lam-announce

 -Ron


--- Bill Broadley <bill at math.ucdavis.edu> wrote:
> Anyone have any ideas?  Anyone know of an MPICH
> mailing list?



--__--__--

Message: 12
Date: Fri, 30 Nov 2001 19:55:22 -0800 (PST)
From: Ron Chen <ron_chen_123 at yahoo.com>
Subject: RE: GCC/Fortran 90/95 questions
To: aslam at lanl.gov, gcc at gnu.org, Beowulf <beowulf at beowulf.org>
Cc: open64-devel at lists.sourceforge.net


> 2) Does gcc support f90 or f95?  If not, is there any
> GNU compiler that does, and are any expected in the future?

There is a compiler called open64, which is SGI's
compiler for IA64. They have a C front-end, which is
based on gcc, and they have another for f90. (I don't
know the details...)

Recently, they have ported the f90 front-end and
run-time to other compiler back-ends. Please read the
note below for details.

http://open64.sourceforge.net/

http://sourceforge.net/tracker/?group_id=34861&atid=413342

 -Ron

===========================================================
Porting open64 F90 front-end to Solaris
This patch ports the open64 Fortran90 compiler front
end to the sparc_solaris platform. Specifically, it ports
these three executable programs: "mfef90", "ir_tools",
and "whirl2f". NO OTHER COMPONENT OF OPEN64 IS IN
THE SCOPE OF THIS PATCH.

Tested platforms include sparc_solaris, mips_irix and
ia32_linux, using both GNU gcc and the vendor compiler.
Makefiles, some header files and some C/C++ source
files were modified for the port.




--__--__--

Message: 13
From: "Carlos J. Garcia Orellana" <carlos at nernet.unex.es>
To: <beowulf at beowulf.org>
Subject: Process zombies in master node (with Scyld)
Date: Sat, 1 Dec 2001 11:10:00 +0100


Hello,

We have a problem with bproc and zombie processes. We are using an
MPI-based genetic algorithm application.
In our application, we are executing an external program via the system()
call.
The problem is that many of these new processes become zombies on the
master node; however, they don't appear as zombies on the slave nodes.
We are using Scyld 27z-8 with updates.
What can we do to solve this problem?

Thanks.

Carlos
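
A minimal sketch of the pattern Carlos describes (an MPI rank shelling
out to an external program with system()); the loop count and /bin/true
are placeholders of mine, not anything from the actual application:

/* system_from_rank.c -- build with mpicc, run under mpirun on the cluster */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* each rank repeatedly launches an external program via system(),
     * as the genetic-algorithm application does */
    for (i = 0; i < 10; i++) {
        if (system("/bin/true") != 0)
            fprintf(stderr, "rank %d: system() call failed\n", rank);
    }

    MPI_Finalize();
    return 0;
}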




--__--__--

_______________________________________________
Beowulf mailing list
Beowulf at beowulf.org
http://www.beowulf.org/mailman/listinfo/beowulf


End of Beowulf Digest



