[Beowulf] automount on high ports

Steffen Grunewald steffen.grunewald at aei.mpg.de
Wed Jul 2 00:01:13 PDT 2008


On Tue, Jul 01, 2008 at 04:21:55PM -0400, Perry E. Metzger wrote:
> 
> Henning Fehrmann <henning.fehrmann at aei.mpg.de> writes:
> >> Thus, your problem sounds rather odd. There is no obvious reason you
> >> should be limited to 360 connections.
> >> 
> >> Perhaps your problem is not what you think it is at all. Could you
> >> explain it in more detail?
> >
> > I guess it also has something to do with the automounter. I am not able
> > to increase this number.
> > But even if the automounter could handle more, we would need to be able
> > to use higher ports:
> > netstat always shows ports below 1024.
> >
> > tcp        0      0 client:941         server:nfs
> >
> > We need to mount up to 1400 nfs exports.
> 
> All NFS clients are connecting to a single port, not to a different
> port for every NFS export. You do not need 1400 listening TCP ports on
> a server to export 1400 different file systems. Only one port is
> needed, whether you are exporting one file system or one million, just
> as only one SMTP port is needed whether you are receiving mail from
> one client or from one million.

That's true for the server side, but not for the client side. Each client-
server connection uses a separate (privileged) source port *on the client*,
which is where the problem shows up.
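
For what it's worth, the "about 360" limit matches the Linux RPC client's
default reserved-port window rather suspiciously: unless tuned, it only
binds to ports 665-1023, i.e. 359 ports. On a reasonably recent 2.6 kernel
that window can be inspected and widened via the sunrpc sysctls (exact
names may vary with the kernel version):

  # show the reserved-port window the NFS/RPC client binds to
  sysctl sunrpc.min_resvport sunrpc.max_resvport
  # widen it towards the bottom of the privileged range (as root; ports
  # below ~512 are used by other privileged services, so don't go too far)
  sysctl -w sunrpc.min_resvport=512

Even so, the privileged range tops out below 1024 ports, so 1400
simultaneous TCP mounts won't fit in it. Leaving the privileged range
would need a client with a non-reserved-port mount option (I've seen
"noresvport" patches floating around) plus "insecure" in the server's
/etc/exports - I haven't tried that combination myself.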

This particular setup comprises 1400 cluster nodes which all act as
distributed storage. Files would be spread over all of them, and an
application would sequentially access files (time series) which are located
on different servers. (Call it NUSA, non-uniform storage architecture.)

I guess it's time to go ahead and try a real cluster filesystem, or to wait
for NFS v4.1 to settle down.
I understand that with several tens of TB, a re-organisation of all data
into a completely new tree would be tricky if not impossible.
OTOH, something like glusterfs allows building a cluster fs without
moving data - gluster would just add a set of additional layers
("translators") on top of the already existing physical fs's.
I have followed glusterfs development for more than a year now, and while
they are still working on their redundancy features, it should be usable
for "quasi read-only" access. (Note that the underlying fs would still be
accessible for feeding data in; clients could have r/o access to the
glusterfs namespace.) Version 1.4 is due out in a couple of days.
See www.gluster.org
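
To give an idea of the layering: a minimal glusterfs client volfile
(1.3-era syntax from memory, host and volume names made up; a real
cluster/unify setup also wants a dedicated namespace volume - check the
wiki for details) stacks translators over remote bricks roughly like this:

  # client.vol - unify two remote bricks into one namespace (sketch only)
  volume brick1
    type protocol/client
    option transport-type tcp/client   # 1.3-style transport name
    option remote-host node001         # hypothetical storage node
    option remote-subvolume posix1     # export defined on the server side
  end-volume

  volume brick2
    type protocol/client
    option transport-type tcp/client
    option remote-host node002
    option remote-subvolume posix1
  end-volume

  volume unified
    type cluster/unify                 # spreads files across the bricks
    option scheduler rr                # round-robin file placement
    subvolumes brick1 brick2
  end-volume

The point being: the server-side bricks sit directly on the existing
directory trees (type storage/posix with "option directory /existing/path"),
so no data has to be moved to adopt it.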

BTW, since I'm facing the same issue on a somewhat smaller scale, any other
suggestions are appreciated.

Cheers,
 Steffen (same institute, different location :)

-- 
Steffen Grunewald * MPI Grav.Phys.(AEI) * Am Mühlenberg 1, D-14476 Potsdam
Cluster Admin * http://pandora.aei.mpg.de/merlin/ * http://www.aei.mpg.de/
* e-mail: steffen.grunewald(*)aei.mpg.de * +49-331-567-{fon:7233,fax:7298}
No Word/PPT mails - http://www.gnu.org/philosophy/no-word-attachments.html


