[Beowulf] RHEL5 network throughput/scalability
Walid
walid.shaari at gmail.com
Sat Jun 14 22:32:29 PDT 2008
2008/6/14 Perry E. Metzger <perry at piermont.com>:
>
> A number of these seem rather odd, or unrelated to performance.
>
> Walid <walid.shaari at gmail.com> writes:
> > It is lame; however, I managed to get the following kernel parameters to
> > scale well in terms of both performance per node and scalability over a
> > high-bandwidth, low-latency network
> >
> > net.ipv4.tcp_workaround_signed_windows = 1
>
> This is a workaround for a buggy remote TCP. If you have a homogeneous
> network of linux boxes, it will have no effect.
True, however one of the assumptions we are working with is that the NFS filer
(running some modified version of BSD) is broken.
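To confirm that, one thing we want to do is capture the handshake with the
filer and see whether it actually sends a window-scaling option; something
along these lines (interface name and filer address are placeholders):

    # print SYN/SYN-ACK packets to/from the filer; with -v tcpdump shows the
    # TCP options, including wscale, so a missing wscale points at a broken stack
    tcpdump -ni eth0 -v 'host 10.1.1.10 and tcp[tcpflags] & tcp-syn != 0'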
>
> > net.ipv4.tcp_congestion_control = vegas
>
> I'm under the impression that the Vegas congestion control policy is
> not well loved by the experts on TCP performance.
We were working on the assumption that there is congestion involved and tried
several of the available algorithms; we even tried veno (which is mainly aimed
at wireless links) just for the sake of testing, and vegas seemed to give us
the boost in performance. My own modest reading did not turn up much on
congestion control for low-latency, high-bandwidth LAN networks, so please let
me know if you have any pointers or URLs that argue against vegas or recommend
something else. I can also see that most of these algorithms have tunable
options; I just used their defaults, and I am not sure whether that is the
right way to go about it.
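In case it is useful, this is roughly how we switch algorithms between
benchmark runs on the RHEL5 nodes (a sketch only; module and parameter names
are what I see on our kernels):

    # show which algorithms the running kernel offers and which one is active
    sysctl net.ipv4.tcp_available_congestion_control
    sysctl net.ipv4.tcp_congestion_control

    # load another algorithm and switch to it for the next run
    modprobe tcp_vegas
    sysctl -w net.ipv4.tcp_congestion_control=vegas

    # the per-algorithm knobs (alpha/beta/gamma for vegas) show up here;
    # we have left them at their defaults so far
    ls /sys/module/tcp_vegas/parameters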
>
> > net.ipv4.route.max_size = 8388608
>
> This sets the size of the routing cache. You've set it to a rather
> large and fairly random number.
That is the value on the RHEL4U6 systems that work fine on the same network
setup; I just diffed the sysctl settings between RHEL4 and RHEL5, looked at
which values differ, and worked by trial and error from there.
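For what it is worth, the comparison was nothing more sophisticated than this
(file names are just examples):

    # run on one RHEL4U6 node and one RHEL5 node respectively
    sysctl -a | sort > /tmp/rhel4u6.sysctl
    sysctl -a | sort > /tmp/rhel5.sysctl

    # then compare the two dumps on either machine
    diff /tmp/rhel4u6.sysctl /tmp/rhel5.sysctl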
> > net.ipv4.icmp_ignore_bogus_error_responses = 0
> > net.ipv4.icmp_echo_ignore_broadcasts = 0
>
> Why would paying attention to bogus ICMPs and to ICMP broadcasts help
> performance? Neither should be prevalent enough to make any
> difference, and one would naively expect performance to be improved by
> ignoring such things, not by paying attention to them...
>
Agreed; I have to test them in isolation. As I said, I was working by trial
and error, based on the assumption that RHEL4 was working fine in the current
environment.
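The plan for that is simply to flip one parameter at a time with sysctl -w,
rerun the same benchmark, and restore the old value before touching the next
one, e.g. (sketch):

    # change a single setting, benchmark, then put the previous value back
    sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=1
    # ... run the throughput test here ...
    sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=0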
> > net.ipv4.tcp_max_orphans = 262144
>
> I'm not clear on why this would help unless you were expecting really
> massive numbers of unattached sockets -- also you're saying that up to
> 16M of kernel memory can be used for this purpose...
Removed; this was again carried over from RHEL4, and the default in RHEL5 is
actually lower.
> > net.core.netdev_max_backlog = 2000
>
> This implies your processes are going to get massive numbers of TCP
> connections per unit time. Are they? It is certainly not a *general*
> performance improvement...
That is what appears to be happening. So far this is the one parameter that
seems to make it scale better, but I am going to investigate it further and
will share what I find with the list. No response yet from Red Hat.
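One thing I intend to check is whether the per-CPU input queue is actually
overflowing. As far as I understand it (my reading of the docs, so treat it as
an assumption), the second hex column in /proc/net/softnet_stat counts packets
dropped because the backlog set by net.core.netdev_max_backlog was full:

    # one line per CPU; the second hex column is the drop counter
    cat /proc/net/softnet_stat

    # sum the drops across CPUs (gawk's strtonum handles the hex values)
    awk '{ d += strtonum("0x" $2) } END { print "backlog drops:", d }' /proc/net/softnet_stat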
regards
Walid