2008/6/14 Perry E. Metzger <perry@piermont.com>:
>
> A number of these seem rather odd, or unrelated to performance.
>
> Walid <walid.shaari@gmail.com> writes:
> > It is lame, however I managed to get the following kernel parameters to
> > scale well in terms of both performance per node and scalability over a
> > high-bandwidth, low-latency network
> >
> > net.ipv4.tcp_workaround_signed_windows = 1
>
> This is a workaround for a buggy remote TCP. If you have a homogeneous
> network of linux boxes, it will have no effect.

True, however one of the assumptions we have is that the NFS filer (running
some modified version of BSD) is broken.
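For context, this setting is just carried as a one-line fragment in
/etc/sysctl.conf on the RHEL5 clients; a minimal sketch of how we apply it,
nothing clever:

    # append to /etc/sysctl.conf on the RHEL5 nodes, then reload with
    # "sysctl -p"; works around the filer's (assumed buggy) window handling
    # and should be a no-op for Linux-to-Linux traffic
    net.ipv4.tcp_workaround_signed_windows = 1
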
> > net.ipv4.tcp_congestion_control = vegas
>
> I'm under the impression that the Vegas congestion control policy is
> not well loved by the experts on TCP performance.

We were working on the assumption that there is congestion involved and tried
several of the available algorithms; we even tried Veno (which is mainly for
wireless) just for the sake of testing, and Vegas seemed to give us the boost
in performance. My admittedly basic research did not turn up much for
low-latency, high-bandwidth LAN networks, so let me know if you have any
pointers or URLs that argue against Vegas or recommend otherwise. I can also
see that most of these algorithms have options; I just used their defaults,
and I am not sure that is the correct way to go about it.
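For reference, this is roughly how I have been switching algorithms and
looking at their options on a test node; the tcp_vegas knobs I see there
(alpha, beta, gamma) were all left at their defaults, so take this as a
sketch of the procedure rather than a recommendation:

    # list the congestion control algorithms the running kernel offers
    sysctl net.ipv4.tcp_available_congestion_control

    # load and select vegas at runtime
    modprobe tcp_vegas
    sysctl -w net.ipv4.tcp_congestion_control=vegas

    # each algorithm's options are exposed as module parameters
    grep . /sys/module/tcp_vegas/parameters/*
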
> > net.ipv4.route.max_size = 8388608
>
> This sets the size of the routing cache. You've set it to a rather
> large and fairly random number.

That is the value on an RHEL4U6 system that is working on the same network
setup. I just made a diff of the sysctl settings between RHEL4 and RHEL5,
checked which values differ, and worked by trial and error from there.
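The comparison itself was nothing more elaborate than dumping the settings on
both boxes and diffing them, roughly as below (the hostnames are made up):

    # dump and sort the full sysctl state of each release
    ssh rhel4-node 'sysctl -a | sort' > rhel4.sysctl
    ssh rhel5-node 'sysctl -a | sort' > rhel5.sysctl

    # keep only the net.* keys whose values differ
    diff -u rhel4.sysctl rhel5.sysctl | grep '^[+-]net\.'
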
> > net.ipv4.icmp_ignore_bogus_error_responses = 0
> > net.ipv4.icmp_echo_ignore_broadcasts = 0
>
> Why would paying attention to bogus ICMPs and to ICMP broadcasts help
> performance? Neither should be prevalent enough to make any
> difference, and one would naively expect performance to be improved by
> ignoring such things, not by paying attention to them...
Agreed; I have to test them in isolation. As I said, I was working by trial
and error, based on the assumption that RHEL4 was working fine in the current
environment.
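The rough plan for that is the loop below; the benchmark script name is only
a placeholder for whatever workload we end up measuring with:

    # flip one setting at a time, measure, then restore it before moving on
    for p in net.ipv4.icmp_ignore_bogus_error_responses \
             net.ipv4.icmp_echo_ignore_broadcasts; do
        old=$(sysctl -n "$p")
        sysctl -w "$p=0"
        ./run-benchmark.sh > "result-$p.log"   # placeholder workload
        sysctl -w "$p=$old"
    done
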
> > net.ipv4.tcp_max_orphans = 262144
>
> I'm not clear on why this would help unless you were expecting really
> massive numbers of unattached sockets -- also you're saying that up to
> 16M of kernel memory can be used for this purpose...

Removed; this was again taken from RHEL4, and the default in RHEL5 is
actually lower.
> > net.core.netdev_max_backlog = 2000
>
> This implies your processes are going to get massive numbers of TCP
> connections per unit time. Are they? It is certainly not a *general*
> performance improvement...

That does look like what is happening, actually, and so far this is the one
parameter that seems to make it scale better. I am going to investigate this
further and can share the information back to you; no response yet from RH.
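What I plan to look at while experimenting with the backlog is whether the
input queue is actually overflowing; as I understand it, the second column of
/proc/net/softnet_stat counts packets dropped because the backlog filled up,
so something like this should show it:

    # per-CPU count of packets dropped because netdev_max_backlog was
    # exceeded (second, hex-encoded column)
    awk '{ printf "cpu%d dropped=%d\n", NR-1, strtonum("0x" $2) }' \
        /proc/net/softnet_stat
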
regards

Walid