<div dir="ltr">Hi Michael,<div><br></div><div>I would recommend trying 'bbcp' before 'hpn-ssh', since the latter really only benefits high-latency links, e.g. cross-country transfers. </div><div><br></div><div>Put the bbcp binary on both sides and try it out. If you don't have a way to install bbcp into a system $PATH, you can specify the absolute path to the binary. Here's a link with examples:</div><div><a href="https://www.nics.tennessee.edu/computing-resources/data-transfer/bbcp">https://www.nics.tennessee.edu/computing-resources/data-transfer/bbcp</a><br></div><div><br></div><div>Regards,</div><div>Alex</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Jan 2, 2020 at 8:32 AM Michael Di Domenico <<a href="mailto:mdidomenico4@gmail.com">mdidomenico4@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Just to further the discussion, and for everyone's education: I found<br>
this white paper, which seems to confirm what I'm seeing:<br>
<br>
<a href="https://www.intel.com/content/dam/support/us/en/documents/network/sb/fedexcasestudyfinal.pdf" rel="noreferrer" target="_blank">https://www.intel.com/content/dam/support/us/en/documents/network/sb/fedexcasestudyfinal.pdf</a><br>
<br>
Maybe hpn-ssh is something I can work into my process.<br>
<br>
<br>
On Thu, Jan 2, 2020 at 10:26 AM Michael Di Domenico<br>
<<a href="mailto:mdidomenico4@gmail.com" target="_blank">mdidomenico4@gmail.com</a>> wrote:<br>
><br>
> Has anyone gotten rsync to push wire-speed<br>
> transfers of big files over 10G links? I'm trying to sync a directory<br>
> with several large files. The data is coming from local disk to a<br>
> Lustre filesystem. I'm not using ssh in this case. I have 10G<br>
> Ethernet between both machines. Both endpoints have more than<br>
> enough spindles to handle 900MB/sec.<br>
><br>
> I'm using 'rsync -rav --progress --stats -x --inplace<br>
> --compress-level=0 /dir1/ /dir2/', but each file (each hundreds of<br>
> GB) is getting choked at 100MB/sec.<br>
><br>
> Running iperf and dd between the client and the Lustre filesystem<br>
> hits 900MB/sec, so I fully believe this is an rsync limitation.<br>
><br>
> Googling around hasn't yielded any solid advice; most of the articles<br>
> are from people who didn't check the network first...<br>
><br>
> With the prevalence of 10G these days, I'm surprised this hasn't come<br>
> up before; either that or my google-fu really stinks, which doesn't<br>
> bode well given it's the first work day of 2020 :(<br>
_______________________________________________<br>
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" target="_blank">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br>
To change your subscription (digest mode or unsubscribe) visit <a href="https://beowulf.org/cgi-bin/mailman/listinfo/beowulf" rel="noreferrer" target="_blank">https://beowulf.org/cgi-bin/mailman/listinfo/beowulf</a><br>
</blockquote></div>
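<div><br></div><div>For reference, a sketch of a bbcp invocation along the lines of the NICS page linked above. The hostname, paths, stream count, and window size here are illustrative assumptions; tune them to your own link and disks:</div>

```shell
# Illustrative only -- hostname, paths, and tuning values are hypothetical.
# -P 2  : report progress every 2 seconds
# -w 8m : use an 8 MB TCP window
# -s 16 : open 16 parallel TCP streams
# -T ...: tells bbcp how to start itself on the target when it is not in
#         the remote $PATH (%I = identity file, %U = user, %H = host)
bbcp -P 2 -w 8m -s 16 \
  -T 'ssh -x -a %I -l %U %H /path/to/bbcp' \
  /dir1/bigfile remotehost:/dir2/
```

<div>The parallel streams are the main win over single-stream rsync, since one TCP connection often tops out well below 10G wire speed even on a clean network.</div>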