[Beowulf] network transfer issue to disk, old versus new hardware

David Mathog mathog at caltech.edu
Sat Jun 2 17:39:46 PDT 2007


I can't quite wrap my head around a recent nettee result; perhaps
one of the network gurus here can explain it.

The tests were these:

A.  Sustained write to disk:

    sync; accudate; dd if=/dev/zero bs=512 count=1000000 of=test.dat; \
    sync; accudate

    (accudate is a little utility of mine which is like date but
     gives times to millisecond resolution.  Subtract the two times
     to calculate the sustained write rate to disk.)
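
    A utility like that can be as small as the following sketch,
    assuming POSIX gettimeofday() (just the idea, not the actual
    accudate source):

      /* sketch of an accudate-like timestamp printer: prints the
         current time with millisecond resolution */
      #include <stdio.h>
      #include <sys/time.h>
      #include <time.h>

      int main(void)
      {
          struct timeval tv;
          struct tm tm;
          char stamp[32];

          gettimeofday(&tv, NULL);       /* seconds + microseconds */
          localtime_r(&tv.tv_sec, &tm);
          strftime(stamp, sizeof stamp, "%Y-%m-%d %H:%M:%S", &tm);
          printf("%s.%03ld\n", stamp, (long)(tv.tv_usec / 1000));
          return 0;
      }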

B.  Transfer of 512 MB from one node to another:

    first node:  
       dd if=/dev/zero bs=512 count=1000000 | \
       nettee -in - -next secondnode -v 63
    second node: 
       nettee -out test.dat

C.  Same as B, but buffer nettee output
    second node:
       nettee -out - | mbuffer -m 4000000 >test.dat

D.  Calculate the transfer rate if the read from the network and the
    write to disk are strictly sequential (alternating read, write),
    taking 11.7 MB/s as the raw network speed measured in G below:

     1/(1/11.7 + 1/(speed from A))
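
     For example, plugging in the measured numbers:

       OLD:  1/(1/11.7 + 1/17) ≈ 6.9  MB/s
       NEW:  1/(1/11.7 + 1/40) ≈ 9.05 MB/s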

E.  Ratio: Observed (B) / expected (D)
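
    (With the measured values from the table below:
     OLD 7.4/6.9 ≈ 1.07, NEW 10.47/9.05 ≈ 1.16.)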

F.  Pipe speed (lowest of 5 consecutive tests; it varies a lot,
    probably because of other activity on the nodes, even though they
    were nominally quiescent; the highest was around 970 MB/s on both
    platforms):
    dd if=/dev/zero bs=512 count=1000000 >/dev/null

G.  Raw network speed (move the data, then throw it out)

     first node:
       dd if=/dev/zero bs=512 count=1000000 | \
       nettee -in - -next secondnode -v 63
     second node: 
       nettee -out /dev/null

This was carried out on two different sets of hardware, both with
100BaseT networks (different switches though):

Old:  Athlon MP 2200+, Tyan S2466MPX mobo, 2.6.19.3 kernel, 512 MB RAM
New:  Athlon64 3700+ CPU, ASUS A8N5X mobo, 2.6.21.1 kernel, 1 GB RAM

Here are the results, all in MB/s except E, which is a ratio:

     OLD    NEW
A    17      40
B     7.4    10.47
C     7.4    11.43
D     6.9     9.05
E     1.07    1.16
F   743     603
G    11.77   11.71

Start with G: in both cases the hardware could push data across
the network at almost exactly the same speed.  From A we see that
the disks on the older machines are considerably slower than the ones
on the newer machines (hdparm showed the same values for OLD/NEW, so
it isn't an obvious misconfiguration).  From D we expect OLD to be
slower than NEW, and B shows that this is indeed the case.  It's a
little better than purely sequential because there's some parallelism
in the read part of the network transfer, giving ratios greater than
1 (E).  There's plenty of pipe bandwidth (F).  With the network read
and the disk write fully overlapped, the rate should approach the
slower of the two stages, i.e. roughly the 11.7 MB/s network rate
from G.  Yet when we put mbuffer in (C) there is no speedup AT ALL
on OLD, while NEW gets a nice one, as expected, coming close to that
bound.
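
For what it's worth, the technique mbuffer applies is roughly the
following (a sketch of the idea, not mbuffer's actual source): one
thread drains stdin into a ring buffer while a second thread writes
the buffer out, so the network read and the disk write can proceed
in parallel instead of strictly alternating.

    /* ringpipe.c - sketch of an mbuffer-style pipe buffer.
       compile: cc -pthread ringpipe.c -o ringpipe
       Illustrative only; not mbuffer's implementation. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define BUFSZ (4*1024*1024)            /* ~4 MB, like -m 4000000 */

    static char buf[BUFSZ];
    static size_t head, tail, count;       /* ring buffer state */
    static int done;                       /* reader saw EOF */
    static pthread_mutex_t mtx      = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  notfull  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  notempty = PTHREAD_COND_INITIALIZER;

    /* reader thread: stdin -> ring buffer */
    static void *reader(void *arg)
    {
        char tmp[65536];
        ssize_t n;
        (void)arg;
        while ((n = read(STDIN_FILENO, tmp, sizeof tmp)) > 0) {
            size_t off = 0;
            pthread_mutex_lock(&mtx);
            while (off < (size_t)n) {
                while (count == BUFSZ)     /* full: writer is behind */
                    pthread_cond_wait(&notfull, &mtx);
                size_t chunk = (size_t)n - off;
                if (chunk > BUFSZ - count) chunk = BUFSZ - count;
                if (chunk > BUFSZ - head)  chunk = BUFSZ - head;
                memcpy(buf + head, tmp + off, chunk);
                head = (head + chunk) % BUFSZ;
                count += chunk;
                off   += chunk;
                pthread_cond_signal(&notempty);
            }
            pthread_mutex_unlock(&mtx);
        }
        pthread_mutex_lock(&mtx);
        done = 1;
        pthread_cond_signal(&notempty);
        pthread_mutex_unlock(&mtx);
        return NULL;
    }

    /* main thread: ring buffer -> stdout */
    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, reader, NULL);
        for (;;) {
            pthread_mutex_lock(&mtx);
            while (count == 0 && !done)
                pthread_cond_wait(&notempty, &mtx);
            if (count == 0 && done) {      /* drained and EOF */
                pthread_mutex_unlock(&mtx);
                break;
            }
            size_t chunk = count;
            if (chunk > BUFSZ - tail) chunk = BUFSZ - tail;
            pthread_mutex_unlock(&mtx);
            /* write outside the lock so the reader can refill */
            ssize_t w = write(STDOUT_FILENO, buf + tail, chunk);
            if (w <= 0) { perror("write"); exit(1); }
            pthread_mutex_lock(&mtx);
            tail = (tail + (size_t)w) % BUFSZ;
            count -= (size_t)w;
            pthread_cond_signal(&notfull);
            pthread_mutex_unlock(&mtx);
        }
        pthread_join(t, NULL);
        return 0;
    }

Usage would be the same as in C above, e.g. on the second node:
nettee -out - | ./ringpipe > test.dat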

Everything is as it should be for NEW, but why isn't mbuffer
doing its thing on the OLD machines?

Thanks,

David Mathog
mathog at caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech


