[Beowulf] Looking for block size settings (from stat) on parallel filesystems

Craig Tierney Craig.Tierney at noaa.gov
Thu Jun 17 13:59:13 PDT 2010

I am looking for a little help to find out what block sizes (as shown
by stat) are reported by Linux-based parallel filesystems.

You can find this by running stat on a file.  For example on Lustre:

# stat /lfs0/bigfile 
  File: `/lfs0/bigfile'
  Size: 1073741824	Blocks: 2097160    IO Block: 2097152 regular file
Device: 59924a4a8h/1502839976d	Inode: 45361266    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2010-06-17 20:24:32.000000000 +0000
Modify: 2010-06-17 20:16:49.000000000 +0000
Change: 2010-06-17 20:16:49.000000000 +0000
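If you would rather query this from a program than parse stat's output, a minimal sketch in C (io_blocksize is just an illustrative helper name, not part of any library) reads the same field directly from the stat structure:

```c
#include <sys/stat.h>

/* Return the preferred I/O transfer size that stat(2) reports for
 * `path` -- the same value shown in the "IO Block:" field above --
 * or -1 if the path cannot be stat'ed. */
static long io_blocksize(const char *path)
{
    struct stat st;

    if (stat(path, &st) != 0)
        return -1;
    return (long)st.st_blksize;
}
```

On the Lustre mount above this would return 2097152; on a filesystem that advertises 4k blocks it would return 4096.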

If anyone can run this test and provide me with the filesystem
and result (as well as the OS used), it would be a big help.  I am 
specifically looking for GPFS results, but other products (Panasas, 
GlusterFS, NetApp GX) would be helpful.

Why do I care?  Because in netcdf, when nf_open or nf_create is
called, it uses the block size found in the stat structure.  On
Lustre that is 2 MB, so writes are very fast.  However, if the number
comes back as 4k (which some filesystems report), then writes are
slower than they need to be.  This isn't just a netcdf issue: the
Linux tool cp does the same thing, using a buffer size that matches
the reported block size of the destination filesystem.
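The cp-style behavior described above can be sketched roughly as follows; copy_file is a hypothetical helper for illustration, not the actual coreutils code, which is considerably more involved:

```c
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

/* Rough sketch: copy `src` to `dst`, sizing the copy buffer from the
 * destination's st_blksize so writes match the filesystem's preferred
 * block size.  A 2 MB st_blksize (Lustre) means 2 MB writes; a 4k one
 * means many small writes.  Returns 0 on success, -1 on error. */
static int copy_file(const char *src, const char *dst)
{
    int in = open(src, O_RDONLY);
    if (in < 0)
        return -1;
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) { close(in); return -1; }

    /* Ask the destination filesystem for its preferred I/O size. */
    struct stat st;
    size_t bufsize = (fstat(out, &st) == 0 && st.st_blksize > 0)
                         ? (size_t)st.st_blksize : 4096;

    char *buf = malloc(bufsize);
    if (!buf) { close(in); close(out); return -1; }

    ssize_t n;
    int rc = 0;
    while ((n = read(in, buf, bufsize)) > 0)
        if (write(out, buf, (size_t)n) != n) { rc = -1; break; }
    if (n < 0)
        rc = -1;

    free(buf);
    close(in);
    close(out);
    return rc;
}
```

The point is the fstat call: the copy loop never chooses its own transfer size, it trusts whatever st_blksize the destination filesystem advertises, which is exactly why a filesystem that reports 4k gets slower writes than it could handle.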
