[Beowulf] glusterfs and openmpi/mpich problems

Gerry Creager gerry.creager at tamu.edu
Thu Jan 8 10:25:55 PST 2009


We've been working with gluster of late on our high-throughput cluster 
(126 nodes, gigabit-connected).  We did some tweaking recently, and now 
my test code, an instance of WRF on 128 cores, just sort of dies.

More specifically, it takes 19 minutes to write the first 403 MB file to 
disk while various tasks mindlessly burn CPU time, and only the initial 
output file ever appears to get written.
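For what it's worth, a quick standalone write test against the gluster 
mount should show whether the filesystem itself is the bottleneck rather 
than WRF's I/O layer.  Something along these lines (just a sketch; the 
/mnt/gluster mount point and file name are placeholders, not our actual 
paths):

/* Minimal write-throughput probe (sketch only).
 * Writes ~403 MB in 1 MB chunks to an assumed gluster mount point
 * and reports elapsed time and MB/s.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

int main(void)
{
    const char *path = "/mnt/gluster/write_test.dat"; /* assumed mount */
    const size_t chunk = 1 << 20;        /* 1 MB per write() */
    const size_t total = 403UL << 20;    /* ~403 MB, like the WRF file */
    char *buf = malloc(chunk);
    if (!buf) { perror("malloc"); return 1; }
    memset(buf, 'x', chunk);

    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (size_t done = 0; done < total; done += chunk) {
        ssize_t n = write(fd, buf, chunk);
        if (n != (ssize_t)chunk) { perror("write"); return 1; }
    }
    fsync(fd);   /* make sure the data actually reaches the bricks */
    close(fd);
    gettimeofday(&t1, NULL);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("wrote %.0f MB in %.1f s (%.1f MB/s)\n",
           total / 1048576.0, secs, total / 1048576.0 / secs);
    free(buf);
    return 0;
}

Compiled with plain gcc and run once against the gluster mount and once 
against local disk, anything far below gigabit wire speed (roughly 
100 MB/s) on the gluster side would point at the filesystem; 403 MB in 
19 minutes works out to about 0.35 MB/s.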

Is there anyone with some history with gluster who might be willing to 
offer some help/hints?

Thanks, Gerry


