Glen,

I have had great success with the *right* 10GbE NIC and NFS. The important things to consider are:

How much bandwidth will your backend storage provide? With 2 x 4Gb FC I'm guessing best case is around 600MB/s, but likely less.

What access patterns do the "typical apps" have?
- All nodes read from a single file (no problem for NFS, and fscache may help even more).
- All nodes write to a single file (NFS may need some help, or may be too slow even when tuned for this).
- All nodes read and write to separate files (NFS is fine if the files aren't too big for the OS to cache reasonably; a sample mount line is below).

The number of I/O servers really is a function of how much disk throughput you have on the backend, the frontend, and through the kernel/filesystem goo. In my experience a 10GbE NIC from Myricom can easily sustain 500-700MB/s if the storage behind it can keep up and the access patterns aren't evil. Other NICs, from vendors large and small, can fall apart at 3-4Gb/s, so be careful and test the network first before assuming your FS is the troublemaker (a quick sanity check is sketched below). There are cheap switches with 2 or 4 10GbE CX4 ports that make this much simpler and safer, with or without the parallel FS options.

Depending on how big/small and how "scratch" the need is... a big tmpfs/ramdisk can be fun :)
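For the "separate files" NFS case, a minimal sketch of the kind of client-side mount tuning I mean (the server name is a placeholder, and the sizes are just starting points; measure and adjust):

    # io-server is hypothetical; bump rsize/wsize for streaming I/O
    mount -t nfs -o tcp,hard,intr,rsize=32768,wsize=32768 io-server:/export/scratch /scratch

Bigger rsize/wsize helps large sequential reads and writes, but it won't save you from every node hammering the same file.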
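For the "test the network first" part, something as simple as iperf between a compute node and the I/O server will tell you whether the NIC and switch can actually move close to 10Gb/s before you blame the filesystem (hostname again a placeholder):

    # on the I/O server:
    iperf -s

    # on a compute node: 4 parallel streams for 30 seconds
    iperf -c io-server -P 4 -t 30

If that comes back at 3-4Gb/s, fix the network before you spend any time tuning PVFS2 or NFS.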
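And if the ramdisk idea fits your job sizes, it's a one-liner per node (the size is just an example; tmpfs only consumes RAM as files are actually written):

    mount -t tmpfs -o size=8g tmpfs /scratch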
Good luck!
Greg


On Sep 25, 2008, at 9:01 AM, beowulf-request@beowulf.org wrote:

> Date: Thu, 25 Sep 2008 09:40:54 -0400
> From: Glen Beane <Glen.Beane@jax.org>
> Subject: [Beowulf] scratch File system for small cluster
> To: "beowulf@beowulf.org" <beowulf@beowulf.org>
> Message-ID: <C5010D26.184D%glen.beane@jax.org>
> Content-Type: text/plain; charset="iso-8859-1"
>
> I am considering adding a small parallel file system (~5-10TB) to my
> small cluster (~32 2x dual core Opteron nodes) that is used mostly by a
> handful of regular users. Currently the only storage accessible to all
> nodes is home directory space provided by the Lab's IT department (a SAN
> volume connected to the head node by 2x FC links and NFS exported to the
> compute nodes). I don't have to "worry" about the IT-provided SAN space -
> they back it up, provide redundant hardware, etc. The parallel file
> system would be scratch space (and not backed up by IT). We have a mix
> of home-grown apps doing a pretty wide range of things (some do a lot of
> I/O, others don't), plus things like BLAST and BLAT.
>
> Can anyone out there provide recommendations for a good solution for
> fast scratch space for a cluster of this size?
>
> Right now I was thinking about PVFS2. How many I/O servers should I
> have, and how many cores and RAM per I/O server?
> Are there other recommendations for fast scratch space? (It doesn't have
> to be a parallel file system; something with less hardware would be
> nice.)
>
> --
> Glen L. Beane
> Software Engineer
> The Jackson Laboratory
> http://www.jax.org