FW: [Beowulf] file IO benchmark
Imran Khan
Imran at workstationsuk.co.uk
Thu Nov 24 09:08:16 PST 2005
Joe,
I would have said the same about Panasas, but I like TerraGrid because everything
is pretty much standard, so there is not much overhead. It also offers the
following:
Standard Linux filesystem and tools, so no re-training.
TerraGrid does not use a metadata controller, so it scales linearly.
TerraGrid is the only CFS solution with a 24x7 resilience option.
TerraGrid allows you to start with one brick and expand as you need to.
Increased reliability through support for diskless cluster nodes.
TerraGrid uses a cache-coherent implementation of iSCSI to make a standard
Linux filesystem behave as a parallel filesystem.
Regards
Imran
-----Original Message-----
From: beowulf-bounces at beowulf.org [mailto:beowulf-bounces at beowulf.org] On
Behalf Of Joe Landman
Sent: 24 November 2005 15:37
To: Toon Knapen
Cc: johnh at streamline-computing.com; beowulf at beowulf.org
Subject: Re: [Beowulf] file IO benchmark
Hi Toon:
A little more than a year ago we used io-bench from HPCC, for which I did
a quick MPI port. We were seeing 1.8-2.1 GB/s sustained to a Panasas
(http://www.panasas.com) disk system from a cluster. We also used the
oocore benchmark (see http://www.nsf.gov/div/index.jsp?org=OCI). Again,
using the Panasas disk, we were about 5x the speed of the nearest SAN
solutions on 32 nodes, and more than an order of magnitude faster at 128
nodes.
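For anyone who has not used io-bench, the sketch below shows the general
shape of that kind of measurement: each MPI rank streams data to its own
file, and the slowest rank's time sets the aggregate bandwidth. This is
only an illustration, not the actual io-bench code or my port of it; the
output path under /mnt/panfs and the block sizes are made up, and a
serious run would use fsync() or O_DIRECT so the page cache does not
inflate the numbers.

    /* Toy aggregate-write-bandwidth sketch: one file per rank, timed
     * collectively.  Illustrative only; path and sizes are made up. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define BLOCK_SIZE (4 * 1024 * 1024)   /* 4 MiB per write        */
    #define NUM_BLOCKS 256                 /* 1 GiB written per rank */

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* one file per rank on the shared filesystem under test */
        char path[256];
        snprintf(path, sizeof(path), "/mnt/panfs/iobench.%d", rank);
        FILE *fp = fopen(path, "wb");
        if (fp == NULL) { perror("fopen"); MPI_Abort(MPI_COMM_WORLD, 1); }

        char *buf = malloc(BLOCK_SIZE);
        memset(buf, 0xA5, BLOCK_SIZE);

        MPI_Barrier(MPI_COMM_WORLD);           /* start all ranks together */
        double t0 = MPI_Wtime();
        for (int i = 0; i < NUM_BLOCKS; i++)
            fwrite(buf, 1, BLOCK_SIZE, fp);
        fclose(fp);
        double elapsed = MPI_Wtime() - t0;

        /* aggregate bandwidth is limited by the slowest rank */
        double slowest;
        MPI_Reduce(&elapsed, &slowest, 1, MPI_DOUBLE, MPI_MAX, 0,
                   MPI_COMM_WORLD);
        if (rank == 0) {
            double gbytes = (double)nprocs * NUM_BLOCKS * (double)BLOCK_SIZE / 1e9;
            printf("%d ranks wrote %.1f GB in %.2f s -> %.2f GB/s\n",
                   nprocs, gbytes, slowest, gbytes / slowest);
        }

        free(buf);
        MPI_Finalize();
        return 0;
    }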
If you want a copy of my rather naive MPI port of io-bench, let me
see if we can redistribute it. If you want oocore, surf over to
http://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf0605 or grab it
from http://www.nsf.gov/pubs/2006/nsf0605/oocore.tar.gz
BTW: I cannot say enough good things about the Panasas file system.
If you want raw speed to your cluster, there aren't too many things
out there that can give it a run for its money.
Joe
Toon Knapen wrote:
> John Hearns wrote:
>> On Thu, 2005-11-24 at 10:49 +0100, Toon Knapen wrote:
>>
>>
>>> The problem is that our parallel direct out-of-core solver thus needs to
>>> store tmp data on disk. We have already encountered problems when people
>>> use one global NFS-mounted filesystem to store the tmp data of all
>>> nodes in the parallel run.
>> For temporary data you should strongly encourage your users to write to
>> local scratch areas on the nodes.
>> Or, if you are writing the software, configure it to do that, perhaps
>> using an environment variable.
>
> We allow users to specify the scratch directory so that they can point it
> to a local disk, but this does not guarantee that it will actually point
> to a local disk (or a performant SAN). I don't see how an environment
> variable could solve that.
>
> t
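
One way to reconcile the two points above, sketched as a toy example:
honour a scratch-directory environment variable (the name SOLVER_SCRATCH
is made up here), and use statfs() on Linux to at least warn when the
chosen path turns out to be NFS-mounted. It still cannot tell a fast
local disk or SAN from a slow one, which is the real concern, but it
catches the common "scratch on NFS" mistake.

    /* Toy example only: the variable name SOLVER_SCRATCH and the
     * fallback path are hypothetical.  statfs() is Linux-specific. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/vfs.h>

    #define NFS_SUPER_MAGIC 0x6969        /* value documented in statfs(2) */

    static const char *pick_scratch_dir(void)
    {
        const char *dir = getenv("SOLVER_SCRATCH");  /* user/admin override */
        if (dir == NULL || *dir == '\0')
            dir = "/tmp";                    /* fall back to node-local /tmp */

        struct statfs sb;
        if (statfs(dir, &sb) == 0 && sb.f_type == NFS_SUPER_MAGIC)
            fprintf(stderr, "warning: scratch dir %s is NFS-mounted; "
                            "out-of-core I/O will be slow\n", dir);
        return dir;
    }

    int main(void)
    {
        printf("using scratch directory: %s\n", pick_scratch_dir());
        return 0;
    }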
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman at scalableinformatics.com
web : http://www.scalableinformatics.com
phone: +1 734 786 8423
fax : +1 734 786 8452
cell : +1 734 612 4615
_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf