I'm running bonnie++ on an xlarge instance right now with 30 GB files on
/mnt. I'll post the results when it finishes. I also have Ganglia set
up on the node, so you can check that out until I shut the instance
down:

http://ec2-72-44-53-20.compute-1.amazonaws.com/ganglia
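
For reference, the run itself is along these lines (the -u/-m values are
just placeholders; -s is in MB, so 30720 is roughly 30 GB, and bonnie++
wants a -u flag when run as root):

  bonnie++ -d /mnt -s 30720 -u nobody -m ec2-xlarge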

On Fri, Mar 7, 2008 at 12:05 PM, Peter Skomoroch <peter.skomoroch@gmail.com> wrote:

Joe, thanks for the feedback. The bonnie results were not actually mine; I
was just pointing to some numbers run by Paul Moen.

> Your 1 GB file data is likely more representative, but with 15 GB RAM,
> you need to be testing 30-60 GB files.

I'll try to tweak the BPS bonnie tests to run some large files...
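
Probably something along these lines, dropping the page cache between runs
so the numbers reflect the disks rather than the cache (the sizes and flags
are just a sketch, not what the BPS scripts do today):

  for size in 30720 61440; do                   # 30 GB and 60 GB, in MB
      sync; echo 3 > /proc/sys/vm/drop_caches   # drop cached data (needs root)
      bonnie++ -d /mnt -s ${size} -u nobody -m ec2-xlarge-${size}
  done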

On Fri, Mar 7, 2008 at 11:57 AM, Joe Landman <landman@scalableinformatics.com> wrote:

Peter Skomoroch wrote:

> Extra Large Instance:
>
> 15 GB memory
> 8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each)
> 1,690 GB instance storage (4 x 420 GB plus 10 GB root partition)
> 64-bit platform
> I/O Performance: High

Note: minor criticism, but overall, nice results.

Your bonnie results are worth a quick comment. Any time you have bonnie or
IOzone (or other IO benchmarks) testing file sizes less than RAM size, you
are not actually measuring disk IO. This is cache speed, pure and simple:
either page/buffer cache, or RAID cache, or whatever.

We have had people tell us to our faces that their 2 GB file results (on a
16 GB RAM machine) were somehow indicative of real file performance, when,
had they walked over to the units they were testing, they would have
noticed the HD lights simply not blinking... Yeah, an amusing beer story
(the longer version of it), but a problem nonetheless.

Your 1 GB file data is likely more representative, but with 15 GB RAM, you
need to be testing 30-60 GB files.

Not trying to be a marketing guy here or anything like that... we test our
JackRabbit units with 80 GB to 1.3 TB sized files. We see (sustained)
750 MB/s - 1.3 GB/s in these tests. We also note some serious issues with
the Linux buffer cache and multiple RAID controllers (the buffer cache
appears to serialize access). We do this because we actually want to
measure disk performance, not buffer cache performance.

That criticism aside, nice results. It shows what a "cloud" can do.

> Price: $0.80 per instance hour

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC
email: landman@scalableinformatics.com
web  : http://www.scalableinformatics.com
       http://jackrabbit.scalableinformatics.com
phone: +1 734 786 8423
fax  : +1 866 888 3112
cell : +1 734 612 4615

--
Peter N. Skomoroch
peter.skomoroch@gmail.com
http://www.datawrangling.com