Re: [Beowulf] Virtualization in head node?

David B. Ritch david.ritch.lists at gmail.com
Wed Sep 16 05:25:36 PDT 2009


At the Red Hat Summit a couple of weeks ago, RH said that with a switch
from Xen to KVM and lots of tuning, they were able to get the I/O
overhead down to 5%.  I thought that was pretty impressive.  They also
introduced a new product, Red Hat Enterprise Virtualization, which is
supposed to support process migration and all the other niceties that
we've come to expect from virtualization.  I haven't played with it yet,
but it sounds quite interesting.

I'd be interested to hear of anyone else's experiences with these.

David

On 9/16/2009 5:34 AM, Tim Cutts wrote:
>
> On 16 Sep 2009, at 8:23 am, Alan Ward wrote:
>
>>
>> I have been working quite a lot with VBox, mostly for server stuff. I
>> agree it can be quite impressive, and it has some nice features (e.g.
>> you need not stop a machine, just sleep it - and it wakes up pretty
>> fast).
>>
>> On the other hand, we found that anything that has to do with disk
>> access is pretty slow, especially when working with a local disk image
>> file.
>
> I think that's pretty standard for most virtualisation, whichever
> vendor it comes from.  The I/O is fairly sub-optimal.  I've had a fair
> bit of experience now with various VMware flavours.  The I/O performance
> of the desktop versions is fairly shocking; presumably that's largely
> because desktops and laptops tend to have fairly slow I/O to start
> with, so the virtualisation penalty is very noticeable.
>
> Our production virtualisation system uses dual-fabric SAN-attached
> storage (EVA5000), ESX 4.0 as the hypervisor, and we're running about
> 20 virtual machines per physical host.  Most of these applications are
> not I/O heavy, but really trivial benchmarking using hdparm indicates
> I/O bandwidth within the VM of about half what it would be if the
> machine were physical.  Very unscientific test, though.  I should do
> some proper testing with bonnie++...
>
> Virtual disk performance in ESX 4.0 definitely feels better than in ESX
> 3.5, but that's largely because they've fixed some fairly serious
> brokenness in the hypervisor's memory handling, which was leading to
> unnecessary swapping of the VMs.
>
> ESX 4.0 also has a new guest paravirtual SCSI driver which is supposed
> to improve virtual disk performance by about 20%, but I have yet to
> test it.
>
> Tim
>
>
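For anyone wanting to repeat the sort of quick-and-dirty check Tim
describes, something along these lines should do it. The device name
/dev/sda and the scratch directory /mnt/test are placeholders; point
them at the virtual disk you actually want to measure, and run hdparm
as root.

```shell
# Rough sequential-read check inside the guest (needs root).
# -T times cached reads (memory/bus), -t times buffered disk reads.
hdparm -tT /dev/sda

# A somewhat more serious test with bonnie++: -d picks the scratch
# directory on the disk under test, -s sets the working-set size
# (use at least 2x the VM's RAM so the page cache doesn't mask the
# disk), and -u drops to an unprivileged user when run as root.
bonnie++ -d /mnt/test -s 4g -u nobody
```

Comparing the same invocations on the bare-metal host gives the
virtualisation penalty directly, though hdparm only exercises
sequential reads - bonnie++ is the one that will show up seek and
rewrite behaviour.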


