vCPUs: the more the merrier?

Here we are, back to the tried-and-true notion that more vCPUs equal better performance.

If you're running a fairly low VM-to-host ratio, this might not be as big of a deal, because your vCPU-to-physical-CPU oversubscription is lower. I recently ran into a vendor that "had" to have 8 vCPUs. Now, this client has the latest and greatest hosts: Dell R620s, each with 2x 8-core CPUs (HT enabled) and 256 GB of RAM at 1600 MHz. I mean, these hosts are quick. The catch is that there are only two of them for the whole cluster. I built it this way on purpose: it's a basic SMB that runs everything virtualized (and I do mean everything; 99% of it is there for DR purposes), backed by some nice EqualLogic arrays. So now that you know the background of the client, it's time I introduce everyone to some important concepts in virtualization.

Welcome to the world of the VMkernel CPU scheduler. Its job is to decide when and where vCPUs get to run on physical cores, and for symmetric multiprocessing (SMP) virtual machines it wants to run the vCPUs together. It will wait until it can: if SMP vCPUs don't run together and the application is multithreaded, some very bad things happen (the guest sees its CPUs drifting out of sync, threads stall waiting on vCPUs that aren't running, and so on). Now that we have a basic understanding of that, let's look at some metrics that show how long it's taking the scheduler to do its thing…
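To get a feel for why a wide VM waits more, here's a toy Python sketch. This is my own illustration, not how the real ESXi scheduler is implemented; the core count and "busy" probability are made-up assumptions. It models the strict "all vCPUs at once" case described above and counts how often a VM of a given width could find enough free physical cores at a random moment on a busy 16-core host.

```python
import random

# Toy model only: assumes strict co-scheduling (every vCPU must land at once),
# which is a simplification of the real ESXi relaxed co-scheduler.
CORES = 16          # physical cores on the host (assumption for illustration)
BUSY_PROB = 0.60    # chance any given core is busy at a scheduling decision (assumption)
SAMPLES = 50_000    # number of simulated scheduling opportunities

def chance_to_run(vcpus: int) -> float:
    """Fraction of opportunities where at least `vcpus` cores are free."""
    hits = 0
    for _ in range(SAMPLES):
        free = sum(1 for _ in range(CORES) if random.random() > BUSY_PROB)
        if free >= vcpus:
            hits += 1
    return hits / SAMPLES

for width in (1, 2, 4, 8):
    print(f"{width}-vCPU VM can be placed {chance_to_run(width):.1%} of the time")
```

On a busy host the 1-vCPU VM can almost always be placed, while the 8-vCPU VM misses a lot of its chances and has to sit and wait. That waiting is exactly what the metrics below measure.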

First, let's discuss the metrics we'll be using. Taken directly from VMware:
• Run - Amount of time the virtual machine is consuming CPU resources.
• Wait - Amount of time the virtual machine is waiting for a VMkernel resource.
• Ready - Amount of time the virtual machine was ready to run, waiting in a queue to be scheduled.
• Co-Stop - Amount of time a SMP virtual machine was ready to run, but incurred delay due to co-vCPU scheduling contention.
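One practical note: esxtop reports these as percentages, while the vSphere performance charts report Ready (and Co-Stop) as a summation in milliseconds per sample interval. If you want to compare the two, here's a minimal sketch of the usual conversion, assuming the real-time chart's 20-second sample interval (the function name is just mine for illustration):

```python
def ready_ms_to_percent(ready_ms: float, interval_s: float = 20.0) -> float:
    """Convert a vCenter 'CPU Ready' summation (ms) into a percentage.

    interval_s is the chart sample interval: 20 s for real-time charts,
    longer for the historical rollups.
    """
    return ready_ms / (interval_s * 1000.0) * 100.0

# Example: 3260 ms of ready time in one 20-second real-time sample
print(f"{ready_ms_to_percent(3260):.1f}%")   # -> 16.3%
```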

Esxtop and advanced perf stats to the rescue!!

If you don’t know how to use esxtop, go read elsewhere.
Here are the values for this server (click the image below):

High ready time is bad!! 16% of its time spent trying to do work and it can't…

Ouch… this server just spent 16.3% of its time waiting to be run… poor thing. The sad part is that it's using less than 600 MHz while doing it.
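Just to put that percentage in wall-clock terms, here's a quick back-of-the-envelope calculation (pure arithmetic, nothing VMware-specific):

```python
# Rough arithmetic: what does ~16.3% ready time cost in wall-clock terms?
ready_pct = 16.3  # %RDY observed for this VM

for window_s in (1, 20, 60):
    lost_s = window_s * ready_pct / 100.0
    print(f"over {window_s:>2}s, roughly {lost_s:.2f}s spent waiting to be scheduled")
```

That's nearly ten seconds out of every minute spent queued up, while pushing less than 600 MHz of actual work.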
Let's try something: run only this one VM on a host and clear off all the other VMs. We would expect ready time to decrease, because the scheduler barely has anything to juggle with just one VM. It doesn't even have to cross NUMA nodes!! (click the image below)

As expected, it's low… super low. That's great - it'll be able to rock out any time now.
Okay, so what if I change it so everything is running on one host again…
As expected, it's back up high again (16.5%). Typically, anything over 5% ready time and you'll notice performance issues; unless you're reading this article, or you've been around the VMware block before, you won't really know how to describe it other than "sluggish."

Stay tuned for part two, where we decrease the number of vCPUs and watch the efficiency of the VM increase, even though the total amount of work it could do decreases.
