Tuesday 30 June 2015

Hyper-V Virtualization 101: Hardware Considerations

When we deploy a Hyper-V virtualization solution we look at:
  1. VM IOPS Requirements
  2. VM vRAM Requirements
  3. VM vCPU Requirements
In that order.

Disk Subsystem

The disk subsystem tends to be the first bottleneck.
For a solution with dual E5-2600 series CPUs and 128GB of RAM hosting roughly 16 VMs, we'd be looking at 16 to 24 10K SAS drives at a minimum, behind a hardware RAID controller with 1GB of non-volatile or battery-backed cache.
RAID 6 is our go-to for array configuration.
Depending on workloads, one can look at Intel's DC S3500 series SSDs, or the higher-endurance DC S3700 series models, to get more IOPS out of the disk subsystem.
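To put some rough numbers to the spindle counts above, here is a minimal sizing sketch. The per-drive IOPS figure, the RAID 6 write penalty of 6, and the 70/30 read/write mix are my assumptions for illustration, not figures from this article; plug in your own workload profile.

```python
# Rough usable-IOPS estimate for a RAID 6 array of 10K SAS drives.
# Assumed figures (not from the article): ~140 IOPS per 10K SAS
# spindle, RAID 6 write penalty of 6, 70% read / 30% write mix.

def usable_iops(drives, per_drive_iops=140, write_penalty=6, read_ratio=0.7):
    raw = drives * per_drive_iops
    # Each front-end write costs write_penalty back-end I/Os in RAID 6.
    return raw / (read_ratio + (1 - read_ratio) * write_penalty)

for n in (16, 24):
    print(f"{n} drives -> ~{round(usable_iops(n))} usable IOPS")
# 16 drives -> ~896 usable IOPS
# 24 drives -> ~1344 usable IOPS
```

Even the 24-drive configuration lands well under 1,500 mixed IOPS, which is why the disk subsystem tends to be the first bottleneck and why SSDs enter the conversation for heavier workloads.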


Memory

Keep in mind that the physical RAM is split between the two processors, so one needs to be mindful of how the vRAM is divvied up between the VMs.
Too much vRAM assigned to one or two VMs can force memory to be juggled between the two physical CPUs' NUMA nodes.
Note that each VM's vRAM gets a matching file written to disk. So, if we allocate 125GB of vRAM to the VMs, there will be 125GB of files on disk.
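The two NUMA points above can be sketched as a quick check: does each VM's vRAM fit within one node, and what is the on-disk footprint of the per-VM memory files? The VM names and sizes below are illustrative assumptions, not from the article; the 64GB-per-node figure follows from the dual-CPU, 128GB host in the example.

```python
# Sketch: NUMA fit and disk footprint for a dual-socket, 128GB host.
# VM names and vRAM assignments are illustrative assumptions.

node_gb = 128 // 2  # physical RAM per NUMA node on a two-socket host
vms = {"DC1": 4, "SQL1": 48, "FS1": 8, "RDS1": 72}  # GB of vRAM each

for name, vram in vms.items():
    verdict = "spans NUMA nodes" if vram > node_gb else "fits in one node"
    print(f"{name}: {vram}GB -> {verdict}")

# Each VM's vRAM is backed by a file on disk, so the total allocation
# is also the disk space consumed by those files.
print(f"memory-file footprint on disk: {sum(vms.values())}GB")
```

In this sketch only the 72GB VM exceeds a 64GB node, so it is the one whose memory access gets juggled across sockets.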


CPU

And finally, each vCPU within a VM represents a thread presented to the physical CPU. For VMs with multiple vCPUs, every thread (vCPU) for that VM needs to be scheduled onto the CPU's logical processors. So, the more vCPUs we assign to a VM, the more the CPU's scheduling logic needs to juggle those threads to get them processed.
The end result? More vCPUs is not always better.
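A simple way to keep an eye on this is the vCPU-to-logical-processor ratio. The core count and per-VM vCPU assignment below are illustrative assumptions for a dual E5-2600 host like the one in the example; they are not figures from the article.

```python
# Sketch: vCPU oversubscription ratio on a dual E5-2600 host.
# 8 cores per socket and 4 vCPUs per VM are assumed for illustration.

logical_procs = 2 * 8 * 2   # 2 sockets x 8 cores x Hyper-Threading = 32 LPs
vcpus_assigned = 16 * 4     # 16 VMs at 4 vCPUs each

ratio = vcpus_assigned / logical_procs
print(f"{vcpus_assigned} vCPUs on {logical_procs} logical processors "
      f"= {ratio:.1f}:1 oversubscription")
```

A 2:1 ratio like this is usually comfortable; as the ratio climbs, so does the scheduling overhead described above, which is why trimming vCPUs from a VM can actually improve its performance.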
I have an Experts Exchange article on Some Hyper-V Hardware and Software Best Practices that should be of some assistance too. In it I speak about the need to tweak the BIOS settings on the server, hardware configurations that eliminate single points of failure (SPOFs), and more.


In the end, it is up to us to make sure we test our configurations before we deploy them. Having a high five-figure SAN installed to solve certain performance "issues" only to find out _after_ the fact that they still exist can be a very bad place to be.
We test all aspects of a standalone and clustered system to discover its strengths and weaknesses. While this can be a very expensive policy, to date we’ve not had one performance issue with our deployments.
Our testing also lets us present IOPS and throughput reports based on sixteen different allocation sizes (hardware and software) to our client _and_ to the vendor complaining about our system. ;)
Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book


Unknown said...

The Experts Exchange article looks fantastic, but without a premium subscription you can't read the whole thing. It only shows up through the 2nd paragraph in the Network Adapters section. :-(

Philip Elder Cluster MVP said...

You can sign up for a freebie 30 day subscription that should give access to the article?