Tuesday 10 March 2015

Cluster: Asymmetric or iSCSI SAN Storage Configuration and Performance Considerations

When we set up a new asymmetric cluster, or when an iSCSI SAN provides the central storage, the following is a guideline for how we would configure our storage.

Our configuration would be as follows:

  • JBOD or SAN Storage
    • 6TB of available storage
  • (2) Hyper-V Nodes
    • 256GB ECC RAM Each
    • 120GB Intel DC S3500 Series SSDs in RAID 1 for the OS
    • Dual 6Gbps SAS HBAs (JBOD) or Dual Intel X540-T2 10GbE (iSCSI)

There are three key storage components we need to configure.

  1. Cluster Witness (non-CSV)
    • 1.5GB Storage
  2. Common Files (CSV 1)
    • Hyper-V Settings Files
    • VM Memory Files
    • 650GB Storage
  3. Our VHDX CSVs (balance of 5,492.5GB split 50/50)
    • CSV 2 at 2,746.25GB
    • CSV 3 at 2,746.25GB

While our two nodes have a combined 512GB of RAM, we would provision a maximum of about 254GB of vRAM so that either node can carry every VM on its own during a failover. With that in mind, we would set up our Common Files CSV with 650GB of available storage.
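
The arithmetic behind that layout is simple enough to sanity check. Here is a minimal Python sketch, assuming the 6TB pool is counted as 6,144GB; the witness and Common Files figures are the ones above, and the remainder splits 50/50 across the two VHDX CSVs.

```python
# Capacity-planning sketch for the storage layout above.
# Assumes the 6TB of available storage is counted as 6,144GB.
TOTAL_GB = 6 * 1024          # 6,144GB available on the JBOD/SAN
WITNESS_GB = 1.5             # Cluster witness (non-CSV)
COMMON_GB = 650              # CSV 1: Hyper-V settings and VM memory files

balance = TOTAL_GB - WITNESS_GB - COMMON_GB   # 5,492.5GB left for VHDX storage
csv2 = csv3 = balance / 2                     # 2,746.25GB each, one CSV per node

print(f"Balance for VHDX CSVs: {balance:,.2f}GB")
print(f"CSV 2: {csv2:,.2f}GB  CSV 3: {csv3:,.2f}GB")
```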

VHDX CSVs

We split up our storage for VHDX files into at least two Storage Spaces/LUNs. Each node would own one of the resulting CSVs.

We do this to split up the I/O between the two nodes. If we had just one 5.5TB CSV then all I/O for that CSV would be processed by just the owner node.

It becomes pretty obvious that having all I/O managed by just one of the nodes can bottleneck overall storage performance. At the very least, it leaves one node not carrying its share of the load.
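
To put rough numbers to that, here is a small Python sketch (the IOPS figures are illustrative placeholders, not measurements) showing how the CSV-to-owner mapping decides how much I/O each node ends up coordinating.

```python
# Illustration only: the IOPS figures below are placeholders, not measured values.
def node_load(csv_iops, csv_owner):
    """Sum estimated IOPS per owner node for a given CSV layout."""
    load = {}
    for csv, iops in csv_iops.items():
        node = csv_owner[csv]
        load[node] = load.get(node, 0) + iops
    return load

# One large VHDX CSV owned by Node1: all of that I/O lands on Node1.
print(node_load({"VHDX CSV": 3000}, {"VHDX CSV": "Node1"}))
# {'Node1': 3000}

# Two CSVs, one owned by each node: the same load is shared between the nodes.
print(node_load({"CSV 2": 1500, "CSV 3": 1500},
                {"CSV 2": "Node1", "CSV 3": "Node2"}))
# {'Node1': 1500, 'Node2': 1500}
```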

Performance Considerations

Okay, we have our storage configured as above.

Now it’s time to set up our workloads.

  • VM 0: DC
  • VM 2: Exchange 2013
  • VM 3-6: RDSH Farm (Remote Desktop Session Hosts)
  • VM 7: SQL
  • VM 8: LoB (Line-of-Business) apps, WSUS, File, and Print

Our highest IOPS load would be SQL, followed by our RDSH VMs and then our LoB VM. Exchange likes a lot more RAM than it does I/O.

When provisioning our VHDX files we would be careful to make sure our high IOPS VMs are distributed between the two CSVs as evenly as possible. This way we avoid sending most of our I/O through one node.
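
A minimal sketch of that placement step, in Python: the per-VM IOPS weights below are hypothetical stand-ins for whatever baseline figures are actually measured, and the loop simply places the next-heaviest VM on whichever CSV currently carries less.

```python
# Distribute VMs across the two VHDX CSVs so estimated I/O is as even as possible.
# The IOPS weights are hypothetical placeholders for real baseline measurements.
estimated_iops = {
    "SQL": 1200,
    "RDSH-1": 500, "RDSH-2": 500, "RDSH-3": 500, "RDSH-4": 500,
    "LoB/WSUS/File/Print": 400,
    "Exchange": 250,
    "DC": 50,
}

csvs = {"CSV 2": [], "CSV 3": []}
totals = {"CSV 2": 0, "CSV 3": 0}

# Greedy: heaviest VM first, always onto the CSV with the lighter running total.
for vm, iops in sorted(estimated_iops.items(), key=lambda kv: kv[1], reverse=True):
    target = min(totals, key=totals.get)
    csvs[target].append(vm)
    totals[target] += iops

for name in csvs:
    print(f"{name}: {totals[name]} est. IOPS -> {', '.join(csvs[name])}")
```

With these particular weights the split comes out dead even, but the point is the approach: place VHDX files by expected load, heaviest first, rather than by VM count.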

Why 650GB for Common Files?

Even though our VM memory files would take up about 254GB of that available storage, we also need space for the configuration files themselves, small as they are, plus additional room for those just-in-case moments.

One such moment is when an admin pulls the trigger on a snapshot/checkpoint. By default the differencing disk would be dropped into the Common Files storage location.

One would hope that monitoring software would throw up an alarm letting folks know that their cluster is going to go full-stop when that location runs out of space! But sometimes that is _not_ the case, so we need enough room to run the needed merge process and get things going again.
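
As a quick sanity check on the 650GB figure, here is a short Python sketch; the 1GB configuration-file allowance is an assumption for illustration only.

```python
# Rough headroom check for the Common Files CSV (CSV 1).
COMMON_GB = 650           # provisioned Common Files storage
MEMORY_FILES_GB = 254     # worst case: memory files for all provisioned vRAM
CONFIG_FILES_GB = 1       # assumption: the VM configuration files are tiny

headroom = COMMON_GB - MEMORY_FILES_GB - CONFIG_FILES_GB
print(f"Room left for checkpoints, merges, and surprises: {headroom}GB")  # 395GB
```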

How do I know?

Okay, all of the above is just fine and dandy, and it raises the following question: how do I really know how the cluster will perform?

No one client's environment is like another's. So, we need to make sure we take performance baselines across their various workloads and talk to LoB vendors about their products and what they need in order to perform well.

We have a standing policy to build out a proof-of-concept system prior to reselling that solution to our clients. As a result of both running baselines with various apps and building out our clusters ahead of time, we now have a pretty good idea of what needs to be built into a cluster solution to meet our clients' needs.

That being said, we need to test our configurations thoroughly. Nothing could be worse than setting up a $95K cluster configuration that was promised to outperform the previous solution, only to have the new build fall flat on its face. :(

Test. Test. Test. And, test again!

NOTE: We do _not_ deploy iSCSI solutions anywhere in our solution matrix. We are a direct attached storage (SAS-based DAS) house. However, the configuration principles mentioned above apply to those deploying Hyper-V clusters on iSCSI-based storage.


Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
