Thursday, 5 June 2014

Cluster Starter Labs: Hyper-V, Storage Spaces, and Scale-Out File Server

The following are a few ways to set up a lab environment for testing Hyper-V and Scale-Out File Server clusters that use Storage Spaces to tie in the storage.

Asymmetric Hyper-V Cluster

  • (2) Hyper-V Nodes with single SAS HBA
  • (1) Dual Port SAS JBOD (must support SES-3)
In the above configuration we set up the node OS roles and then enable Failover Clustering. Once the cluster is up we can import our uninitialized shared storage as Cluster Disks and then move them over to Cluster Shared Volumes.
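The import-and-promote step can be sketched in PowerShell using the FailoverClusters module. This is a minimal sketch that assumes the cluster is already formed; the "Cluster Disk n" names are whatever names the cluster assigns when the disks are added, so check yours first with Get-ClusterResource.

```powershell
# Add every disk the cluster can see (shared, online-capable, uninitialized)
# as a Cluster Disk
Get-ClusterAvailableDisk | Add-ClusterDisk

# Promote the two data disks to Cluster Shared Volumes; the witness disk
# stays a plain Cluster Disk for the quorum configuration later
Add-ClusterSharedVolume -Name "Cluster Disk 2"
Add-ClusterSharedVolume -Name "Cluster Disk 3"
```

CSVs appear on every node under C:\ClusterStorage\, which is what lets both Hyper-V hosts run VMs from the same volumes.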
In this scenario one should split the storage up three ways.
  1. 1 GB to 2 GB for the Witness Disk
  2. 49.9% CSV 0
  3. 49.9% CSV 1
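The three-way split above can be sketched with the Storage Spaces cmdlets. This is a hedged sketch: the pool name, virtual disk names, and sizes are hypothetical, the clustered storage subsystem's friendly name varies by OS version, and the 500 GB figures stand in for "roughly 49.9% of usable pool capacity each" in your JBOD.

```powershell
# Pool all of the shared SAS disks that are eligible for pooling
$pool = New-StoragePool -FriendlyName "LabPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem | Select-Object -First 1).FriendlyName `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Small mirrored virtual disk for the cluster witness
New-VirtualDisk -StoragePoolFriendlyName "LabPool" -FriendlyName "Witness" `
    -ResiliencySettingName Mirror -Size 1GB

# Two equal mirrored virtual disks for the CSVs (adjust 500GB to ~49.9%
# of your pool's usable capacity)
New-VirtualDisk -StoragePoolFriendlyName "LabPool" -FriendlyName "CSV0" `
    -ResiliencySettingName Mirror -Size 500GB
New-VirtualDisk -StoragePoolFriendlyName "LabPool" -FriendlyName "CSV1" `
    -ResiliencySettingName Mirror -Size 500GB
```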
Once the virtual disks have been set up in Storage Spaces, we run the quorum configuration wizard to configure the witness disk.
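The same quorum change can be made from PowerShell instead of the wizard. The Cluster Disk name here is hypothetical; use whichever name the small witness disk received when it was imported.

```powershell
# Node-and-disk-majority quorum: the two nodes plus the witness disk vote,
# so the cluster survives the loss of any one of the three
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"
```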
We use two CSVs in this setup so that we can assign 50% of the available storage to each node, sharing the I/O load. Keep this in mind when looking to deploy this type of cluster into a client setting, along with the need to make sure all paths between the nodes and the disks are redundant (dual SAS HBAs and a dual expander/controller JBOD).
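Balancing the CSV ownership described above is one cmdlet per volume. The node and disk names below are hypothetical placeholders for your two Hyper-V hosts and the two CSV disks.

```powershell
# Give each node ownership (the "coordinator" role) of one CSV so the
# metadata I/O load is split between the two hosts
Move-ClusterSharedVolume -Name "Cluster Disk 2" -Node "HV1"
Move-ClusterSharedVolume -Name "Cluster Disk 3" -Node "HV2"
```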

Symmetric Hyper-V Cluster with Scale-Out File Services

  • (2) Scale-Out File Server Nodes with single SAS HBA
  • (1) Dual Port SAS JBOD
  • (2) Hyper-V Nodes
For this particular setup we configure our two storage nodes in a SOFS cluster and utilize Storage Spaces to deliver the shares Hyper-V will access. We will have a witness share for the Hyper-V cluster and then at least one file share for our VHDX files, depending on how our storage is set up.
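On the storage cluster, the SOFS role and shares can be sketched as follows. The role name, share paths, and DOMAIN\account names are hypothetical; the Hyper-V hosts' computer accounts (HV1$, HV2$) need access because the VMs run under those machine identities.

```powershell
# Create the Scale-Out File Server role on the storage cluster
Add-ClusterScaleOutFileServerRole -Name "SOFS01"

# Witness share for the Hyper-V cluster's file share witness
New-SmbShare -Name "Witness" -Path "C:\ClusterStorage\Volume1\Witness" `
    -FullAccess "DOMAIN\HV1$","DOMAIN\HV2$","DOMAIN\Domain Admins"

# Continuously available share for the VHDX files (transparent failover)
New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\VMs" `
    -ContinuouslyAvailable $true `
    -FullAccess "DOMAIN\HV1$","DOMAIN\HV2$","DOMAIN\Domain Admins"
```

Remember that the NTFS permissions on the folders need to match the share permissions for Hyper-V over SMB to work.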

Lab Hardware

The HP MicroServer would be one option for server nodes. Dell C1100 1U off-lease servers can be found on eBay for a song. Intel RS25GB008 or LSI 6Gb SAS Host Bus Adapters (HBAs) are also easily found.
For the JBOD one needs to make sure the unit supports the full complement of SAS commands being passed through to the disks. To run in a cluster, a JBOD with two SAS ports that give each node access to all of the storage installed in the drive bays is mandatory.
The Intel JBOD2224S2DP (WSC SS Site) is an excellent unit to work with and compares feature-wise with the DataON, Quanta, and Dell JBODs now on the Windows Server Catalogue Storage Spaces list.
Some HGST UltraStar 100GB and 200GB SAS SSDs (SSD400 A and B Series) can be had via eBay every once in a while for SSD Tier and SSD Cache testing in Storage Spaces. We are running with the HGST product because it is a collaborative effort between Intel and HGST.

Storage Testing

For storage in the lab it is preferred to have at least six of the drives one would be using in production. With six drives we can run the following tests:
  • Single Drive IOPS and Throughput tests
    • Storage Spaces Simple
  • Dual Drive IOPS and Throughput tests
    • Storage Spaces Simple and Two-Way Mirror
  • Three Drive IOPS and Throughput tests
    • Storage Spaces Simple, Two-Way Mirror, and Three-Way Mirror
  • And so on, up to six drives and beyond
There are a number of factors involved in storage testing. The main thing is to establish a baseline performance metric based on a single drive of each type.
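One hedged way to capture that single-drive baseline is a scripted run of Microsoft's DiskSpd storage benchmark (Iometer, which Microsoft used for the 1M IOPS results referenced below, works equally well). The drive letter, file name, and parameter values here are illustrative; repeat the same command against each drive type and each Storage Spaces layout so the results are comparable.

```shell
# Hypothetical baseline against a single test drive mounted as T:
# -c10G  create a 10 GB test file     -b8K  8 KB blocks
# -d60   run for 60 seconds           -o16  16 outstanding I/Os
# -t4    4 worker threads             -r    random access
# -w30   30% writes / 70% reads       -Sh   disable software and hardware caching
# -L     collect latency statistics
diskspd.exe -c10G -d60 -b8K -o16 -t4 -r -w30 -Sh -L T:\iobench.dat
```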
A really good, and in-depth, read on Storage Spaces performance:
And, the Microsoft Word document outlining the setup and the Iometer settings Microsoft used to achieve their impressive 1M IOPS Storage Spaces performance:
Our previous blog post on a lab setup with a few suggested hardware pieces:
Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Chef de partie in the SMBKitchen ASP Project
Find out more at
Third Tier: Enterprise Solutions for Small Business

2 comments:

Anonymous said...

The term "asymmetrical" confuses me in this context. I don't see how it applies at all.
Symmetry implies balance or lack thereof, and as such needs two or more parts.
Is this referring to multiple dissimilar Hyper-V hosts?
-- KW

Philip Elder Cluster MVP said...

KW,

It applies because the total storage to be delivered to Hyper-V gets split up into two halves with each node having ownership.

A full SOFS cluster allows for all SOFS nodes to participate in delivering the VHDX files via SMB Multi-Channel.

Philip