Tuesday, 29 September 2015

A 2 Node Hyper-V Cluster with Clustered Storage Spaces

We are in the process of finishing up a client's migration from a clustered SBS 2011 Standard setup to our SBS (Small Business Solution) stack and the following cluster configuration:

[Image: the cluster rack, top to bottom as listed below]

The above setup is as follows, in order of appearance from top to bottom:
  • 1U Intel Xeon E3 series server running as PDCe, ISO storage, and other non-essential roles
  • 1U single socket Intel Xeon E5 series Hyper-V node
    • On-Board i350T4 plus add-in i350T4
    • Dual 6Gbps Intel/LSI SAS HBAs
  • 1U single socket Intel Xeon E5 series Hyper-V node
    • On-Board i350T4 plus add-in i350T4
    • Dual 6Gbps Intel/LSI SAS HBAs
  • 2U DataON DNS-1640d JBOD
    • Connected via dual 6Gbps SAS cables per node
The operating system across the board for all physical and virtual servers is Windows Server 2012 R2.
Storage sharing and arbitration are handled by Clustered Storage Spaces. The above setup uses 1.2TB 10K HGST SAS drives (DataON HCL approved) set up in a Storage Spaces 3-way mirror, with the standard Space configured with two columns.
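For readers building something similar, the pool and Space creation boils down to a few PowerShell commands. The following is a minimal sketch run from one of the cluster nodes; the pool and virtual disk names are placeholders, and your storage subsystem name and CSV handling may differ slightly:

  # Pool the eligible JBOD disks against the cluster's storage subsystem
  $subsys = Get-StorageSubSystem -FriendlyName "Clustered*"
  $disks  = Get-PhysicalDisk -CanPool $true
  New-StoragePool -StorageSubSystemFriendlyName $subsys.FriendlyName -FriendlyName "JBODPool" -PhysicalDisks $disks

  # Clustered Spaces require fixed provisioning; 3-way mirror with 2 columns as described above
  New-VirtualDisk -StoragePoolFriendlyName "JBODPool" -FriendlyName "CSV01" -ResiliencySettingName Mirror -NumberOfDataCopies 3 -NumberOfColumns 2 -ProvisioningType Fixed -UseMaximumSize

  # Bring the new disk online and format it (no drive letter needed for a CSV)
  Get-VirtualDisk -FriendlyName "CSV01" | Get-Disk | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "CSV01"

  # If the disk is not already sitting in Available Storage, add it, then convert it to a CSV
  Get-ClusterAvailableDisk | Add-ClusterDisk
  Get-ClusterResource | Where-Object { $_.ResourceType -like "Physical Disk" -and $_.OwnerGroup -like "Available Storage" } | Add-ClusterSharedVolume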
The client we deployed this cluster into already had a cluster in place based on Windows Server 2008 R2. They are all of 15 to 18 seats, and they value the uptime insurance a cluster gives them since downtime is expensive for them.
Note that the cost of this particular setup based on Intel Server Systems and the DataON JBOD is very reasonable.
Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book


8 comments:

  1. So you're running the physical nodes configured as a Scale-Out File Server and then, on top of that, Hyper-V which stores the VMs on the SOFS? Just as you would do if you had 4 nodes (2 dedicated Hyper-V, 2 dedicated SOFS) but consolidated to save space and cost?

  2. No SOFS, as the VMs are not accessed via an HA share.

    Clustered Storage Spaces provides the logic that would otherwise be in a storage shelf.
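    A quick way to confirm that from either node (a generic sketch, names aside) is to check that the pool is clustered and that the Space shows up as a cluster disk resource rather than a file share:

    # The pool should report IsClustered = True
    Get-StoragePool | Select-Object FriendlyName, IsClustered, HealthStatus

    # The pool and its Spaces appear as cluster resources, not as SOFS shares
    Get-ClusterResource | Where-Object { $_.ResourceType -like "Storage Pool" -or $_.ResourceType -like "Physical Disk" } | Select-Object Name, ResourceType, OwnerNode, State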

  3. Can something like this be done without the JBOD in more of a "shared nothing" setup? In other words could we deploy a couple of 2U Servers with local storage and "mirror" the internal storage between the 2 nodes (over 10GbE) with clustered storage spaces?

    StarWind has what they refer to as a "Hyper Converged Platform". Basically Clustered Hyper-V on a "Virtual SAN" (or "Software Defined Storage") that leverages the internal storage between nodes without the JBOD requirement...
    https://www.starwindsoftware.com/starwind-hyper-converged-platform

    I'm wondering if we can accomplish something similar in Native Windows with Clustered Storage Spaces and/or if there is some reason we would not want to do so.

    Having the VMs always access local storage that is mirrored across the network, as opposed to always accessing storage across the network, seems more efficient, but I'm not an expert on all of this, so I thought I'd ask someone who is obviously very invested in clustering and the MS Storage Spaces technology.

    Thanks!

  4. The solution listed here has each node connected via DAS SAS (Direct Attached Storage via SAS).

    There are two cables per node. Each cable carries four 6Gbps SAS connections, so each node has an aggregate of 48Gbps of virtually latency-free bandwidth between it and the storage in the JBOD.

    With MPIO enabled and the Least Blocks policy set, we end up with an aggregate bandwidth of 96Gbps.
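    For anyone wiring up the same thing, the MPIO side is only a few lines of PowerShell per node (a minimal sketch; a reboot may be required after adding the feature):

    # Add the MPIO feature on each Hyper-V node
    Install-WindowsFeature -Name Multipath-IO

    # Claim the dual SAS paths to the JBOD and set the Least Blocks (LB) load-balance policy
    Enable-MSDSMAutomaticClaim -BusType SAS
    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LB

    # Confirm the default policy is LB
    Get-MSDSMGlobalDefaultLoadBalancePolicy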

    There is _no_ way to do that with current fabrics short of Mellanox's new ConnectX-4 at 100Gbps across Ethernet.

  5. Understood. But even with 96Gbps connectivity to the JBOD, isn't the JBOD itself, robust as it may be, still a "single point of failure"? Are there not advantages to a "shared nothing" approach over each node connecting to a central JBOD?

    Further research reveals that "shared nothing" is not possible with Clustered Storage Spaces in Server 2012 R2, but it looks like that is slated for Server 2016... Is that something you would consider for Hyper-V clustering in the SMB space as the feature becomes available?

  6. The single point of failure is, as it has always been, the drives. The JBOD has a dual data plane running right to the dual ports on each drive.

    Shared Nothing requires a fabric between the nodes. That's expensive. RDMA is preferred to reduce latency.

    Storage Spaces Direct (S2D) is Microsoft's shared-nothing offering in the upcoming Windows Server 2016 release. It requires a full RDMA fabric between the nodes to function, plus a stated minimum of four nodes.

    The solution in this blog requires four SAS cables. That's it. We even have a wiring guide with pictures!

    In our experience building out solutions for small to medium hosting companies, we start off with an arrangement similar to this one for our Scale-Out File Server back-end storage, with 10GbE between it and the Hyper-V compute cluster.

    We always need more compute before even looking at changes to the storage cluster.

  7. I really like this solution; it's nice and simple. I have a couple of questions: is there any reason why you are using a 3-way mirror? Also, how many disks do you have in total? I'm thinking about building a 2-node Hyper-V cluster using HP DL360 Gen9s and a D3700 enclosure, all of which is on the Microsoft HCL for Storage Spaces.

  8. jah,

    We use a 3-way mirror so that we have two disks' worth of resilience. We also try to leave at least two disks' worth + 50GB of free space in the pool to allow Storage Spaces to rebuild a failed disk into free space. This keeps our two-disk resilience while we replace the failed disk.
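    As a rough illustration of that rule (the pool name below is a placeholder), the reserve can be sanity-checked from either node:

    # Compare pool free space to two disks' worth + 50GB of rebuild headroom
    $pool    = Get-StoragePool -FriendlyName "JBODPool"
    $disk    = $pool | Get-PhysicalDisk | Sort-Object Size -Descending | Select-Object -First 1
    $reserve = (2 * $disk.Size) + 50GB
    $free    = $pool.Size - $pool.AllocatedSize
    if ($free -lt $reserve) { Write-Warning ("Pool free space {0:N0}GB is below the {1:N0}GB reserve target" -f ($free/1GB), ($reserve/1GB)) }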

