Saturday, 4 February 2017

Hyper-V Compute, Storage Spaces Storage, and S2D Hyper-Converged Solutions

Lately, we at MPECS Inc. have been designing, implementing, servicing, and supporting highly available solutions: hyper-converged clusters, Hyper-V compute, Storage Spaces storage, and lab environments, along with standalone Hyper-V server solutions.

Here are some of the things we have been working on recently or have deployed within the last year.

Cluster Solutions

As has been posted here on our blog previously, we have invested heavily in the Windows Server 2016 story, especially in Storage Spaces Direct (S2D) (S2D blog posts):

[Image: Proof-of-Concept (PoC) Storage Spaces Direct (S2D) Hyper-Converged or S2D SOFS cluster solution]

The above Storage Spaces Direct PoC, based on the Intel Server System R2224WTTYSR, gives us the hands-on experience needed to deliver the hundreds of thousands of real-world, everyday IOPS our client solutions require. We can tailor a solution for a graphics firm that needs multiple 10GbE read streams to its users' systems, or for an engineering and architectural firm that requires high-performance storage for its rendering farms.

Another Storage Spaces Direct PoC we are working on is the Kepler-47 (TechNet Blog Post):

[Image: Proof-of-Concept Storage Spaces Direct 2-node cluster for under $8K with storage!]

Our goal for Kepler-47 is to deploy this solution to clients that would normally receive a single or dual Hyper-V server setup with Hyper-V Replica. Our recipe includes Intel 3700, 3600, and 3500 series SSDs, a SuperMicro Mini-ITX Intel Xeon Processor E3-1200 v5 series board, an 8-bay chassis, and Mellanox ConnectX-3 NICs for direct-connected RDMA East-West traffic. The cost for the 2-node cluster is about the same as one larger Intel Server System that would run a client's entire virtualization stack.
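
For a sense of scale, once a Kepler-47 pair is cabled up, the cluster build itself is only a handful of PowerShell commands. Here is a minimal sketch; the node names, address, and volume size below are placeholders rather than our production values:

```powershell
# Minimal 2-node S2D build sketch; names, address, and sizes are placeholders.
$Nodes = "KEPLER-N1", "KEPLER-N2"

# Validate the hardware and configuration before creating the cluster.
Test-Cluster -Node $Nodes -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

# Create the cluster without default storage, then enable Storage Spaces Direct.
New-Cluster -Name "KEPLER-47" -Node $Nodes -NoStorage -StaticAddress "192.168.10.50"
Enable-ClusterStorageSpacesDirect -CimSession "KEPLER-47"

# Carve a mirrored CSV volume out of the auto-created pool for the VMs.
New-Volume -CimSession "KEPLER-47" -StoragePoolFriendlyName "S2D*" -FriendlyName "VMStore01" `
    -FileSystem CSVFS_ReFS -Size 2TB

# Note: a 2-node cluster should also get a cloud or file share witness via Set-ClusterQuorum.
```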

[Image: 2016: Deployed 2-node SOFS via Quanta JB4602 with ~400TB Storage Spaces Parity]

In the late summer of 2016 we deployed the above SOFS cluster with the ultimate aim of adding three more JBODs for over 1.6PB (petabytes) of very cost-efficient Storage Spaces Parity storage for our client's video and image file archive. The solution utilizes four 10GbE paths per node and SMB Multichannel to provide robust access to the files on the cluster. Six HGST SAS SSDs provide the needed high-speed cache for writes to the cluster.
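
For the curious, a parity archive space with an SSD write-back cache comes together along these lines. This is a sketch only; the pool name, sizes, and cache value are placeholders, not the production configuration:

```powershell
# Sketch of a dual-parity archive space with an SSD write-back cache on a
# clustered Storage Spaces pool; pool, friendly names, and sizes are placeholders.
New-VirtualDisk -StoragePoolFriendlyName "ArchivePool" -FriendlyName "Archive01" `
    -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 `
    -ProvisioningType Fixed -Size 100TB -WriteCacheSize 32GB

# From a compute node, confirm SMB Multichannel is spreading traffic across all four 10GbE paths.
Get-SmbMultichannelConnection | Format-Table -AutoSize
```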

Our smallest cluster client is a 15-seat accounting firm running a two-node Clustered Storage Spaces and Hyper-V cluster (our blog post). Some of our largest clients are SME hosting companies with hundreds of tenants and VMs running on SOFS storage and Hyper-V compute clusters.

We can deploy highly available solutions starting at about $6K in hardware, rendering standalone Hyper-V or VMware solutions moot!

Server and Storage Hardware

We primarily utilize Intel Server Systems and Storage, as Intel's support is second to none and our solution price points end up more than competitive with equivalent Tier 1 solutions. When required, we utilize Dell Server Systems and Storage for solutions that need a 4-hour on-site warranty over 3 to 5 years or more.

Our primary go-tos for disaggregated SOFS cluster storage (storage nodes plus direct-attached storage JBOD(s)) are Quanta QCT JBODs and DataON Storage JBODs. We've had great success with both companies' storage products.

For drives we deploy Intel NVMe SSDs in both PCIe add-in card and 2.5” form factors, Intel SATA SSDs, and HGST SAS SSDs. We advise being very aware of each JBOD vendor's Hardware Compatibility List (HCL) before jumping on just any SAS SSD listed in the Windows Server Catalog Storage Spaces Approved list (Microsoft Windows Server Catalog Site for Storage Spaces).

Important: Utilizing just any vendor's drive in a Storage Spaces or Storage Spaces Direct setting can be a _costly_ error! One needs to do a lot of homework before deploying any solution into production. BTDT (Been There, Done That).
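
A quick way to start that homework is to inventory exactly what the HBAs and JBOD present to Storage Spaces and compare it against the vendor's HCL before buying drives in bulk. A simple example:

```powershell
# List what the enclosure and HBAs actually report to Storage Spaces so the
# models and firmware can be checked against the JBOD vendor's HCL.
Get-PhysicalDisk |
    Select-Object FriendlyName, Model, FirmwareVersion, BusType, MediaType, CanPool, Size |
    Sort-Object Model | Format-Table -AutoSize
```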

The spinning media we use depends on the hyper-converged or storage solution we are deploying and the results of our thorough testing.

Network Fabrics

In a Storage Spaces Direct (S2D) setting, our East-West (node-to-node) fabric is 10Gb to 100Gb Mellanox with RDMA via RoCE v1 or RoCE v2 (RDMA over Converged Ethernet) (our blog post), depending on the Mellanox ConnectX NIC version. We also turn to RoCE for North-South (compute-to-storage) traffic in our disaggregated cluster solutions.
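
For reference, the host-side DCB/PFC plumbing that RoCE depends on looks roughly like the following. This is a minimal sketch; the adapter names are placeholders, and priority 3 is simply the value we typically tag SMB Direct traffic with:

```powershell
# Install DCB, then tag SMB Direct (TCP 445) traffic and enable PFC on that priority only.
Install-WindowsFeature -Name Data-Center-Bridging

New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Ignore DCBX from the switch and apply QoS to the RDMA-capable ports (placeholder names).
Set-NetQosDcbxSetting -Willing $false
Enable-NetAdapterQos -Name "SLOT 2 Port 1", "SLOT 2 Port 2"

# Confirm RDMA is live end to end.
Get-NetAdapterRdma
Get-SmbClientNetworkInterface | Where-Object RdmaCapable
```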

For 10GbE starter solutions on both the storage and compute networks, the NETGEAR XS716T is the go-to switch. We always deploy the switches in pairs, whether for the storage-to-compute fabric or the workload Hyper-V virtual switch, to provide network resilience. NETGEAR's switches are very well priced for the entry-level to mid-level solutions we deploy.
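
On the host side, deploying in pairs usually means a Switch Embedded Teaming (SET) based Hyper-V virtual switch with one port cabled to each physical switch. A rough sketch, with placeholder adapter and switch names:

```powershell
# Hyper-V virtual switch using Switch Embedded Teaming (SET), with one port
# cabled to each XS716T so either physical switch can fail without an outage.
New-VMSwitch -Name "vSwitch-Workload" -NetAdapterName "10GbE-Port1", "10GbE-Port2" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Add a management vNIC on top of the teamed switch if the host needs one.
Add-VMNetworkAdapter -ManagementOS -SwitchName "vSwitch-Workload" -Name "Management"
```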

Cluster Lab

It’s no secret that we invest a lot in our client solution labs and network shadow solutions (our blog post). It is a point of principle that we make sure our solutions work as promised _before_ we even consider selling them to our clients.

One does not need to look far to find five-figure, six-figure, seven-figure, or larger solution failures. Recent catastrophic failures at the Australian Taxation Office (Bing search) or 123-reg (Bing search) come to mind. It's not difficult to find stories of very expensive solutions failing to deliver on their big promises despite their big price tags.

The onus is on us to make sure we under-promise and over-deliver on every solution!

Our Solutions

We can deliver a wide variety of solutions with the following being a partial list.

  • Storage Spaces Direct (S2D)
    • 2 to 16 nodes
    • Hyper-Converged running both compute and storage
    • SOFS mode to provide high IOPS storage
    • Hardware agnostic solution sets
    • Host User Profile Disks (UPDs) in Azure, on VMware, or on our solutions
  • Scale-Out File Server Clusters (2 to 5 nodes, 1 or more JBODs)
    • IOPS tuned for intended workload performance
    • Large Volume backup and archival storage
    • Multiple Enclosure Resilience for additional redundancy
  • Hyper-V Compute Clusters (2 to 64 nodes)
    • Tuned to workload type
    • Tuned for workload density
  • Clustered Storage Spaces (2 nodes + 1 JBOD)
    • Our entry-level go-to for small to medium business
    • Kepler-47 fits in this space too
  • RDMA RoCE via Mellanox Ethernet Fabrics for Storage <—> Compute
    • We can deploy 10Gb to 100Gb of RDMA fabric
    • High-Performance storage to compute
    • Hyper-converged East-West fabrics
  • Shadow Lab for production environments
    • Test those patches or application updates on a shadow lab
  • Learning Lab
    • Our lab solutions are very inexpensive
    • Four node S2D cluster that fits into a carry-on
    • Can include an hour or more for direct one-on-one or small-group learning
      • Save _lots_ of time sifting through all the chaff to build that first cluster
  • Standalone Hyper-V Servers
    • We can tailor and deliver standalone Hyper-V servers
    • Hyper-V Replica setups to provide some resilience (see the sketch after this list)
    • Ready for greenfield deployment, migration from an existing environment, or side-by-side migration
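
As a taste of what a Hyper-V Replica setup involves, the following is a minimal sketch of a Kerberos-based pairing; the host names, VM name, and paths are placeholders only:

```powershell
# On the replica (target) host: accept inbound replication over Kerberos/HTTP.
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Hyper-V\Replica"
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"

# On the primary host: enable replication for a VM with a 5-minute interval and seed it.
Enable-VMReplication -VMName "SVR-APP01" -ReplicaServerName "HOST2.domain.local" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos -ReplicationFrequencySec 300
Start-VMInitialReplication -VMName "SVR-APP01"
```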

Our solutions arrive at our clients' doors ready to run production or lab workloads. Just ask us!

Or, if you need help with an existing setup we’re here. Please feel free to reach out.

Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book

3 comments:

  1. Hi Philip,

    you are a bit ahead of me with this, but I too stumbled across the NETGEAR XS716T range earlier today.

    Do you inter-connect these to each other for failover? I'm just trying to get my head around the physical networking requirements.

    e.g. I have a core switch for client connections, but it is all 1GbE ports. So I was thinking of the NETGEARs you mentioned with a 4-node S2D cluster going into those. However, I'm not clear if I need to have a direct link between both the NETGEARs? Obviously I wouldn't want the 10GbE SMB traffic making its way over the 1GbE switch....

    any help appreciated. I think I am over-thinking it

    ReplyDelete
  2. flinty,

    On the storage side of things we don't connect the switches. On the virtual switch side of things we'd LAG a couple of passive 10GbE copper cables between them.

    For S2D in a four-node setting, run two Mellanox SX1012X switches with two Mellanox 10GbE NICs per node for East-West traffic. Then use two XS716T switches across two Intel X540-T2 10GbE NICs (one port per switch) with the dual LAG connecting them.

    I suggest a Cisco SG500X series switch to give you one or two 10GbE uplinks.

    ReplyDelete

NOTE: All comments are moderated.