Wednesday, 25 April 2018

Working with and around the Cloud Kool-Aid

The last year and a half have certainly had their challenges. I've been on a road of both discovery and of recovery after an accident in November of 2016 (blog post).

One of the discoveries is most certainly that my tolerance for fluff, especially marketing fluff, has been greatly reduced. Time is precious, even more so when one's faculties can be limited by a head injury. :S

Microsoft's Cloud Message

It was during one of the last feedback sessions at MVP Summit 2018 that a startling realization came about: There's still anger, and to some degree bitterness, towards Microsoft and the cloud messaging of the last ten to twelve years. My session at SMBNation 2012 had some glimpses into that anger and struggle about our business and its direction.

After the MVP Summit 2018 session, when discussing it with a Microsoft employee that I greatly respect, his response to my apology for the glimpse into my anger and bitterness was, "You have nothing to apologize for". That affirmation brought a lot home.

One realization is this: The messaging from Microsoft, and others, around Cloud has not changed. Not. One. Bit.

That messaging started out, oh so many years ago when BPOS was launched, as "Your I.T. Pro business is going to die. Get over it", to paraphrase Microsoft's change-your-business-model-or-else message.

The messaging was "successful" to some degree, as the number of I.T. Pro consultants and small businesses that hung up their guns during that first four-to-six-year period was substantial.

And yet, it wasn't entirely successful, as much of the SMB-focused Microsoft Partner network basically left Cloud sales off the table when dealing with their clients.

Today, the content of the message, and to some degree the method of delivering it, may be somewhat masked, but it is still the same: Cloud or die.

At this last MVP Summit, yet another realization came while listening to a fellow MVP and some Blue Badges (Microsoft employees) discussing various things around Cloud and Windows. It had never occurred to me that the pain we were feeling out on the street would also be felt within Microsoft, and to some degree within other vendors adopting a Cloud service.

The recent internal shuffle in Microsoft really brought that home.

On-Premises, Hybrid, and/or Cloud

We have a lot of Open Value Agreements in place to license our clients' on-premises solution sets.

Quite a few of them came up for renewal this spring. Our supplier's Microsoft licensing contact, and the contractor (v-) that kept calling, were trying to push us into Cloud Solution Provider (CSP) for all of our clients' licensing.

Much of what was said in those calls was:

  • Clients get so much more in features
  • Clients get access anywhere
  • Clients are so much more agile
  • Blah, blah, blah
  • Fluff, fluff, fluff

The Cloud Kool-Aid was being poured out by the decalitre. ;)

So, our response was, "Let's talk about our Small Business Solution (SBS)" and its great features and benefits, and how our clients have full features on-premises, via the Internet, or anywhere in between. And, oh, it's location and device agnostic. We can also run it on-premises or in someone else's Cloud.

That usually led to some sort of stunned silence on the other end of the phone.

It's as if the on-premises story has somehow been forgotten or folks have developed selective amnesia around it.

What's neat, though, is that our on-premises highly available solutions are selling really well, especially for folks that want cloud-like resilience for their own business.

That being said, there _is_ a place for Cloud.

As a rule, Cloud is a great way to extend on-premises resources for companies that experience severe business swings, such as construction companies that have slowdowns due to winter. The on-premises solution set runs the business through the quieter months, then things get scaled up in the Cloud during the summer. In this case the Cloud spend is equitable.

Business Principled Clarity

There are two very clear realities for today's I.T. Pro and SMB/SME I.T. Business:

  1. On-Premises is not going away
  2. Building a business around Cloud is possible but difficult

The on-premises story is not going to change. Repeat the Cloud message over and over and, to some degree, it becomes "truth"; that's an old adage. However, the realities on the ground remain ... despite the messaging.

Okay, there may be an exception in the smaller business with ten seats or fewer, where going all-in on Cloud can make sense (make sure to add all of those bills up, and be sitting down when doing so!).

That being said, our smallest High Availability client is 15 seats with a disaggregated converged cluster. That was before our Storage Spaces Direct Kepler-47 was finalized; that solution starts at a third of the cost.

For the on-premises story there are two primary principles operating here:

  1. The client wants to own it
  2. The client wants full control over their data and its access

Cloud vendors are not obligated to notify their tenants, and in many cases can't say anything, when law enforcement shows up to either snoop or even, in some cases, to remove the vendor's physical server systems.

Many businesses are very conscious of this fact. Plus, many governments have a deep reach into other countries, as the newly minted (as of this writing) EU privacy laws seem to be demonstrating.

Now, as far as building a business around another's Cloud offerings there are two ways that we see that happening with some success:

  1. Know a Cloud Vendor's products through and through
  2. Build an MSP (Managed Service Provider) business supporting endpoints

The first seems to be really big right now. There are a lot of I.T. companies out there selling Cloud with no idea of how to put it all together. The companies that do know how to put it all together are growing in leaps and bounds.

The MSP method is, and has been, a way to keep that monthly income going. But, don't count on it being there for too much longer as _all_ Cloud vendors are looking to kill the managed endpoint in some way.

Our Direction

So, where do we fit in all of this?

Well, our business strategy has been pretty straightforward:

  1. Keep developing and providing cloud-like services on-premises with cloud-like resilient solutions for our clients
  2. Hybrid our on-premises solutions with Cloud when the need is there
  3. Continue to help clients get the most out of their Cloud services
  4. Cultivate our partnerships with SMB/SME I.T. organizations needing HA Solutions

We have managed to re-work our business model over the last five to ten years and we've been quite successful at it. Though, it is still a work in progress and probably will remain so given the nature of our industry.

We're pretty sure we will remain successful at it as we continue to put a lot of thought and energy into building and keeping our clients and contractors happy.

Ultimately, that goal has not changed in all of the years we've been in business.

We small to medium I.T. shops have the edge over every other I.T. provider out there.

"How is that?", you might ask.

Well, we _know_ how to run a small to medium business and all of the good and bad that comes with it.

That translates into great products and services for our fellow SMB/SME business clients. It really is that easy.

The hard part is staying on top of all of the knowledge churn happening in our field today.

Conclusion

Finally, as far as the anger, and to some degree bitterness, goes: Time. It will take time before it is fully dealt with.

In the meantime ...

A friend of mine, Tim Barrett, did this comic many years ago (image credit to NoGeekLeftBehind.com):

[Image: Tim Barrett's Cloud comic]

The comic definitely puts an image to the Cloud messaging and its results. :)

Let's continue to build our dreams doing what we love to do.

Have a fantastic day and thanks for reading!

Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Tuesday, 23 January 2018

Storage Spaces Direct (S2D): Sizing the East-West Fabric & Thoughts on All-Flash

Lately we've been seeing some discussion around the amount of time required to resync an S2D node's storage after it has come back from a reboot for whatever reason.

Unlike a RAID controller where we can tweak rebuild priorities, S2D does not offer the ability to do so.

It is, in our opinion, very much a good thing that the knobs and dials are not exposed for this process.

Why?

Because there is a lot more going on under the hood than just the resync process.

While it does not happen as often anymore, there were times when someone would reach out about a performance problem after a disk had failed. After a quick look through the setup, the Rebuild Priority setting would turn out to be the culprit: someone had tweaked it from its usual 30% of cycles to 50%, 60%, or even higher, thinking that the rebuild should be the priority.

S2D Resync Bottlenecks

There are two key bottleneck areas in an S2D setup when it comes to resync performance:
  1. East-West Fabric
    • 10GbE with or without RDMA?
    • Anything faster than 10GbE?
  2. Storage Layout
    • Those 7200 RPM capacity drives can only handle ~110 to ~120 MB/second sustained
The two are not mutually exclusive culprits; depending on the setup, they can play together to limit performance.
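
To put rough numbers to those two bottlenecks, here's a quick back-of-the-envelope sketch (Python; the throughput figures are assumptions, not measurements from any particular cluster):

```python
# Back-of-the-envelope resync estimate. All figures here are assumptions,
# not measurements from any specific cluster.

def resync_hours(dirty_tb, capacity_drives, drive_mbps=115.0, fabric_gbps=10.0):
    """Hours to resync `dirty_tb` TB, capped by the slower of aggregate
    HDD throughput and usable East-West fabric bandwidth."""
    disk_mbps = capacity_drives * drive_mbps   # ~110-120 MB/s per 7200 RPM HDD
    fabric_mbps = fabric_gbps * 1000.0 / 8.0   # Gb/s -> MB/s
    bottleneck = min(disk_mbps, fabric_mbps)   # the slowest link wins
    return dirty_tb * 1_000_000 / bottleneck / 3600.0

# Example: 2 TB to resync onto six capacity drives over a 10GbE fabric.
# Disks: 6 x 115 = 690 MB/s vs. fabric: 1250 MB/s -> the disks are the cap.
print(f"~{resync_hours(2.0, 6):.1f} hours")    # ~0.8 hours
```

In this example the six capacity drives, not the 10GbE fabric, are the cap; fewer drives or a faster fabric shifts the bottleneck around, which is exactly the point.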

The physical CPU setup may also come into play but that's for another blog post. ;)

S2D East-West Fabric to Node Count

Let's start with the fabric setup that the nodes use to communicate with each other and pass storage traffic along.

This is a rule of thumb that was originally born out of a conversation at an MVP Summit a number of years back with a Microsoft fellow who was in on the S2D project at the beginning. We were discussing our own Proof-of-Concept that we had put together based on a Mellanox 10GbE and 40GbE RoCE (RDMA over Converged Ethernet) fabric. Essentially, at 4 nodes a 40GbE RDMA fabric was _way_ too much bandwidth.

Here's the rule of thumb we use for our baseline East-West Fabric setups. Note that we always use dual-port NICs/HBAs.
  • Kepler-47 2-Node
    • Hybrid SSD+HDD Storage Layout with 2-Way Mirror
    • 10GbE RDMA direct connect via Mellanox ConnectX-4 LX
    • This leaves us the option to add one or two SX1012X Mellanox 10GbE switches when adding more Kepler-47 nodes
  • 2-4 Node 2U 24 2.5" or 12/16 3.5" Drives with Intel Xeon Scalable Processors
    • 2-Way Mirror: 2-Node Hybrid SSD+HDD Storage Layout
    • 3-Way Mirror: 3-Node Hybrid SSD+HDD Storage Layout
    • Mirror-Accelerated Parity (MAP): 4-Node Hybrid SSD+HDD Storage Layout
    • 2x Mellanox SX1012X 10GbE Switches
      • 10GbE RDMA via Mellanox ConnectX-4 LX
  • 4-7 Node 2U 24 2.5" or 12/16 3.5" Drives with Intel Xeon Scalable Processors
    • 3-Way Mirror: Hybrid SSD+HDD Storage Layout
    • Mirror-Accelerated Parity (MAP): Hybrid SSD+HDD Storage Layout
    • Mirror-Accelerated Parity (MAP): All-Flash NVMe cache + SSD
    • 2x Mellanox Spectrum Switches with break-out cables
      • 25GbE RDMA via Mellanox ConnectX-4/5
      • 50GbE RDMA via Mellanox ConnectX-4/5
  • 8+ Node 2U 24 2.5" or 12/16 3.5" Drives with Intel Xeon Scalable Processors
    • 3-Way Mirror: Hybrid SSD+HDD Storage Layout
    • Mirror-Accelerated Parity (MAP): Hybrid SSD+HDD Storage Layout
    • Mirror-Accelerated Parity (MAP): All-Flash NVMe cache + SSD
    • 2x Mellanox Spectrum Switches with break-out cables
      • 50GbE RDMA via Mellanox ConnectX-4/5
      • 100GbE RDMA via Mellanox ConnectX-4/5
Other than the Kepler-47 setup, we always have at least a pair of Mellanox ConnectX-4 NICs in each node for East-West traffic. It's our preference to separate out the storage traffic from the rest.
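
If it helps, here's the rule of thumb above boiled down into a quick sketch (Python; the break points are just our baseline suggestions from the list, not an official Microsoft sizing guide):

```python
# The East-West fabric rule of thumb above, codified. The break points are
# our baseline suggestions, not an official Microsoft sizing guide.

def east_west_fabric(nodes):
    """Suggested baseline fabric for a hybrid S2D cluster of `nodes`."""
    if nodes == 2:
        return "10GbE RDMA direct connect (Mellanox ConnectX-4 LX)"
    if nodes <= 4:   # the list overlaps at 4 nodes; we bucket it with 10GbE
        return "10GbE RDMA via a pair of SX1012X switches"
    if nodes <= 7:
        return "25GbE/50GbE RDMA via Spectrum switches with break-out cables"
    return "50GbE/100GbE RDMA via Spectrum switches with break-out cables"

for n in (2, 4, 6, 8):
    print(f"{n} nodes: {east_west_fabric(n)}")
```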

All-Flash Setups

There's a lot of talk in the industry about all-flash.

It's supposed to solve the biggest bottleneck of them all: Storage!

The catch is, bottlenecks are moving targets.

Drop in an all-flash array of some sort and all of a sudden the storage-to-compute fabric becomes the target. Then it's the NICs/HBAs on the storage _and_ compute nodes, and so on.

If you've ever changed a single coolant hose in an older high-miler car, you'll see what I mean very quickly. ;)

IMNSHO, at this point in time, unless there is a very specific business case for all-flash and the fabric in place allows for all that bandwidth with virtually zero latency, all-flash is a waste of money.

One business case would be a cloud services vendor that wants to provide a high-IOPS and high-vCPU solution to their clients. So long as the fabric between storage and compute can fully utilize that storage, and the market is there, the revenues generated should more than make up for the huge costs involved.

Using all-flash as a solution to a poorly written application or set of applications is questionable at best. But sometimes it is necessary, as the software vendor has no plans to re-work their applications to run more efficiently on existing platforms.

Caveat: The current PCIe bus just can't handle it. Period.

A pair of 100Gb ports on one NIC/HBA can't be fully utilized due to the PCIe bus bandwidth limitation. Plus, we deploy with two NICs/HBAs for redundancy.

Even with the addition of more PCIe Gen 3 lanes in the new Intel Xeon Scalable Processor Family, we are still quite limited in the amount of data that can be moved about on the bus.
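
Here's a quick sanity check on that claim (Python; the ~0.985 GB/s-per-lane figure assumes PCIe Gen 3 at 8 GT/s with 128b/130b encoding):

```python
# Sanity check: two 100Gb ports on one NIC vs. one PCIe Gen 3 x16 slot.
# Assumed figure: ~0.985 GB/s per lane (8 GT/s with 128b/130b encoding).

GEN3_GBYTES_PER_LANE = 0.985

slot_gbs = 16 * GEN3_GBYTES_PER_LANE   # x16 slot: ~15.8 GB/s per direction
nic_gbs = 2 * 100 / 8                  # dual 100Gb ports: 25.0 GB/s

print(f"PCIe Gen 3 x16: ~{slot_gbs:.1f} GB/s")
print(f"Dual 100Gb NIC: {nic_gbs:.1f} GB/s")
print("The ports oversubscribe the slot." if nic_gbs > slot_gbs
      else "The ports fit within the slot.")
```

Even a full x16 slot tops out around 15.8 GB/s per direction, well short of what two 100Gb ports can push.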

S2D Thoughts and PoCs

The Storage Spaces Direct (S2D) hyper-converged or SOFS-only solution set can be configured and tuned for a very specific set of client needs. That's one of its beauties.

Microsoft remains committed to S2D and its success. Microsoft Azure Stack is built on S2D, so their commitment is pretty clear.

So is ours!

Proof-of-Concept (PoC) Lab

[Image: S2D 4-Node for Hyper-Converged and SOFS Only]

[Image: Hyper-V 2-Node for Compute to S2D SOFS]

This is the newest addition to our S2D product PoC family:

[Image: Kepler-47 S2D 2-Node Cluster]

The Kepler-47 picture is our first one. It's based on Dan Lovinger's concept we saw at Ignite Atlanta a few years ago. Components in this box were similar to Dan's setup.

Our second generation Kepler-47 is on the way to being built now.

[Image: Kepler-47 v2 PoC Ongoing Build & Testing]

This new generation will have an Intel Server Board DBS1200SPLR with an E3-1270v6, 64GB ECC, Intel JBOD HBA I/O Module, TPM v2, and Intel RMM. The OS will be installed on a 32GB Transcend 2242 SATA SSD. Connectivity between the nodes will be Mellanox ConnectX-4 LX running at 10GbE with RDMA enabled.

Storage in Kepler-47 v2 will be a combination of one Intel DC P4600 Series PCIe NVMe drive for cache, two Intel DC S4600 Series SATA SSDs for the performance tier, and six HGST 6TB 7K6000 SAS or SATA HDDs for capacity. The PCIe NVMe drive will be optional due to its cost.
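
As a rough sketch of where that lands capacity-wise (Python; assuming the drive counts above are per node and counting only the HDD capacity tier, with the NVMe cache, SSD performance tier, and rebuild reserve ignored):

```python
# Rough usable capacity for the Kepler-47 v2 layout above. Assumptions:
# six 6TB HDDs per node, two nodes, 2-way mirror; the SSD performance
# tier, NVMe cache, and rebuild reserve are ignored.

nodes = 2
hdds_per_node = 6
hdd_tb = 6.0
mirror_copies = 2                        # 2-way mirror on a 2-node cluster

raw_tb = nodes * hdds_per_node * hdd_tb  # 72 TB raw in the capacity tier
usable_tb = raw_tb / mirror_copies       # ~36 TB before overhead

print(f"Raw: {raw_tb:.0f} TB, usable: ~{usable_tb:.0f} TB (2-way mirror)")
```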

We already have one or two client/customer destinations for this small cluster setup.

Conclusion

Storage Spaces Direct (S2D) rocks!

We've invested _a lot_ of time and money in our Proof-of-Concepts (PoCs). We've done so because we believe the platform is the future for both on-premises and data centre based workloads.

Thanks for reading! :)

Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service