Wednesday, 20 June 2018

Windows Server: Black Screen with "Windows logon process failed to spawn user application."

After demoting a DC, we could not get to the desktop; all we saw after signing in was a black screen.

Trying to get Task Manager up and running produced the following in the server's Event Logs:

Log Name:      Application
Source:        Microsoft-Windows-Winlogon
Date:          6/20/2018 11:19:06 AM
Event ID:      4006
Task Category: None
Level:         Warning
Keywords:      Classic
User:          N/A
Computer:      SERVER.DOMAIN.COM
Description:
The Windows logon process has failed to spawn a user application. Application name: launchtm.exe. Command line parameters: launchtm.exe /3 .

In the end, the solution was to add the local administrator account to the local Users group after hitting CTRL+ALT+DEL (or CTRL+ALT+END in an RDP session) to click Log Off/Sign Out.

Once we signed back in, we got to the server's desktop and were able to continue with its removal from the domain.

EDIT: Note that the change was done from a DC via Active Directory Users and Computers.
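
For reference, if a remote PowerShell session to the affected server is still available, the equivalent change can likely be made with the local accounts cmdlets. A minimal sketch, assuming Windows Server 2016 or later:

# A rough sketch; adjust the member name to whatever your local administrator account is called
Add-LocalGroupMember -Group "Users" -Member "Administrator"

# Confirm the membership took
Get-LocalGroupMember -Group "Users"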

Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Thursday, 7 June 2018

Exchange 2013+: Set Up a Receive Connector for MFP/Copier/Device Relay

The following are the two steps required to enable an internal anonymous relay in Exchange 2013/2016/20*.

Step 1: Create the Receive Connector

New-ReceiveConnector -Name MFP-APP-AnonRelay -Usage Custom -Bindings 0.0.0.0:25 -RemoteIPRanges 192.168.25.1-192.168.25.10,192.168.25.225-192.168.25.254 -Comment "Allows anonymous relay" -TransportRole FrontEndTransport -AuthMechanism None -PermissionGroups AnonymousUsers

Variables:

  • -Name: Change this if needed, but it must match in both steps
  • -RemoteIPRanges: Only put trusted device IP addresses in this section

Once the receive connector is set up it can be managed via EAC.

Step 2: Allow Anonymous Rights

Get-ReceiveConnector "MFP-APP-AnonRelay" | Add-ADPermission -User "NT AUTHORITY\ANONYMOUS LOGON" -ExtendedRights "Ms-Exch-SMTP-Accept-Any-Recipient"

Variable:

  • The Receive Connector name must match the one set in Step 1
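
To confirm the relay is working, a quick test can be run from a machine whose IP falls inside one of the allowed ranges. A minimal sketch using Send-MailMessage; the server name and addresses are examples only:

# Run from a device/server inside the -RemoteIPRanges list above
Send-MailMessage -SmtpServer mail.domain.com -Port 25 -From scanner@domain.com -To user@external-domain.com -Subject "Relay test" -Body "Anonymous relay test via MFP-APP-AnonRelay"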

Conclusion

Once the above steps are set up there is no need to set a username and password on any device that has an allowed IP.

For obvious reasons, one should never put an Internet IP address in this rule! But then, everyone already denies all inbound SMTP 25/587 traffic except from their third-party sanitization provider's subnets, right? (We use ExchangeDefender for our own and our clients' needs.)

Also, this setup is for on-premises Exchange.

Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Thursday, 31 May 2018

OS Guide: Slipstream Updates and Drivers Using DISM and OSCDImg

We've posted another guide to our Web site.

Using the script on this page in an elevated CMD allows us to take the base Install.WIM for Windows Server 2016 and slipstream the latest Cumulative Update into it.
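
The script itself lives on the linked page. As a rough idea of the core slipstream step, here is a minimal PowerShell/DISM sketch with example paths and a placeholder Cumulative Update file name; the real script also handles OSCDImg and the folder copies described below:

$wim   = "C:\Slipstream\Install.WIM"
$mount = "C:\Slipstream\Mount"
$cu    = "C:\Slipstream\CU\windows10.0-kbNNNNNNN-x64.msu"

# Mount the desired image index (check indexes first with Get-WindowsImage -ImagePath $wim)
Mount-WindowsImage -ImagePath $wim -Index 2 -Path $mount

# Inject the latest Cumulative Update
Add-WindowsPackage -Path $mount -PackagePath $cu

# Commit the changes and unmount
Dismount-WindowsImage -Path $mount -Save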

Then, the script copies the updated Install.WIM into two separate folders where we keep two sets of installer files/folders. One is a Bare version that has only the Windows installer files. The other contains a whole host of drivers, BIOS and firmware updates, and a copy of the newly minted .ISO file. We use the FULL version for our USB flash drives (blog post) that get permanently plugged into all server systems we deploy.

This script is constantly updated.

Another will be posted at a later date that also includes the ability to update the Install.WIM file with drivers.

UPDATE 2018-06-04: Fixed the link!

Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Wednesday, 9 May 2018

Remote Desktop Client: An authentication error has occurred. *Workaround

Updates last night included one for CredSSP CVE-2018-0886.

For those of us who are hesitant to patch servers the instant a patch is available, RD Clients will be unable to connect during the window before our regression testing and release cycle completes.

Remote Desktop Connection

An authentication error has occurred.
The function requested is not supported.

Remote Computer: SERVERNAME
This could be due to CredSSP encryption oracle remediation.
For more information, see https://go.microsoft.com/fwlink/?linkid=866660

For now, the workaround on the remotely connecting RD Clients is to set the following registry key:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP]

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters]
"AllowEncryptionOracle"=dword:00000002

Copy and paste the above into Notepad and Save As "CredSSP.REG" in a quickly accessible location.

Double click on the created file and MERGE. An elevated Registry Editor session would also allow for import via the FILE menu.
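
Alternatively, the same value can be set from an elevated PowerShell prompt; a quick sketch that mirrors the .REG contents above:

# Create the key path if it does not exist, then set the workaround value
$path = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters"
New-Item -Path $path -Force | Out-Null
New-ItemProperty -Path $path -Name "AllowEncryptionOracle" -Value 2 -PropertyType DWord -Force | Out-Null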

Once the above registry setting is in-place reboot the client machine and the connection should work.

Happy Patching! :)

UPDATE 2018-05-09 @ 10:47 MST: A caveat:

It is better to update the server backend, if possible, before making the above registry change.

If that is _not_ possible, then after the updates have been applied on the server(s) make sure to _change_ the registry setting to its most secure setting.

UPDATE 2018-05-10 @ 17:38 MST:

Update sources:

Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Tuesday, 1 May 2018

PowerShell Guide Series: Storage Spaces Direct PowerShell Node Published

Apologies for the double post, one of the bulleted links was broken. :(

One of the difficult things about putting our setup guides on our blog was the fact that when we changed them, which was frequent, it became a bit of a bear to manage.
So, we're going to be keeping a set of guides on our site to keep things simple.

The first of the series has been published here:

This guide is a walkthrough to set up a 2-Node Storage Spaces Direct (S2D) cluster node from scratch. There are also steps in there for configuring RoCE to allow for more than two nodes if there is a need.
We will be updating the existing guides on a regular basis but also publishing new ones as we go along.

Thanks for reading!

Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Wednesday, 25 April 2018

Working with and around the Cloud Kool-Aid

The last year and a half have certainly had their challenges. I've been on a road of both discovery and of recovery after an accident in November of 2016 (blog post).

Most certainly, one of the discoveries is that my tolerance for fluff, especially marketing fluff, has been greatly reduced. Time is precious, even more so when one's faculties can be limited by a head injury. :S

Microsoft's Cloud Message

It was during one of the last feedback sessions at MVP Summit 2018 that a startling realization came about: There's still anger, and to some degree bitterness, towards Microsoft and the cloud messaging of the last ten to twelve years. My session at SMBNation 2012 had some glimpses into that anger and struggle about our business and its direction.

After the MVP Summit 2018 session, when discussing it with a Microsoft employee that I greatly respect, his response to my apology for the glimpse into my anger and bitterness was, "You have nothing to apologize for". That affirmation brought a lot home.

One realization is this: The messaging from Microsoft, and others, around Cloud has not changed. Not. One. Bit.

That messaging started out oh so many years ago as, "Your I.T. Pro business is going to die. Get over it," to paraphrase Microsoft's change-your-business-model-or-else message when BPOS was launched.

The messaging was "successful" to some degree as the number of I.T. Pro consultants and small businesses that hung up their guns during that first four to six year period was substantial.

And yet, it wasn't entirely successful, as much of the SMB-focused Microsoft Partner network simply left Cloud sales off the table when dealing with their clients.

Today, the content of the message and to some degree the method of delivering the message may be somewhat masked but it is still the same: Cloud or die.

At this last MVP Summit yet another realization came when listening to a fellow MVP and some Blue Badges (Microsoft employees) discussing various things around Cloud and Windows. It had never occurred to me to consider that the pain we were feeling out on the street would also be had within Microsoft and to some degree other vendors adopting a Cloud service.

The recent internal shuffle in Microsoft really brought that home.

On-Premises, Hybrid, and/or Cloud

We have a lot of Open Value Agreements in place to license our clients' on-premises solution sets.

Quite a few of them came up for renewal this spring. Our supplier's Microsoft licensing contact, and the contractor (v-) that kept calling, were trying to push us into Cloud Solution Provider (CSP) for all of our clients' licensing.

Much of what was said in those calls was:

  • Clients get so much more in features
  • Clients get access anywhere
  • Clients are so much more agile.
  • Blah, blah, blah
  • Fluff, fluff, fluff

The Cloud Kool-Aid was being poured out by the decalitre. ;)

So, our response was, "Let's talk about our Small Business Solution (SBS)" and its great features and benefits, and how our clients get full features on-premises, via the Internet, or anything in between. And, oh, it's location and device agnostic. We can also run it on-premises or in someone else's Cloud.

That usually led to some sort of stunned silence on the other end of the phone.

It's as if the on-premises story has somehow been forgotten or folks have developed selective amnesia around it.

What's neat, though, is that our on-premises highly available solutions are selling really well, especially for folks that want cloud-like resilience for their own business.

That being said, there _is_ a place for Cloud.

As a rule, Cloud is a great way to extend on-premises resources for companies that experience severe business swings such as construction companies that have slowdowns due to winter. The on-premises solution set can run the business through the quieter months then things get scaled-up during summer in the Cloud. In this case the Cloud spend is equitable.

Business Principled Clarity

There are two very clear realities for today's I.T. Pro and SMB/SME I.T. Business:

  1. On-Premises is not going away
  2. Building a business around Cloud is possible but difficult

The on-premises story is not going to change. One can repeat the Cloud message over and over and to some degree it becomes "truth". That's an old adage. However, the realities on the ground remain ... despite the messaging.

Okay, so maybe in a smaller business of ten seats or fewer an all-in move to the Cloud may make sense (make sure to add all of those bills up, and be sitting down when doing so!).

That being said, our smallest High Availability client is 15 seats with a disaggregate converged cluster. That was before our Storage Spaces Direct Kepler-47 was finalized as that solution starts at a third of the cost.

For the on-premises story there are two primary principles operating here:

  1. The client wants to own it
  2. The client wants full control over their data and its access

Cloud vendors are not obligated, and in many cases can't say anything, when law enforcement shows up to either snoop or even, in some cases, to remove the vendor's physical server systems.

Many businesses are very conscious of this fact. Plus, many governments have a deep reach into other countries as the newly minted, as of this writing, EU privacy laws seem to be demonstrating.

Now, as far as building a business around another's Cloud offerings there are two ways that we see that happening with some success:

  1. Know a Cloud Vendor's products through and through
  2. Build a MSP (Managed Service Provider) business supporting endpoints

The first seems to be really big right now. There are a lot of I.T. companies out there selling Cloud with no idea of how to put it all together. The companies that do know how to put it all together are growing in leaps and bounds.

The MSP method is, and has been, a way to keep that monthly income going. But, don't count on it being there for too much longer as _all_ Cloud vendors are looking to kill the managed endpoint in some way.

Our Direction

So, where do we fit in all of this?

Well, our business strategy has been pretty straightforward:

  1. Keep developing and providing cloud-like services on-premises with cloud-like resilient solutions for our clients
  2. Hybrid our on-premises solutions with Cloud when the need is there
  3. Continue to help clients get the most out of their Cloud services
  4. Cultivate our partnerships with SMB/SME I.T. organizations needing HA Solutions

We have managed to re-work our business model over the last five to ten years and we've been quite successful at it. Though, it is still a work in progress and probably will remain so given the nature of our industry.

We're pretty sure we will remain successful at it as we continue to put a lot of thought and energy into building and keeping our clients and contractors happy.

Ultimately, that goal has not changed in all of the years we've been in business.

We small to medium I.T. shops have the edge over every other I.T. provider out there.

"How is that?", you might ask.

Well, we _know_ how to run a small to medium business and all of the good and bad that comes with it.

That translates into great products and services to our fellow SMB/SME business clients. It really is that easy.

The hard part is staying on top of all of the knowledge churn happening in our field today.

Conclusion

Finally, as far as the anger, and to some degree bitterness, goes: Time. It will take time before it is fully dealt with.

In the mean time ...

A friend of mine, Tim Barrett, did this comic many years ago (image credit to NoGeekLeftBehind.com):

image

The comic definitely puts an image to the Cloud messaging and its results. :)

Let's continue to build our dreams doing what we love to do.

Have a fantastic day and thanks for reading!

Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Tuesday, 23 January 2018

Storage Spaces Direct (S2D): Sizing the East-West Fabric & Thoughts on All-Flash

Lately we've been seeing some discussion around the amount of time required to resync a S2D node's storage after it has come back from a reboot for whatever reason.

Unlike a RAID controller where we can tweak rebuild priorities, S2D does not offer the ability to do so.

It is very much a good thing that the knobs and dials are not exposed for this process.

Why?

Because, there is a lot more going on under the hood than just the resync process.

While it does not happen as often anymore, there were times when someone would reach out about a performance problem after a disk had failed. A quick look through the setup would show the Rebuild Priority setting to be the culprit: someone had tweaked it from its usual 30% of cycles to 50%, 60%, or even higher, thinking the rebuild should be the priority.
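
While S2D's resync priority cannot be tuned, its progress can at least be watched while a node's storage catches up. A quick sketch with the in-box storage cmdlets:

# Watch the repair/resync jobs after a node returns from maintenance or a reboot
Get-StorageJob | Format-Table Name, JobState, PercentComplete, BytesTotal

# Confirm the virtual disks return to Healthy once the jobs finish
Get-VirtualDisk | Format-Table FriendlyName, HealthStatus, OperationalStatus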

S2D Resync Bottlenecks

There are two key bottleneck areas in a S2D setup when it comes to resync performance:
  1. East-West Fabric
    • 10GbE with or without RDMA?
    • Anything faster than 10GbE?
  2. Storage Layout
    • Those 7200 RPM capacity drives can only handle ~110MB/Second to ~120MB/Second sustained
The two are not mutually exclusive; depending on the setup, they can combine to limit performance.

The physical CPU setup may also come into play but that's for another blog post. ;)

S2D East-West Fabric to Node Count

Let's start with the fabric setup that the nodes use to communicate with each other and pass storage traffic along.

This is a rule of thumb that was originally born out of a conversation at a MVP Summit a number of years back with a Microsoft fellow that was in on the S2D project at the beginning. We were discussing our own Proof-of-Concept that we had put together based on a Mellanox 10GbE and 40GbE RoCE (RDMA over Converged Ethernet) fabric. Essentially, at 4-nodes a 40GbE RDMA fabric was _way_ too much bandwidth.

Here's the rule of thumb we use for our baseline East-West Fabric setups. Note that we always use dual-port NICs/HBAs.
  • Kepler-47 2-Node
    • Hybrid SSD+HDD Storage Layout with 2-Way Mirror
    • 10GbE RDMA direct connect via Mellanox ConnectX-4 LX
    • This leaves us the option to add one or two SX1012X Mellanox 10GbE switches when adding more Kepler-47 nodes
  • 2-4 Node 2U 24 2.5" or 12/16 3.5" Drives with Intel Xeon Scalable Processors
    • 2-Way Mirror: 2-Node Hybrid SSD+HDD Storage Layout
    • 3-Way Mirror: 3-Node Hybrid SSD+HDD Storage Layout
    • Mirror-Accelerated Parity (MAP): 4 Nodes Hybrid SSD+HDD Storage Layout
    • 2x Mellanox SX1012X 10GbE Switches
      • 10GbE RDMA direct connect via Mellanox ConnectX-4 LX
  • 4-7 Node 2U 24 2.5" or 12/16 3.5" Drives with Intel Xeon Scalable Processors
    • 4-7 Nodes: 3-Way Mirror: 4+ Node Hybrid SSD+HDD Storage Layout
    • 4+ Nodes: Mirror-Accelerated Parity (MAP): 4 Nodes Hybrid SSD+HDD Storage Layout
    • 4+ Nodes: Mirror-Accelerated Parity (MAP): All-Flash NVMe cache + SSD
    • 2x Mellanox Spectrum Switches with break-out cables
      • 25GbE RDMA direct connect via Mellanox ConnectX-4/5
      • 50GbE RDMA direct connect via Mellanox ConnectX-4/5
  • 8+ Node 2U 24 2.5" or 12/16 3.5" Drives with Intel Xeon Scalable Processors
      • 4-7 Nodes: 3-Way Mirror: 4+ Node Hybrid SSD+HDD Storage Layout
      • 4+ Nodes: Mirror-Accelerated Parity (MAP): 4 Nodes Hybrid SSD+HDD Storage Layout
      • 4+ Nodes: Mirror-Accelerated Parity (MAP): All-Flash NVMe cache + SSD
      • 2x Mellanox Spectrum Switches with break-out cables
        • 50GbE RDMA direct connect via Mellanox ConnectX-4/5
        • 100GbE RDMA direct connect via Mellanox ConnectX-4/5
    Other than the Kepler-47 setup we always have at least a pair of Mellanox ConnectX-4 NICs in each node for East-West traffic. It's our preference to separate the storage traffic from everything else.
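
    Whatever the node count, it's worth confirming the East-West NICs are actually doing RDMA once the fabric is cabled and configured. A minimal check, assuming the in-box NetAdapter and SMB cmdlets:

    # Confirm RDMA is enabled on the storage-facing NICs
    Get-NetAdapterRdma | Format-Table Name, Enabled

    # Confirm SMB is using the expected interfaces and RDMA-capable paths between the nodes
    Get-SmbMultichannelConnection | Format-Table ServerName, ClientIpAddress, ServerIpAddress, ClientRdmaCapable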

    All-Flash Setups

    There's a lot of talk in the industry about all-flash.

    It's supposed to solve the biggest bottleneck of them all: Storage!

    The catch is, bottlenecks are moving targets.




    Drop in an all-flash array of some sort and all of a sudden the storage to compute fabric becomes the target. Then, it's the NICs/HBAs on the storage _and_ compute nodes, and so-on.

    If you've ever changed a single coolant hose in an older high-mileage car, you'll see what I mean very quickly. ;)

    IMNSHO, at this point in time, unless there is a very specific business case for all-flash and the fabric in place allows for all that bandwidth with virtually zero latency, all-flash is a waste of money.

    One business case would be for a cloud services vendor that wants to provide a high IOPS and vCPU solution to their clients. So long as the fabric between storage and compute can fully utilize that storage and the market is there the revenues generated should more than make up for the huge costs involved.

    Using all-flash as a solution to a poorly written application or set of applications is questionable at best. But, sometimes, it is necessary as the software vendor has no plans to re-work their applications to run more efficiently on existing platforms.

    Caveat: The current PCIe bus just can't handle it. Period.

    A pair of 100Gb ports on one NIC/HBA can't be fully utilized due to the PCIe bus bandwidth limitation. Plus, we deploy with two NICs/HBAs for redundancy.

    Even with the addition of more PCIe Gen 3 lanes in the new Intel Xeon Scalable Processor Family we are still quite limited in the amount of data that can be moved about on the bus.
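
    As rough arithmetic: a PCIe Gen 3 x16 slot delivers roughly 15.75 GB/s (about 126 Gbps) of usable bandwidth, while a dual-port 100GbE adapter could in theory push 200 Gbps (about 25 GB/s), so a single slot simply cannot feed both ports at line rate.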

    S2D Thoughts and PoCs

    The Storage Spaces Direct (S2D) hyper-converged or SOFS only solution set can be configured and tuned for a very specific set of client needs. That's one of its beauties.

    Microsoft remains committed to S2D and its success. Microsoft Azure Stack is built on S2D so their commitment is pretty clear.

    So is ours!

    Proof-of-Concept (PoC) Lab
    S2D 4-Node for Hyper-Converged and SOFS Only
    Hyper-V 2-Node for Compute to S2D SOFS
    This is the newest addition to our S2D product PoC family:
    Kepler-47 S2D 2-Node Cluster

    The Kepler-47 picture is our first one. It's based on Dan Lovinger's concept we saw at Ignite Atlanta a few years ago. Components in this box were similar to Dan's setup.

    Our second generation Kepler-47 is on the way to being built now.
    Kepler-47 v2 PoC Ongoing Build & Testing

    This new generation will have an Intel Server Board DBS1200SPLR with an E3-1270v6, 64GB ECC, Intel JBOD HBA I/O Module, TPM v2, and Intel RMM. OS would be installed on a 32GB Transcend 2242 SATA SSD. Connectivity between the nodes will be Mellanox ConnectX-4 LX running at 10GbE with RDMA enabled.

    Storage in Kepler-47 v2 would be a combination of one Intel DC P4600 Series PCIe NVMe drive for cache, two Intel DC S4600 Series SATA SSDs for the performance tier, and six HGST 6TB 7K6000 SAS or SATA HDDs for capacity. The PCIe NVMe drive will be optional due to its cost.

    We already have one or two client/customer destinations for this small cluster setup.

    Conclusion

    Storage Spaces Direct (S2D) rocks!

    We've invested _a lot_ of time and money in our Proof-of-Concepts (PoCs). We've done so because we believe the platform is the future for both on-premises and data centre based workloads.

    Thanks for reading! :)

    Philip Elder
    Microsoft High Availability MVP
    MPECS Inc.
    Co-Author: SBS 2008 Blueprint Book
    Our Web Site
    Our Cloud Service

    Monday, 18 December 2017

    Cluster: Troubleshooting an Issue Using Failover Cluster Manager Cluster Events

    When we run into issues the first thing we can do is poll the nodes via the Cluster Events log in Failover Cluster Manager (FCM).

    1. Open Failover Cluster Manager
    2. Click on Cluster Events in the left hand column
    3. Click on Query
      • image
    4. Make sure the nodes are ticked in the Nodes: section
    5. In the Event Logs section:
      • Microsoft-Windows-Cluster*
      • Microsoft-Windows-FailoverClustering*
      • Microsoft-Windows-Hyper-V*
      • Microsoft-Windows-Network*
      • Microsoft-Windows-SMB*
      • Microsoft-Windows-Storage*
      • Microsoft-Windows-TCPIP*
      • Leave all defaults checked
      • OPTION: Hardware Events
    6. Critical, Error, Warning
    7. Events On
      • From: Events On: 2017-12-17 @ 0800
      • To: Events On: 2017-12-18 @ 2000
    8. Click OK
    9. Click Save Query As...
    10. Save it
      • Copy the resultant .XML file for use on other clusters
      • Edit the node value section to change the node designations or add more
    11. Click on Save Events As... in FCM to save the current list of events for further digging

    Use the Open Query option to get to the query .XML and tweak the dates for the current date and time, add specific Event IDs that we are looking for, and then click OK.
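
    For a scriptable take on the same idea, here is a rough Get-WinEvent sketch run from a node or a management machine; the log names are only a subset of the channels listed above and the time window is an example:

    # Assumes the FailoverClusters module is available for Get-ClusterNode
    $nodes  = (Get-ClusterNode).Name
    $filter = @{
        LogName   = "System", "Microsoft-Windows-FailoverClustering/Operational"
        Level     = 1, 2, 3        # Critical, Error, Warning
        StartTime = (Get-Date).AddDays(-1)
    }
    foreach ($node in $nodes) {
        Get-WinEvent -ComputerName $node -FilterHashtable $filter -ErrorAction SilentlyContinue |
            Select-Object MachineName, TimeCreated, Id, ProviderName, LevelDisplayName, Message
    }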

    We have FCM and Hyper-V RSAT installed on our cluster's physical DC by default.

    Philip Elder
    Microsoft High Availability MVP
    MPECS Inc.
    Co-Author: SBS 2008 Blueprint Book
    Our Web Site
    Our Cloud Service

    Saturday, 9 December 2017

    PowerShell TotD: Hyper-V Live Move a specific VHDX file

    There are times when we need to move just one of the VHDX files associated with a VM.

    The following is the PowerShell to do so:

    Poll Hyper-V Host/Node for VM HDD Paths

    Get-VM * | Select-Object *Path, @{N="HDD"; E={$_.HardDrives.Path}} | Format-List

    Move a Select VHDX

    Move-VMStorage -VMName VMName -VHDs @(@{"SourceFilePath" = "X:\Hyper-V\Virtual Hard Disks\VM-LALoB_D0-75GB.VHDX"; "DestinationFilePath" = "Y:\Hyper-V\Virtual Hard Disks\VM-LALoB_D0-75GB.VHDX"})

    Move-VMStorage Docs

    The Move-VMStorage Docs site. This site has the full syntax for the PowerShell command.

    Conclusion

    While the above process can be initiated in the GUI, PowerShell allows us to initiate a set of moves for multiple VMs, which saves a great deal of time versus working the mouse.
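
    As an example of that, here is a rough sketch (the VM names and drive letters are hypothetical) that moves every VHDX for a short list of VMs from one volume to another:

    # Hypothetical VM names; adjust the source/destination drive letters as needed
    $vms = "VM-App01", "VM-App02"
    foreach ($vm in $vms) {
        $vhds = foreach ($disk in (Get-VM $vm).HardDrives) {
            @{ "SourceFilePath"      = $disk.Path
               "DestinationFilePath" = $disk.Path.Replace("X:\", "Y:\") }
        }
        Move-VMStorage -VMName $vm -VHDs $vhds
    }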

    By the way, TotD means: Tip of the Day.

    Thanks for reading! :)

    Philip Elder
    Microsoft High Availability MVP
    MPECS Inc.
    Co-Author: SBS 2008 Blueprint Book
    Our Web Site
    Our Cloud Service

    Thursday, 9 November 2017

    Intel Server System R2224WFTZS Integration & Server Building Thoughts

    We have a brand new Intel Server System R2224WFTZS that is the foundation for a mid to high performance virtualization platform.

    image

    Intel Server System R2224WFTZS 2U

    Below it sits one of our older lab servers, an Intel Server System SR2625URLX 2U. Note the difference in the drive caddy.

    That change is welcome as the caddy no longer requires a screwdriver to set the drive in place:

    image

    Intel 2.5" Tooless Drive Caddy

    What that means is the time required to get 24 drives installed in the caddies went from half an hour or more to five or ten minutes. That, in our opinion, is a great leap ahead!

    The processors for this setup are Intel Xeon Gold 6134s with 8 cores running at 3.2GHz with a peak of 3.7GHz. We chose the Gold 6134 as a starting place as most of the other CPUs have more than eight cores thus pushing up the cost of licensing Microsoft Windows Server Standard or Datacenter.
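
    As a rough worked example, assuming Windows Server 2016's per-core licensing with its 16-core minimum per server: 2 x 8 cores = 16 cores, which the base licence already covers, whereas a pair of 12-core CPUs would need 24 core licences, roughly a 50% licensing premium before any extra performance is realized.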

    image

    Intel Xeon Gold 6134, Socket, Heatsink, and Canadian Loonie $1 Coin

    The new processors are huge!

    The new package is substantially larger than the E3-1200 series and E5-2600 series processors. The jump in size reminds me of the Pentium Pro's girth next to the lesser desktop/server processors of its day.

    image

    Intel Xeon Processor E3-1270 sits on the Intel Xeon Gold 6134

    The server is nearly complete.

    image

    Intel Server System R2224WFTZS Build Complete

    Bill of Materials

    In this setup the server's Bill of Materials (BoM) is as follows:

    • (2) Intel Xeon Gold 6134
    • 384GB via 12x 32GB Crucial DDR4 LRDIMM
    • Intel Integrated RAID Module RMSP3CD080F with 7 Series Flash Cache Backup
    • Intel 12Gbps RAID Expander Module RES3TV360
    • (2) 150GB Intel DC S3520 M.2 SSDs for OS
    • (5) 1.9TB Intel DC S4600 SATA SSDs for high IOPS tier
    • (19) 1.8TB Seagate 10K SAS for low to mid IOPS tier
    • Second Power Supply, TPM v2, and RMM4 Module

    It's important to note that when setting up a RAID controller, instead of a Host Bus Adapter (HBA) that does JBOD only, we require the flash cache backup module. In this particular unit one needs to order the mounting bracket: AWTAUXBBUBKT

    I'm not sure why we missed that, but we've updated our build guides to reflect the need for it going forward.

    One other point of order is the rear 2.5" hot swap drive bay kit (A2UREARHSDK2) does not come installed from the factory in the R2224WFTZS as it did in the R2224WTTYS. I'm still not sold on M.2 for the host operating system as they are not hot swap capable. That means, if one dies we have to down a node in order to change it. With the rear hot swap bay we can do just that, swap out the 2.5" SATA SSD that's being used for the host OS.

    For the second set of two 10GbE ports we used an Intel X540-T2 PCIe add-in card as the I/O modules are not in the distribution channel as of this writing.

    NOTE: One requires a T30 Torx screwdriver for the heatsinks! After installing the processor please make sure to start all four nuts prior to tightening. As a suggestion, from there snug each one up gradually, starting with the two middle nuts and then the outer nuts, similar to the process for installing a head on an engine block. This provides an even amount of pressure from the middle of the heatsink outwards.

    Firmware Notes

    Finally, make sure to update the firmware on all components before installing an operating system. There are some key fixes in the motherboard firmware updates as of this writing (BIOS 00.01.0009 ReadMe). Please make sure to read through to verify any caveats associated with the update process or the updates themselves.

    Next up on our build process will be to update all firmware in the system, install the host operating system and drivers, and finally run a burn-in process. From there, we'll run some tests to get a feel for the IOPS and throughput we can expect from the two RAID arrays.

    Why Build Servers?

    That's got to be the burning question on some minds. Why?

    The long and the short of it is because we've been doing so for so many years it's a hard habit to kick. ;)

    Actually, the reality is much more mundane. We continue to be actively involved in building out our own server solutions for a number of reasons:

    • We can fine tune our solutions to specific customer needs
      • Need more IOPS? We can do that
      • Need more throughput? We can do that
      • Need a blend of the two, as is the case here? We can do that too
    • Direct contact with firmware issues, interoperability, and stability
      • Making the various firmware bits play nice together can be a challenge
    • Driver issues, interoperability, and stability
      • Drivers can be quite finicky about what's in the box with them
    • Hardware interoperability
      • Our parts bin is chock full of parts that refused to work with one another
      • On the other hand our solution sets are known good configurations
    • Cost
      • Our server systems are a fraction of the cost of Tier 1
    • Overall system configuration
      • As Designed Stability out of the box
    • He said She said
      • Since we test our systems extensively prior to deploying we know them well
      • Software Vendors that point the finger have no leg to stand on as we have plenty of charts and graphs
      • Performance issues are easier to pinpoint in software vendor's products
      • We remove the guesswork around an already configured Tier 1 box

    Business Case

    The business case is fairly simple: There are _a lot_ of folks out there that do not want to cloud their business. We help customers with a highly available solution set and our business cloud to give them all of the cloud goodness but keep their data on-premises.

    We also help I.T. Professional shops that may not have the skill set on board but have customers needing High Availability and a cloud-like experience deployed on-premises.

    For those customers that do want to cloud their business we have a solution set for the Small to Medium I.T. Shops that want to provide multi-tenant solutions in their own data centres. We provide the solution and backend support at a very reasonable cost while they spend their time selling their cloud.

    All in all, we've found ourselves a number of different great little niches for our highly available solutions (clusters) over the last few years.

    Thanks for reading! :)

    Philip Elder
    Microsoft High Availability MVP
    MPECS Inc.
    Co-Author: SBS 2008 Blueprint Book
    Our Web Site
    Our Cloud Service
    Twitter: @MPECSInc

    Friday, 3 November 2017

    A Little Plug for Mellanox and RoCE RDMA

    RoCE (RDMA over Converged Ethernet) via Mellanox NICs and switches is our primary fabric choice for Storage Spaces Direct (S2D) and Scale-Out File Server (SOFS) to Hyper-V compute cluster fabric.

    With the Mellanox MSX1012X 10GbE switch we can deploy a pair of them along with a pair of ConnectX-4 Lx dual port NICs per node for about the same cost as a pair of NETGEAR XS716T 10GbE switches and a pair of Intel X540/X550-T2 10GbE RJ45 based NICs per node.

    We have a great business relationship with Mellanox. They are great folks to work with and their product support is second to none.

    I was honoured to be asked to use a portion of my presentation for MVPDays to create the following video that is resident on Mellanox's YouTube channel.

    Hopefully the video comes out okay as embedding it was a bit of a chore.

    Thanks for reading and have a great weekend!

    Philip Elder
    Microsoft High Availability MVP
    MPECS Inc.
    Co-Author: SBS 2008 Blueprint Book
    Our Cloud Service
    Twitter: @MPECSInc

    Wednesday, 1 November 2017

    Error Fix: Event 7034 Service Control Manager - Server, BITS, Task Scheduler, Windows Management Instrumentation, Shell Hardware Detection Crashes

    This has just recently started to pop up on networks we manage.

    All of the following are Event ID 7034 Service Control Manager service terminated messages:

    • The Windows Update service terminated unexpectedly. It has done this 3 time(s).
    • The Windows Management Instrumentation service terminated unexpectedly. It has done this 3 time(s).
    • The Shell Hardware Detection service terminated unexpectedly. It has done this 3 time(s).
    • The Remote Desktop Configuration service terminated unexpectedly. It has done this 3 time(s).
    • The Task Scheduler service terminated unexpectedly. It has done this 3 time(s).
    • The User Profile Service service terminated unexpectedly. It has done this 3 time(s).
    • The Server service terminated unexpectedly. It has done this 3 time(s).
    • The IP Helper service terminated unexpectedly. It has done this 2 time(s).
    • The Device Setup Manager service terminated unexpectedly. It has done this 3 time(s).
    • The Certificate Propagation service terminated unexpectedly. It has done this 2 time(s).
    • The Background Intelligent Transfer Service service terminated unexpectedly. It has done this 3 time(s).
    • The System Event Notification Service service terminated unexpectedly. It has done this 2 time(s).

    It turns out that all of the above are tied into SVCHost.exe and guess what:

    Log Name: Application
    Source: Application Error
    Date: 10/23/2017 5:09:57 PM
    Event ID: 1000
    Task Category: (100)
    Level: Error
    Keywords: Classic
    Computer: ABC-Server.domain.com
    Description:
    Faulting application name: svchost.exe_DsmSvc, version: 6.3.9600.16384, time stamp: 0x5215dfe3
    Faulting module name: DeviceDriverRetrievalClient.dll, version: 6.3.9600.16384, time stamp: 0x5215ece7
    Exception code: 0xc0000005
    Fault offset: 0x00000000000044d2
    Faulting process id: 0x138
    Faulting application start time: 0x01d34c5c3f589fe7
    Faulting application path: C:\Windows\system32\svchost.exe
    Faulting module path: C:\Windows\System32\DeviceDriverRetrievalClient.dll

    A contractor of ours that we deployed a greenfield AD and cluster for was the one who figured it out. WSUS and the Group Policy settings were deployed this last weekend with everything in our Cloud Stack running smoothly until then.

    The weird thing is, we have had these settings in place for years now without any issues.

    The following are the settings changed at both sites:

    System/Device Installation
    Specify search order for device driver source locations: Not Configured
    2014-02-11: Enabled by Philip Elder.
    2017-11-01: Not Configured by Philip Elder.
    Specify the search server for device driver updates: Not Configured
    2014-02-11: Enabled by Philip Elder.
    2017-11-01: Not Configured by Philip Elder.

    System/Driver Installation
    Turn off Windows Update device driver search prompt: Not Configured
    2017-10-28: Disabled by Philip Elder.
    2017-11-1: Returned to Not Configured by Philip Elder

    System/Internet Communication Management/Internet Communication settings
    Turn off Windows Update device driver searching: Not Configured
    2014-02-11: Disabled by Philip Elder.
    2017-11-01: Not Configured by Philip Elder.

    It is important to note that when working with Group Policy settings, a comment should be made in each setting if at all possible. Then, when it comes to troubleshooting an errant behaviour that turns out to be Group Policy related, we are better able to figure out where the setting is and when it was set. In some cases, a short description of the "why" behind the setting helps.
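
    When hunting down which policy delivered a given setting, a quick starting point from the affected server is a resultant set of policy report; a simple sketch (the output paths are examples and assume the folder exists):

    # Generate an RSoP report showing which GPO each applied setting came from
    gpresult /h C:\Temp\gpresult.html /f

    # Or, from a machine with the Group Policy module, dump reports for every GPO
    Get-GPOReport -All -ReportType Html -Path C:\Temp\AllGPOs.html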

    Philip Elder
    Microsoft High Availability MVP
    MPECS Inc.
    Co-Author: SBS 2008 Blueprint Book
    Our Cloud Service
    Twitter: @MPECSInc

    Tuesday, 31 October 2017

    Xeon Scalable Processor Motherboard CPU-Soft Lockup Fix

    The new Intel Purley based Intel Server Boards S2600WF, S2600BP, and S2600ST Product Family use a new BMC (Baseboard Management Controller) video subsystem.

    As a result, some operating systems, mostly *NIX based, will choke on install as they may not have the driver built-in.

    Intel Technical Advisory: Intel® Server Board S2600WF, S2600BP and S2600ST Product Family fail to initialize the operating system video driver for the ASPEED* Base Management Controller (BMC).

    That document points to ASPEED's site for downloading an up-to-date driver that fixes the problem.

    Root Cause
    Full root cause of this issue has been determined. Intel has confirmed that the failure has no bearing on system performance, it only impacts local video graphics. In detail, when the operating system loads, the OS-embedded ASPEED* video driver is not able to access a portion of the BMC memory space, therefore the process stalls.

    On Windows Server based configurations we need to update the driver once the OS is installed. The default VGA driver that comes built-in to the OS works just fine.
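
    A minimal sketch of that driver update from an elevated prompt, assuming the ASPEED package has already been downloaded and extracted to an example folder:

    # Stage and install the extracted ASPEED display driver (path is an example)
    pnputil.exe /add-driver "C:\Drivers\ASPEED\*.inf" /install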

    Philip Elder
    Microsoft High Availability MVP
    MPECS Inc.
    Co-Author: SBS 2008 Blueprint Book
    Our Cloud Service
    Twitter: @MPECSInc

    Thursday, 26 October 2017

    Fujitsu ScanSnap N1800: E-mail Button Greyed Out Fix

    We have moved a ScanSnap N1800 onto a new greenfield setup in a side-by-side migration we've been running.

    In this case, the Exchange server is on-premises with the appropriate Anonymous MFP Relay setup configured.

    Searching about turned up a simple fix, though not one we would prefer: enable a mailbox in Exchange for the scanner's account.

    Once we did that the e-mail button did indeed appear and work with subsequent scan and send tests being successful.
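
    For reference, enabling the mailbox from the Exchange Management Shell is a one-liner along these lines (the account name is hypothetical):

    # Hypothetical service account name used by the scanner
    Enable-Mailbox -Identity "svc-scansnap"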

    Note that the account being used has a ridiculously long password that never changes and is restricted on the domain. So, the attack surface is relatively small.

    Philip Elder
    Microsoft High Availability MVP
    MPECS Inc.
    Co-Author: SBS 2008 Blueprint Book
    Our Cloud Service
    Twitter: @MPECSInc