Monday, 15 May 2017

WannaCry Mitigation plus Windows XP and Server 2003 Patch

By now most of the world has heard about the WannaCry malware put together from purported NSA exploit "tools".

The simplest thing to do is to disable or remove SMBv1 on our networks: How to enable and disable SMBv1, SMBv2, and SMBv3 in Windows and Windows Server (Microsoft Support).

Dealing with SMBv1

On Windows 7:

First, we need the following put into a text file:

:: Remove SMBv1 (mrxsmb10) from the Workstation service's dependency list
sc.exe config lanmanworkstation depend= bowser/mrxsmb20/nsi
:: Prevent the SMBv1 driver from starting
sc.exe config mrxsmb10 start= disabled
pause
:: Reboot immediately to apply the change
shutdown -r -t 0 -f

In Notepad click File then Save As and name exactly as follows:

"Windows7 SMBv1 DISABLE.BAT"

NOTE: The quotes are necessary so that Notepad saves the file with the .BAT extension instead of appending .txt

Right click on the resulting batch file and choose Run as administrator.

An administrator's username and password will be required for this step. Either a local admin or a domain admin account will work.

A status window will show the results of the two commands.

NOTE: Windows 7 should show SUCCESS for both steps

As the message says, press any key to continue.

NOTE: The script automatically reboots the machine, so make sure users save their work and close their applications before running it.
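
After the reboot, a quick way to confirm the change took hold (this check is ours, not part of the original script):

sc.exe qc mrxsmb10
# The START_TYPE line in the output should now read 4 DISABLED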

On Windows 10:

  1. Click Start and type PowerShell
  2. Right click on the result and Run as Administrator
  3. Disable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol
    • The cmdlet's output will indicate whether a restart is needed to complete the change

That disables the problematic SMBv1 component in Windows.
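
To verify, query the feature state afterward (a generic check):

Get-WindowsOptionalFeature -Online -FeatureName SMB1Protocol
# The State property should report Disabled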

Windows Server

Open an elevated PowerShell window:

Remove-WindowsFeature -Name FS-SMB1
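
The removal can be confirmed afterward; note that a restart is required to complete it:

Get-WindowsFeature -Name FS-SMB1
# Install State should no longer show Installed once the restart completes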

Backup & Restore

For users who work almost exclusively from their own computer rather than from server or cloud based resources, and therefore have no central backup, it's important that they back up their machines daily! They should have at least three fast 2.5" USB 3.0 disk drives in rotation.

We use ShadowProtect Desktop by StorageCraft to back up our clients' endpoints.

A critical component of the backup regime is an air gap, just as it is for the entire organization's server infrastructure. With three drives in rotation, at least one is always disconnected and out of malware's reach.

Windows XP and Server 2003

Get the Security Updates ASAP and install them!

The update files can likely be delivered via your favourite patching mechanism. Please look into that so these patches get out to as many systems as possible.

Windows Firewall

One mitigation step would be to set up a Group Policy Object that blocks inbound File & Printer Sharing (TCP 445) from all systems except those that need it, such as servers and/or domain controllers.
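
As a rough sketch of the equivalent rules in PowerShell on Windows 8/Server 2012 or later (the addresses below are placeholders for your servers and domain controllers; in production, define the same rules centrally in the GPO under Windows Firewall with Advanced Security):

# Disable the broad built-in File and Printer Sharing allow rules
Set-NetFirewallRule -DisplayGroup "File and Printer Sharing" -Enabled False
# With the default inbound action left at Block, allow TCP 445 only
# from the systems that actually need it
New-NetFirewallRule -DisplayName "Allow SMB 445 from servers/DCs only" `
    -Direction Inbound -Protocol TCP -LocalPort 445 `
    -RemoteAddress 10.0.0.10, 10.0.0.11 -Action Allow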

Malware Mitigation

As always, the best form of mitigation is a well trained user. Patching the systems and training the humans is the best methodology going.

As a small plug, our xD mail sanitation and continuity service flags and renders inert any links that say one thing but point to another location. This puts link shortening services like Bit.ly at a disadvantage, but we're willing to pay that price to keep our users safe. Just ask us how!

Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Our Cloud Service

Thursday, 27 April 2017

Surface Pro 4: Creators Update Graphics Driver Issue

As an FYI, after updating to the Windows 10 Creators Update, the graphics subsystem on the Surface Pro 4 seems to start behaving badly. This is especially true when connected to external monitors via a Surface Dock (Gen1 or Gen2).

An updated driver can be obtained here: Intel Iris 540 Driver for Windows 10

The SP4 had driver version 15.44 while the download is 15.45 as of this writing!
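
A quick, generic way to confirm the installed graphics driver version before and after the update:

Get-WmiObject Win32_VideoController | Select-Object Name, DriverVersion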

Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Our Cloud Service

Wednesday, 15 March 2017

Windows Server 2016 March 2017 Update: Full & Delta Available

We can download either the full March Cumulative Update or the newly available Delta.

Delta Update Windows Server 2016

The update is quite critical for those of us that run clusters on Windows Server 2016.

  • Addresses an issue that could cause ReFS metadata corruption
  • Several fixes for the Enable-ClusterS2D cmdlet used when setting up Storage Spaces Direct
  • Addresses an issue with the Update-ClusterFunctionalLevel cmdlet during rolling upgrades if any of the default resource types are not registered
  • Optimizes the ordering when draining an S2D node with Storage Maintenance Mode
  • Addresses a servicing issue where the Cluster Service may not start automatically on the first reboot after applying an update
  • Improves the bandwidth of SSD/NVMe drives available to application workloads during S2D rebuild operations
  • Addresses an issue on all-flash S2D systems with cache devices where unnecessary reads from both tiers would degrade performance

A full list is here: March 14, 2017—KB4013429 (OS Build 14393.953)

The Delta Update can be used to update our .WIM files for our Windows Server 2016 flash drive based installers (Blog post How-To).
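
As a rough sketch of that slipstream process (the package path below is a placeholder for wherever the downloaded update lands):

Dism /Mount-Wim /WimFile:D:\sources\install.wim /Index:1 /MountDir:C:\Mount
# Use Dism /Get-WimInfo first to pick the correct /Index for the edition in use
Dism /Image:C:\Mount /Add-Package /PackagePath:C:\Updates\KB4013429.msu
Dism /Unmount-Wim /MountDir:C:\Mount /Commit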

Note that the last Cumulative Update took a good hour to run on our VMs and nodes. This one sounds like it may take as long or longer, depending on whether the Delta or the full Cumulative Update gets installed.

Here’s a direct link to the Microsoft Update Page for KB4013429.

Happy Patching! ;)

Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Our Cloud Service

Monday, 13 February 2017

Installing Windows: Updating Drivers for Boot.WIM and Install.WIM

We have a number of clients that are going to be on Windows 7 Enterprise 64-bit for the foreseeable future.

Most, if not all, of the laptops we are deploying today require a few BIOS tweaks to turn off UEFI and Secure Boot prior to beginning the setup process.

Once that is done, the laptops we are deploying today have wired and wireless network adapters, chipsets, and USB 3 ports that have no drivers in the Windows 7 Boot.WIM or Install.WIM files. So, we would be left stranded when trying to deploy an operating system (OS) via flash drive!

Given the number of application updates most of our clients have, we decided to avoid using MDT or other imaging software to deploy a new laptop. The time savings would be negligible since we'd be stuck running all of the application updates or new installs post OS deployment anyway.

Also, when installing an operating system via a USB 3 flash drive with decent read speeds, it only takes a few minutes to get through the base OS install. Windows 10 can have the entire base OS installed in just a few minutes.

The following instructions assume the files have already been extracted to a bootable flash drive to use for installing an OS on a new machine.

Here’s a simple step-by-step for updating the drivers for a Windows 7 Professional Install.WIM:

  1. Dism /Get-WimInfo /WimFile:D:\sources\install.wim
    • Where D: = the flash drive letter
  2. Dism /Mount-Wim /WimFile:D:\sources\install.wim /Name:"Windows 7 PROFESSIONAL" /MountDir:C:\Mount
    • Change C: to another drive/partition if required
    • NOTE: Do not browse the contents of this folder!
  3. Dism /Image:C:\Mount /Add-Driver /Driver:C:\Mount_Driver /Recurse
    • Again, change C: to the required drive letter
    • We extract all drivers to be installed or updated to this folder
  4. Dism /Unmount-Wim /MountDir:C:\Mount /Commit
    • This step will commit all of the changes to the .WIM file
    • NOTE: Make sure there are _no_ Windows/File Explorer, CMD, or PowerShell sessions sitting in the C:\Mount folder or the dismount will fail!
  5. Dism /Unmount-Wim /MountDir:C:\Mount /Discard
    • Run this command instead of step 4 to dismount the .WIM without saving any changes

The following is the process for updating the WinPE Boot.WIM:

  1. Dism /Get-WimInfo /WimFile:D:\sources\boot.wim
  2. Dism /Mount-Wim /WimFile:D:\sources\boot.wim /Name:"Microsoft Windows Setup (x64)" /MountDir:C:\Mount
  3. Dism /Image:C:\mount /Add-Driver /Driver:C:\Mount_Driver /Recurse
  4. Dism /unmount-Wim /mountdir:C:\mount /commit

The above process can be used to install the newest RAID driver into a Windows Server Boot.WIM and Install.WIM to facilitate a smoother install via flash drive.
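
For repeat use, the same sequence can be wrapped up in a short PowerShell sketch. The paths and image name are the ones assumed above; adjust to suit:

$Wim     = "D:\sources\install.wim"
$Mount   = "C:\Mount"
$Drivers = "C:\Mount_Driver"
# Create the mount folder if it does not already exist
New-Item -ItemType Directory -Path $Mount -Force | Out-Null
Dism /Mount-Wim "/WimFile:$Wim" '/Name:Windows 7 PROFESSIONAL' "/MountDir:$Mount"
Dism "/Image:$Mount" /Add-Driver "/Driver:$Drivers" /Recurse
Dism /Unmount-Wim "/MountDir:$Mount" /Commit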

Currently, we are using Kingston DTR3.0 G2 16GB flash drives as they have good read and decent write speeds. Please feel free to comment with your suggestions on reasonably priced 16GB and 32GB USB 3 flash drives that have good read and write speeds.

Good to us is ~75MB/Second read and ~35MB/Second write.

Thanks for reading! :)

Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Our Cloud Service

Saturday, 4 February 2017

Hyper-V Compute, Storage Spaces Storage, and S2D Hyper-Converged Solutions

Lately, we here at MPECS Inc. have been designing, implementing, servicing, and supporting highly available solutions for hyper-converged, Hyper-V compute, Storage Spaces storage, and lab environments along with standalone Hyper-V server solutions.

Here are some of the things we have been working on recently or have deployed within the last year.

Cluster Solutions

As has been posted here on our blog previously, we have invested heavily in the Windows Server 2016 story, especially in Storage Spaces Direct (S2D) (S2D blog posts):

Proof-of-Concept (PoC) Storage Spaces Direct (S2D) Hyper-Converged or S2D SOFS cluster solution

The above Storage Spaces Direct PoC, based on the Intel Server System R2224WTTYSR, provides us with the necessary experience to deliver the hundreds of thousands of real every-day IOPS that our client solutions require. We can tailor our solution for a graphics firm to allow for multiple 10GbE data reads to their users' systems, or for an engineering and architectural firm that requires high performance storage for its rendering farms.

Another Storage Spaces Direct PoC we are working on is the Kepler-47 (TechNet Blog Post):

Proof-of-Concept Storage Spaces Direct 2-Node for under $8K with storage!

Our goal for Kepler-47 is to deploy this solution to all clients that would normally receive a single or dual Hyper-V server setup with Hyper-V Replica. Our recipe includes Intel DC S3700, S3600, and S3500 series SSDs, a SuperMicro Mini-ITX Intel Xeon Processor E3-1200 v5 series board, the 8-bay chassis, and Mellanox ConnectX-3 for direct connected RDMA East-West traffic. The cost for the 2-node cluster is about the same as one bigger Intel Server System that would run a client's entire virtualization stack.

2016: Deployed 2 Node SOFS via Quanta JB4602 with ~400TB Storage Spaces Parity

In the late summer of 2016 we deployed the above SOFS cluster with the ultimate aim of adding three more JBODs for over 1.6PB (petabytes) of very cost efficient Storage Spaces Parity storage for our client's video and image file archive. The solution utilizes four 10GbE paths per node and SMB Multichannel to provide robust access to the files on the cluster. Six HGST SAS SSDs provide the needed high-speed cache for writes to the cluster.
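
For the curious, SMB Multichannel's use of all four paths can be confirmed from a connected node (a generic check, not specific to this deployment):

Get-SmbMultichannelConnection
# Expect one row per active path, with RSS/RDMA capability flags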

Our smallest cluster client is a 15 seat accounting firm with a 2-node Clustered Storage Spaces and Hyper-V setup (our blog post). Some of our largest clients are SME hosting companies with hundreds of tenants and VMs running on SOFS storage and Hyper-V compute clusters.

We can deploy highly available solutions starting at $6K in hardware, rendering standalone Hyper-V or VMware solutions moot!

Server and Storage Hardware

We primarily utilize Intel Server Systems and Storage, as Intel's support is second to none and our solution price points become more than competitive with equivalent Tier 1 solutions. When required, we utilize Dell Server Systems and Storage for solutions that call for a 4-hour on-site warranty over 3 to 5 years or more.

Our primary go-tos for disaggregated SOFS cluster storage (storage nodes + direct attached storage JBOD(s)) are Quanta QCT JBODs and DataON Storage JBODs. We've had great success with both companies' storage products.

For drives we deploy Intel NVMe PCIe and 2.5”, Intel SATA SSDs, and HGST SAS SSDs. We advise being very aware of each JBOD vendor’s Hardware Compatibility List (HCL) before jumping on just any SAS SSD listed in the Windows Server Catalog Storage Spaces Approved list (Microsoft Windows Server Catalog Site for Storage Spaces).

Important: Utilizing just any vendor’s drive in a Storage Spaces or Storage Spaces Direct setting can be a _costly_ error! One needs to do a lot of homework before deploying any solution into production. BTDT (Been There Done That)

The spinning media we use depends on the hyper-converged or storage solution we are deploying and the results of our thorough testing.

Network Fabrics

In a Storage Spaces Direct (S2D) setting, our East-West (node to node) fabric is 10Gb to 100Gb Mellanox with RDMA via RoCEv1 or RoCEv2 (RDMA over Converged Ethernet) (our blog post), depending on the Mellanox ConnectX NIC version. We also turn to RoCE for North-South (compute to storage) traffic in our disaggregated cluster solutions.
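
As a rough sketch of the per-node DCB configuration involved (this assumes PFC-capable switches configured to match; the priority value and adapter name are examples only):

Install-WindowsFeature Data-Center-Bridging
# Tag SMB Direct (port 445) traffic with 802.1p priority 3
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
# Enable Priority Flow Control for that traffic class only
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
# Apply DCB/QoS on the RDMA adapter (adapter name is an example)
Enable-NetAdapterQos -Name "Mellanox Port 1"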

For 10GbE starter solutions, the NETGEAR XS716T is the go-to switch for both the storage and compute networks. We always deploy the switches in pairs, whether for storage-to-compute traffic or for the workload Hyper-V virtual switch, to provide network resilience. NETGEAR's switches are very well priced for the entry-level to mid-level solutions we deploy.

Cluster Lab

It’s no secret that we invest a lot in our client solution labs and network shadow solutions (our blog post). It is a point of principle that we make sure our solutions work as promised _before_ we would even consider selling them to our clients.

One does not need to look far for five figure, six figure, seven figure, or more solution failures. Recent catastrophic failures at the Australian Tax Office (Bing search) or 123-reg (Bing search) come to mind. It’s not difficult to find stories of very expensive solutions failing to deliver on their big promises with a big price tag.

The onus is on us to make sure we can under promise and over deliver on every solution!

Our Solutions

We can deliver a wide variety of solutions with the following being a partial list.

  • Storage Spaces Direct (S2D)
    • 2 to 16 nodes
    • Hyper-Converged running both compute and storage
    • SOFS mode to provide high IOPS storage
    • Hardware agnostic solution sets
    • Host UPDs (User Profile Disks) in Azure, on VMware, or on our solutions
  • Scale-Out File Server Clusters (2 to 5 nodes with 1 JBOD or more)
    • IOPS tuned for intended workload performance
    • Large Volume backup and archival storage
    • Multiple Enclosure Resilience for additional redundancy
  • Hyper-V Compute Clusters (2 to 64 nodes)
    • Tuned to workload type
    • Tuned for workload density
  • Clustered Storage Spaces (2 nodes + 1 JBOD)
    • Our entry-level go-to for small to medium business
    • Kepler-47 fits in this space too
  • RDMA RoCE via Mellanox Ethernet Fabrics for Storage <—> Compute
    • We can deploy 10Gb to 100Gb of RDMA fabric
    • High-Performance storage to compute
    • Hyper-converged East-West fabrics
  • Shadow Lab for production environments
    • Test those patches or application updates on a shadow lab
  • Learning Lab
    • Our lab solutions are very inexpensive
    • Four node S2D cluster that fits into a carry-on
    • Can include an hour or more for direct one-on-one or small-group learning
      • Save _lots_ of time sifting through all the chaff to build that first cluster
  • Standalone Hyper-V Servers
    • We can tailor and deliver standalone Hyper-V servers
    • Hyper-V Replica setups to provide some resilience
    • Ready to greenfield deploy, migrate from existing, or side-by-side migrate

Our solutions arrive at our client’s door ready to deploy production or lab workloads. Just ask us!

Or, if you need help with an existing setup we’re here. Please feel free to reach out.

Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book

Tuesday, 10 January 2017

Server 2016 January 10 Update: KB3213986 – Cluster Service May Not Start Automatically Post Reboot

The January 10, 2017 update package (KB3213986) has a _huge_ caveat for those updating clusters, especially with Cluster Aware Updating:

Known issues in this update:

Symptom
The Cluster Service may not start automatically on the first reboot after applying the update.

Workaround
Workaround is to either start the Cluster Service with the Start-ClusterNode PowerShell cmdlet or to reboot the node.
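
A rough sketch of a post-update check that can be run from any node (the node names are placeholders):

# Requires the FailoverClusters PowerShell module
foreach ($node in "NODE1","NODE2") {
    if ((Get-ClusterNode -Name $node).State -ne "Up") {
        # Rejoin the node to the cluster per the documented workaround
        Start-ClusterNode -Name $node
    }
}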

For those managing large cluster deployments, this situation definitely calls for evaluating the update procedure for this particular update.

Please keep this in mind when scheduling this particular update and have update resources set up to mitigate the problem.

Note that as of this writing, the Cluster Service stall on reboot is a one-time deal as far as we know. Meaning that once the update has completed and the node has successfully rejoined the cluster, there should be no further issues.

Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Our Cloud Service