Monday, 13 February 2017

Installing Windows: Updating Drivers for Boot.WIM and Install.WIM

We have a number of clients that are going to be on Windows 7 Enterprise 64-bit for the foreseeable future.
Most, if not all, of the laptops we are deploying today require a few BIOS tweaks to turn off UEFI and Secure Boot prior to beginning the setup process.
Even then, these laptops have wired and wireless network adapters, chipset devices, and USB 3 ports that have no drivers in the Windows 7 Boot.WIM or Install.WIM files, so we would be left stranded when trying to deploy an operating system (OS) via flash drive!
Given the number of application updates most of our clients have, we decided against using MDT or other imaging software to deploy a new laptop. The time savings would be negligible since we'd be stuck running all of the application updates or new installs post-OS deployment anyway.
Also, when installing an operating system from a USB 3 flash drive with decent read speeds, it only takes a few minutes to get through the base OS install; a Windows 10 install can be done start to finish in a few minutes.
The following instructions assume the files have already been extracted to a bootable flash drive to use for installing an OS on a new machine.
Here's a simple step-by-step for updating the drivers in a Windows 7 Professional Install.WIM (a consolidated script sketch follows the list):
  1. Dism /Get-WimInfo /WimFile:D:\sources\install.wim
    • Where D: = the flash drive letter
  2. Dism /Mount-Wim /WimFile:D:\sources\install.wim /Name:"Windows 7 PROFESSIONAL" /MountDir:C:\Mount
    • Change C: to another drive/partition if required
    • NOTE: Do not browse the contents of this folder!
  3. Dism /Image:C:\Mount /Add-Driver /Driver:C:\Mount_Driver /Recurse
    • Again, change C: to the required drive letter
    • We extract all drivers to be installed or updated to this folder
  4. Dism /Unmount-Wim /MountDir:C:\Mount /Commit
    • This step will commit all of the changes to the .WIM file
    • NOTE: Make sure there are _no_ Windows/File Explorer, CMD, or PowerShell sessions sitting in the C:\Mount folder or the dismount will fail!
  5. Dism /Unmount-Wim /MountDir:C:\Mount /Discard
    • Run this command instead of step 4 to dismount the .WIM without saving any changes
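For repeat driver refreshes it can be handy to wrap the steps above into a short script. Here is a minimal PowerShell sketch using the DISM module cmdlets (requires a Windows 8 / Server 2012 or newer technician machine); the drive letters, mount folder, and driver folder are the same assumptions used above and should be adjusted to suit:
  # Assumed locations - adjust to match the flash drive and servicing folders
  $Wim     = "D:\sources\install.wim"   # install image on the flash drive
  $Mount   = "C:\Mount"                 # empty folder to mount the image into
  $Drivers = "C:\Mount_Driver"          # folder holding the extracted driver .INF files
  Get-WindowsImage -ImagePath $Wim                           # confirm the edition name/index to service
  Mount-WindowsImage -ImagePath $Wim -Name "Windows 7 PROFESSIONAL" -Path $Mount
  Add-WindowsDriver -Path $Mount -Driver $Drivers -Recurse   # inject every driver found under the folder
  Dismount-WindowsImage -Path $Mount -Save                   # use -Discard instead to throw the changes away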
The following is the process for updating the WinPE Boot.WIM (a note on the boot.wim indexes follows the list):
  1. Dism /Get-WimInfo /WimFile:D:\sources\boot.wim
  2. Dism /Mount-Wim /WimFile:D:\sources\boot.wim /Name:"Microsoft Windows Setup (x64)" /MountDir:C:\Mount
  3. Dism /Image:C:\mount /Add-Driver /Driver:C:\Mount_Driver /Recurse
  4. Dism /unmount-Wim /mountdir:C:\mount /commit
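Note that boot.wim normally contains two images: index 1 is "Microsoft Windows PE (x64)" and index 2 is "Microsoft Windows Setup (x64)". The /Name used above targets index 2, the image Setup actually boots from the flash drive. If the WinPE image (index 1) is also used, the same drivers can be injected into it as well; a minimal PowerShell sketch, using the same assumed folders as above:
  # Optional: inject the same NIC/USB 3/chipset drivers into the WinPE image (index 1)
  Mount-WindowsImage -ImagePath "D:\sources\boot.wim" -Index 1 -Path "C:\Mount"
  Add-WindowsDriver -Path "C:\Mount" -Driver "C:\Mount_Driver" -Recurse
  Dismount-WindowsImage -Path "C:\Mount" -Save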
The above process can be used to install the newest RAID driver into a Windows Server Boot.WIM and Install.WIM to facilitate a smoother install via flash drive.
Currently, we are using Kingston DTR3.0 G2 16GB flash drives as they have good read and decent write speeds. Please feel free to comment with your suggestions on reasonably priced 16GB and 32GB USB 3 flash drives that have good read and write speeds.
Good to us is ~75 MB/second read and ~35 MB/second write.
Thanks for reading! :)
Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Our Cloud Service

Saturday, 4 February 2017

Hyper-V Compute, Storage Spaces Storage, and S2D Hyper-Converged Solutions

Lately, we here at MPECS Inc. have been designing, implementing, servicing, and supporting highly available solutions: hyper-converged clusters, Hyper-V compute clusters, Storage Spaces storage, and lab environments, along with standalone Hyper-V server solutions.

Here are some of the things we have been working on recently or have deployed within the last year.

Cluster Solutions

As has been posted here on our blog previously, we have invested heavily in the Windows Server 2016 story, especially in Storage Spaces Direct (S2D) (S2D blog posts):


Proof-of-Concept (PoC) Storage Spaces Direct (S2D) Hyper-Converged or S2D SOFS cluster solution

The above Storage Spaces Direct PoC, based on the Intel Server System R2224WTTYSR, provides us with the necessary experience to deliver the hundreds of thousands of real-world, everyday IOPS that our client solutions require. We can tailor the solution for a graphics firm that needs multiple 10GbE data reads out to its users' systems, or for an engineering and architectural firm that requires high-performance storage for its rendering farms.

Another Storage Spaces Direct PoC we are working on is the Kepler-47 (TechNet Blog Post):


Proof-of-Concept Storage Spaces Direct 2-Node for under $8K with storage!

Our goal for Kepler-47 is to deploy this solution to clients that would normally get a single or dual Hyper-V server setup with Hyper-V Replica. Our recipe includes Intel 3700, 3600, and 3500 series SSDs, a SuperMicro Mini-ITX Intel Xeon Processor E3-1200 v5 series board, the 8-bay chassis, and Mellanox ConnectX-3 for direct-connected RDMA East-West traffic. Cost for the 2-node cluster is about the same as one bigger Intel Server System that would run a client's entire virtualization stack.


2016: Deployed 2 Node SOFS via Quanta JB4602 with ~400TB Storage Spaces Parity

In the late summer of 2016 we deployed the above SOFS cluster with the ultimate aim of adding three more JBODs for over 1.6PB (petabytes) of very cost-efficient Storage Spaces Parity storage for our client's video and image file archive. The solution utilizes four 10GbE paths per node and SMB Multichannel to provide robust access to the files on the cluster. Six HGST SAS SSDs provide the needed high-speed cache for writes to the cluster.
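
As a rough illustration of how that kind of parity capacity gets carved out of a JBOD-backed pool, here is a minimal PowerShell sketch. The pool name, sizes, and write cache size are hypothetical, not the deployed configuration:

  # Hypothetical names and sizes - illustration only, not the production build
  $SubSystem = Get-StorageSubSystem -FriendlyName "Clustered Windows Storage*"
  New-StoragePool -FriendlyName "ArchivePool" -StorageSubSystemFriendlyName $SubSystem.FriendlyName `
      -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
  New-VirtualDisk -StoragePoolFriendlyName "ArchivePool" -FriendlyName "ArchiveParity" `
      -ResiliencySettingName Parity -ProvisioningType Fixed `
      -Size 100TB -WriteCacheSize 32GB    # SAS SSD journal absorbs the parity write penalty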

Our smallest cluster client is a 15-seat accounting firm with a 2-node clustered Storage Spaces and Hyper-V setup (our blog post). Some of our largest clients are SME hosting companies with hundreds of tenants and VMs running on SOFS storage and Hyper-V compute clusters.

We can deploy highly available solutions for $6K and up in hardware, rendering standalone Hyper-V or VMware solutions moot!

Server and Storage Hardware

We primarily utilize Intel Server Systems and Storage, as Intel's support is second to none and our solution price points become more than competitive with equivalent Tier 1 solutions. When required, we utilize Dell Server Systems and Storage for solutions that require a 4-hour on-site warranty over 3 to 5 years or more.

Our primary go-tos for disaggregated SOFS cluster storage (storage nodes + direct-attached storage JBOD(s)) are Quanta QCT JBODs and DataON Storage JBODs. We've had great success with both companies' storage products.

For drives we deploy Intel NVMe SSDs (PCIe add-in and 2.5"), Intel SATA SSDs, and HGST SAS SSDs. We advise being very aware of each JBOD vendor's Hardware Compatibility List (HCL) before jumping on just any SAS SSD listed in the Windows Server Catalog Storage Spaces approved list (Microsoft Windows Server Catalog Site for Storage Spaces).

Important: Utilizing just any vendor’s drive in a Storage Spaces or Storage Spaces Direct setting can be a _costly_ error! One needs to do a lot of homework before deploying any solution into production. BTDT (Been There Done That)

The spinning media we use depends on the hyper-converged or storage solution we are deploying and the results of our thorough testing.

Network Fabrics

In a Storage Spaces Direct (S2D) setting our East-West (node-to-node) fabric is 10Gb to 100Gb Mellanox with RDMA via RoCE v1 or RoCE v2 (RDMA over Converged Ethernet) (our blog post), depending on the Mellanox ConnectX NIC version. We also turn to RoCE for North-South (compute to storage) traffic in our disaggregated cluster solutions.
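
A quick way to confirm that the RDMA fabric is actually carrying SMB traffic, rather than silently falling back to plain TCP, is with the in-box SMB and NetAdapter cmdlets. A minimal sketch, run on a node while storage or live migration traffic is flowing:

  Get-NetAdapterRdma | Where-Object Enabled                  # RDMA enabled on the Mellanox ports?
  Get-SmbClientNetworkInterface | Where-Object RdmaCapable   # SMB sees the interfaces as RDMA capable?
  Get-SmbMultichannelConnection                              # live connections; check the RDMA capable columns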

For 10GbE starter solutions for both the storage and compute networks, the NETGEAR XS716T is the go-to switch. We always deploy the switches behind either the storage-to-compute fabric or the workload Hyper-V virtual switch in pairs to provide network resilience. NETGEAR's switches are very well priced for the entry-level to mid-level solutions we deploy.
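
On Windows Server 2016 hosts, the paired ports behind the workload virtual switch can be bound together with Switch Embedded Teaming (SET) rather than a separate LBFO team. A minimal sketch, with hypothetical adapter names:

  # Hypothetical adapter names - team the two 10GbE ports directly into the vSwitch
  New-VMSwitch -Name "vSwitch-Workload" -NetAdapterName "10GbE-1","10GbE-2" `
      -EnableEmbeddedTeaming $true -AllowManagementOS $false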

Cluster Lab

It's no secret that we invest a lot in our client solution labs and network shadow solutions (our blog post). It is a point of principle that we make sure our solutions work as promised _before_ we even consider selling them to our clients.

One does not need to look far to find five-figure, six-figure, seven-figure, or larger solution failures. Recent catastrophic failures at the Australian Taxation Office (Bing search) or 123-reg (Bing search) come to mind. It's not difficult to find stories of very expensive solutions failing to deliver on their big promises despite their big price tags.

The onus is on us to make sure we can under promise and over deliver on every solution!

Our Solutions

We can deliver a wide variety of solutions with the following being a partial list.

  • Storage Spaces Direct (S2D)
    • 2 to 16 nodes
    • Hyper-Converged running both compute and storage
    • SOFS mode to provide high IOPS storage
    • Hardware agnostic solution sets
    • Host UPDs (User Profile Disks) in Azure, on VMware, or on our solutions
  • Scale-Out File Server Clusters (2 to 5 nodes, 1 JBOD or more)
    • IOPS tuned for intended workload performance
    • Large Volume backup and archival storage
    • Multiple Enclosure Resilience for additional redundancy
  • Hyper-V Compute Clusters (2 to 64 nodes)
    • Tuned to workload type
    • Tuned for workload density
  • Clustered Storage Spaces (2 nodes + 1 JBOD)
    • Our entry-level go-to for small to medium business
    • Kepler-47 fits in this space too
  • RDMA RoCE via Mellanox Ethernet Fabrics for Storage <—> Compute
    • We can deploy 10Gb to 100Gb of RDMA fabric
    • High-Performance storage to compute
    • Hyper-converged East-West fabrics
  • Shadow Lab for production environments
    • Test those patches or application updates on a shadow lab
  • Learning Lab
    • Our lab solutions are very inexpensive
    • Four node S2D cluster that fits into a carry-on
    • Can include an hour or more for direct one-on-one or small-group learning
      • Save _lots_ of time sifting through all the chaff to build that first cluster
  • Standalone Hyper-V Servers
    • We can tailor and deliver standalone Hyper-V servers
    • Hyper-V Replica setups to provide some resilience
    • Ready to greenfield deploy, migrate from existing, or side-by-side migrate

Our solutions arrive at our clients' doors ready to deploy production or lab workloads. Just ask us!

Or, if you need help with an existing setup we’re here. Please feel free to reach out.

Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book