Tuesday, 31 March 2015

Our Default OU and Group Policy Structure

Over the years, between our experience with the Small Business Server Organizational Unit (OU) and Group Policy Object (GPO) structures and wearing out a few copies of Jeremy Moskowitz’s books, we’ve honed our Group Policy configurations down to an _almost_ science. ;)

Today, with our own Small Business Solution (SBS) in production we use the following OU and GPO structure as a starting point:

[Image: our default OU and GPO structure as seen in Group Policy Management.]

We tailor all GPO settings around the intended recipient of those settings.

We use WMI filters to delineate desktop operating systems from server and domain controller (DC) operating systems. Note that the GPOs for those two sets of systems are not present in the above snip.

They would be:

  • Default Update Services Client Computers Policy
  • Default Update Services Server Computers Policy

Both enable Client-Side Targeting for WSUS managed updating.
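
For those curious, here’s a minimal sketch of how one can check the WQL behind those two WMI filters from PowerShell. The ProductType values come straight from Win32_OperatingSystem (1 = workstation, 2 = domain controller, 3 = member server); the WSUS computer group name in the comment is a placeholder for whatever groups exist in a given WSUS console:

  # Desktop/client operating systems (what the client computers GPO targets).
  Get-CimInstance -Query 'SELECT * FROM Win32_OperatingSystem WHERE ProductType = 1' |
      Select-Object Caption, ProductType

  # Servers and domain controllers (what the server computers GPO targets).
  Get-CimInstance -Query 'SELECT * FROM Win32_OperatingSystem WHERE ProductType = 2 OR ProductType = 3' |
      Select-Object Caption, ProductType

  # Client-Side Targeting itself is a GPO setting, not a script:
  #   Computer Configuration > Policies > Administrative Templates >
  #   Windows Components > Windows Update > Enable client-side targeting
  # pointed at the appropriate WSUS computer group (e.g. "Client Computers").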

NOTE: We _never_ edit the Default Domain Policy or the Default Domain Controllers Policy. EVER!

When we need something, we create a GPO and link it to the OU containing the intended recipient objects, or we scope the GPO via Security Group membership.
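
As a rough sketch of that workflow in PowerShell (the GPO, OU, and security group names below are made-up examples, not our production names):

  Import-Module GroupPolicy

  # Create a GPO and link it to the OU that holds the intended recipients.
  New-GPO -Name 'Folder Redirection - Staff' |
      New-GPLink -Target 'OU=Staff Users,OU=MyBusiness,DC=example,DC=local' -LinkEnabled Yes

  # Or, scope the GPO by Security Group membership instead: apply only to
  # members of a group and drop Apply from Authenticated Users.
  Set-GPPermission -Name 'Folder Redirection - Staff' -TargetName 'Staff Folder Redirection' `
      -TargetType Group -PermissionLevel GpoApply
  Set-GPPermission -Name 'Folder Redirection - Staff' -TargetName 'Authenticated Users' `
      -TargetType Group -PermissionLevel GpoRead -Replace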

Some GP Pearls

All GPOs scoped to computers have the User Configuration settings disabled while GPOs scoped to users have the Computer Configuration settings disabled.
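
Via the GroupPolicy module that looks something like the following (the first GPO name is one of ours from above; the second is a placeholder):

  # GPO aimed at computers: turn off the unused User Configuration half.
  (Get-GPO -Name 'Default Update Services Client Computers Policy').GpoStatus = 'UserSettingsDisabled'

  # GPO aimed at users: turn off the unused Computer Configuration half.
  (Get-GPO -Name 'Folder Redirection - Staff').GpoStatus = 'ComputerSettingsDisabled'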


We don’t use Group Policy Loopback Processing. There’s just too much room for unintended consequences. Our structure above gives us the flexibility we need to hone our GPO settings down to a user or computer if need be.

Filters are OU membership, Security Group membership, or WMI Filtering.

GPO settings are like Cascading Style Sheets. Settings cascade from the domain level down through the OU structure to the recipient object. The closer the GPO to that object the more weight that GPO’s settings have.

We do not duplicate settings or put opposite settings in GPOs further down the OU structure. We create and link our GPOs accordingly.
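
A quick way to see exactly which GPOs hit a given OU, and in what precedence order, is the following (the OU path is a placeholder):

  # InheritedGpoLinks lists every link that applies to the OU in precedence order,
  # the same view as GPMC's Group Policy Inheritance tab.
  Get-GPInheritance -Target 'OU=Staff Users,OU=MyBusiness,DC=example,DC=local' |
      Select-Object -ExpandProperty InheritedGpoLinks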

We always enable the Group Policy Central Store (blog post) on our DCs. This makes things so much easier in the long run!
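
Creating the central store is nothing more than copying the local PolicyDefinitions folder up to SYSVOL on a DC, something along these lines (substitute the actual domain name):

  # Seed the Group Policy Central Store from a machine carrying current ADMX/ADML templates.
  robocopy C:\Windows\PolicyDefinitions \\example.local\SYSVOL\example.local\Policies\PolicyDefinitions /E /R:1 /W:1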

We always enable the AD Recycle Bin and, if at all possible, have the forest and domain functional levels at the latest available version.
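
For reference, a minimal sketch of checking the levels and flipping on the recycle bin with the ActiveDirectory module (example.local stands in for the real forest root):

  Import-Module ActiveDirectory

  # Check the current forest and domain functional levels first.
  (Get-ADForest).ForestMode
  (Get-ADDomain).DomainMode

  # Enable the AD Recycle Bin. Requires 2008 R2 forest functional level or better
  # and, once enabled, cannot be turned off.
  Enable-ADOptionalFeature -Identity 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target 'example.local'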

We test any changes we intend to make in a lab setting, on a restored copy of our client’s network, first! We test _any_ and _all_ intended settings changes and additions in a lab first!

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book

Thursday, 12 March 2015

Hyper-V: Broadcom Gigabit NICs and Virtual Machine Queues (VMQ)

Here is an explanation posted to the Experts Exchange forum that we believe needs a broader audience.

***

VMQ is a virtual networking structure that allows virtual switch (vSwitch) traffic to be processed by the various cores in a CPU. Without VMQ, only one core would be processing those packets.

In a Gigabit setting the point is moot, since the maximum of roughly 100MB/second per physical port is not going to tax any modern CPU core.

In a 10GbE setting, where we have an abundance of throughput available to the virtual switch, things change very quickly. A single core processing the entire virtual switch can then become a bottleneck.

In that setting, and beyond, VMQ starts tossing vSwitch processes out across the CPU's cores to distribute the load. Thus, we essentially eliminate the CPU core as a bottleneck source.

For whatever reason, Broadcom did not disable this setting in their 1Gb NIC drivers. As we understand things the specification for VMQ requires it to be disabled on 1GbE ports.

VMQ enabled on Broadcom NICs has caused no end of grief for countless Hyper-V admins over the last number of years. With Broadcom NICs, one needs to disable Virtual Machine Queues (VMQ) on _all_ Broadcom Gigabit physical ports in a system to avoid what becomes a vSwitch traffic jam.

***

The above is a summary of conversations had with networking specialists. If there are any corrections or specifics that all y’all have about the VMQ structures please feel free to comment! :)
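
For anyone bumping into this, here is a minimal PowerShell sketch of checking and then disabling VMQ on the Gigabit ports of a Hyper-V host. The Broadcom match on the interface description is an assumption; adjust it to suit the NICs actually in the box:

  # Show the current VMQ state on every physical NIC.
  Get-NetAdapterVmq | Format-Table Name, InterfaceDescription, Enabled

  # Disable VMQ on all Broadcom Gigabit ports (assumes the link is up at 1 Gbps).
  Get-NetAdapter -Physical |
      Where-Object { $_.InterfaceDescription -like '*Broadcom*' -and $_.LinkSpeed -eq '1 Gbps' } |
      ForEach-Object { Disable-NetAdapterVmq -Name $_.Name }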

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book

Tuesday, 10 March 2015

Cluster: Asymmetric or iSCSI SAN Storage Configuration and Performance Considerations

When we set up a new asymmetric cluster, or when one is using an iSCSI SAN for central storage, the following is a guideline for how we would configure our storage.

Our configuration would be as follows:

  • JBOD or SAN Storage
    • 6TB of available storage
  • (2) Hyper-V Nodes
    • 256GB ECC RAM Each
    • 120GB DC S3500 Series Intel SSD RAID 1 for OS
    • Dual 6Gbps SAS HBAs (JBOD) or Dual Intel X540T2 10GbE (iSCSI)

There are three key storage components we need to configure.

  1. Cluster Witness (non-CSV)
    • 1.5GB Storage
  2. Common Files (CSV 1)
    • Hyper-V Settings Files
    • VM Memory Files
    • 650GB Storage
  3. Our VHDX CSVs (balance of 5,492.5GB split 50/50)
    • CSV 2 at 2,746.25GB
    • CSV 3 at 2,746.25GB

Given that our two nodes have a sum total of 512GB of RAM available to the VMs, we would set up our Common Files CSV with 650GB of available storage. In practice we’d be provisioning a maximum of about 254GB of vRAM at best, since all of the VMs need to be able to run on one node with a little left over for the host OS.

VHDX CSVs

We split up our storage for VHDX files into at least two Storage Spaces/LUNs. Each node would own one of the resulting CSVs.

We do this to split up the I/O between the two nodes. If we had just one 5.5TB CSV then all I/O for that CSV would be processed by just the owner node.

It becomes pretty obvious that having all I/O managed by just one of the nodes may present a bottleneck to overall storage performance. At the least, it leaves one node not carrying a share of the load.
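
Once the two VHDX volumes are added as CSVs, splitting the ownership looks something like this (cluster disk and node names are placeholders):

  # Add the two VHDX volumes as Cluster Shared Volumes.
  Add-ClusterSharedVolume -Name 'Cluster Disk 2'
  Add-ClusterSharedVolume -Name 'Cluster Disk 3'

  # Put one CSV on each node so both nodes carry a share of the I/O.
  Move-ClusterSharedVolume -Name 'Cluster Disk 2' -Node 'HV-Node-01'
  Move-ClusterSharedVolume -Name 'Cluster Disk 3' -Node 'HV-Node-02'

  # Verify which node owns what.
  Get-ClusterSharedVolume | Format-Table Name, OwnerNode, State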

Performance Considerations

Okay, we have our storage configured as above.

Now it’s time to set up our workloads.

  • VM 0: DC
  • VM 2: Exchange 2013
  • VM 3-6: RDS Farm (Remote Desktop Services)
  • VM 7: SQL
  • VM 8: LoB (Line-of-Business) apps, WSUS, File, and Print

Our highest IOPS load would be SQL followed by our two RDSH VMs and then our LoB VM. Exchange likes a lot more RAM than I/O.

When provisioning our VHDX files we would be careful to make sure our high IOPS VMs are distributed between the two CSVs as evenly as possible. This way we avoid sending most of our I/O through one node.
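
A hypothetical placement sketch, with made-up VM names, sizes, and vRAM figures, might look like this: configuration files on the Common Files CSV (Volume1) and the VHDX files split across CSV 2 and CSV 3:

  # SQL VM: settings on the Common Files CSV, VHDX on CSV 2 (node 1's volume).
  New-VM -Name 'SQL01' -Generation 2 -MemoryStartupBytes 32GB `
      -Path 'C:\ClusterStorage\Volume1\Hyper-V' `
      -NewVHDPath 'C:\ClusterStorage\Volume2\SQL01\SQL01.vhdx' -NewVHDSizeBytes 500GB

  # RDSH VM: settings on the Common Files CSV, VHDX on CSV 3 (node 2's volume).
  New-VM -Name 'RDSH01' -Generation 2 -MemoryStartupBytes 24GB `
      -Path 'C:\ClusterStorage\Volume1\Hyper-V' `
      -NewVHDPath 'C:\ClusterStorage\Volume3\RDSH01\RDSH01.vhdx' -NewVHDSizeBytes 150GB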

Why 650GB for Common Files?

Even though our VM memory files would take up about 254GB of that available storage, one also needs space for the configuration files themselves, though they are quite small in size, and additional space for those just-in-case moments.

One such moment is when an admin pulls the trigger on a snapshot/checkpoint. By default the differencing disk would be dropped into the Common Files storage location.

One would hope that monitoring software would throw up an alarm letting folks know that their cluster is going to go full-stop when that location runs out of space! But sometimes that is _not_ the case, so we need room to run the needed merge process to get things going again.
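
A couple of quick checks help here (property and parameter names as of Server 2012 R2 Hyper-V; the VM name and path are placeholders):

  # See where each VM will drop its checkpoint and Smart Paging files.
  Get-VM | Select-Object Name, SnapshotFileLocation, SmartPagingFilePath

  # If need be, point a VM's checkpoints at a location with known headroom.
  Set-VM -Name 'SQL01' -SnapshotFileLocation 'C:\ClusterStorage\Volume1\Hyper-V\SQL01'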

How do I know?

Okay, all of the above is just fine and dandy, and it raises the following question: How do I really know how the cluster will perform?

No one client’s environment is like another. So, we need to take performance baselines across their various workloads and talk to the LoB vendors about their products and what they need to perform well.
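
A bare-bones baseline can be had with nothing but the in-box performance counters, along these lines (the sample interval, duration, and output path are arbitrary examples):

  # Counters that tell most of the storage and CPU story.
  $counters = @(
      '\LogicalDisk(*)\Avg. Disk sec/Read'
      '\LogicalDisk(*)\Avg. Disk sec/Write'
      '\LogicalDisk(*)\Disk Transfers/sec'
      '\Processor(_Total)\% Processor Time'
  )

  # Sample every 15 seconds for an hour and keep the log for later comparison.
  Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 240 |
      Export-Counter -Path 'C:\Baselines\pre-migration.blg' -FileFormat BLG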

We have a standing policy to build out a proof-of-concept system prior to reselling that solution to our clients. As a result of both running baselines with various apps and building out our clusters ahead of time, we now have a pretty good idea of what needs to be built into a cluster solution to meet our clients’ needs.

That being said, we need to test our configurations thoroughly. Nothing could be worse than setting up a $95K cluster configuration that was promised to outperform the previous solution only to have it fall flat on its face. :(

Test. Test. Test. And, test again!

NOTE: We do _not_ deploy iSCSI solutions anywhere in our solutions matrix. We are a direct attached storage (SAS-based DAS) house. However, the configuration principles mentioned above apply to those deploying Hyper-V clusters on iSCSI-based storage.

EDIT 2015-03-26: Okay, so fingers were engaged prior to brain on that first word! ;)

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book