Friday 13 November 2015

Some I.T. Professional Business Pearls

Here are some thoughts, garnered over my 13 years in business so far, on the many aspects of running an I.T. business in today’s world.

  • Never reveal business operations
    • Financial
    • Clients’ names (never tell others which clients we support)
    • Projects, purchases, or other products and services we provide
  • Never volunteer _any_ information
  • Always tie scheduling updates to a specific time
    • I’m running behind, I’ll call in 15
    • I’m working on something that went sideways, let’s delay to tomorrow please
    • Always call, text, or reach out in person rather than via e-mail
      • If calling, follow up with an e-mail: “thanks for allowing us to delay by a day” or some such note
    • We keep a priority band that our clients are aware of
      • Priority 1: Business Critical Outage
      • Priority 2: User down or problematic app
      • Priority 3: App updates, changes and such
      • Priority 4: All of the other stuff
  • Never, ever, give a customer poop for leaving
    • I learned this one the hard way
    • Business is business no matter how the termination was handled by the now former client
      • Never, ever, take things personally
    • Respect their decision and acquiesce with grace and integrity
    • Be silent
    • Resist the urge to be defensive - shut this one right down
    • Cooperate with the next I.T. company if need be
      • Give over the keys to the kingdom with ease
  • Always, and everywhere, do everything in writing
    • “Yes, Ms. Customer, I’d be more than happy to help you do X and it’ll be done on Y” via e-mail after a conversation
    • Always confirm project add-ons and scope creep with an e-mail indicating back charges and extras as they fall out of scope
    • Keep an extensive set of audit notes for each client
    • Keep an extensive change log for all clients
    • Snip everything (screen captures), name them accordingly, and keep them forever
    • Be disciplined and document everything
  • Use a time keeper
    • Outlook Tasks or CRM with due dates and reminders
    • OneNote notebook with Surface Pro 4 and pen close at hand
    • Note keeper pocket notebook and mechanical pencils
    • Write all requests down and transfer what needs to be to Outlook or CRM

The above is a distillation of my 13 years running our I.T. company. I hope it helps! :)

Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Thursday 12 November 2015

Exchange Stall: Purging Exchange Logs

We’ve got our SBS (Small Business Solution) set up for our clients’ all-in-the-Cloud experience.

What we’re finding is that some vendors’ backup systems don’t trigger VSS within a VM running Exchange, thus leaving Exchange thinking it is not being backed up. This means the Exchange VM eventually stalls when the partition hosting the logs runs out of space.

To remedy that, run the following in a script on the Exchange VM, say once a week or more often depending on the volume of mail:

  1. Elevated CMD
  2. DiskShadow
  3. Add Volume C:
  4. Begin Backup
  5. Create
  6. End Backup 

Once the snapshot completes Exchange will think it’s been backed up and truncate the logs.
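The steps above can be dropped into a DiskShadow script file and run from Task Scheduler. A minimal sketch (the script path, task name, and schedule below are just examples):

```
# C:\Scripts\ExchangeVSS.dsh
# Run with: DiskShadow /s C:\Scripts\ExchangeVSS.dsh
SET VERBOSE ON
ADD VOLUME C:
BEGIN BACKUP
CREATE
END BACKUP
```

A weekly scheduled task along the lines of `schtasks /create /tn "Exchange Log Flush" /tr "diskshadow /s C:\Scripts\ExchangeVSS.dsh" /sc weekly /d SUN /st 03:00 /ru SYSTEM` then takes care of it without anyone needing to log on.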

NOTE: Make sure the VM/Server is backed up!

Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Wednesday 21 October 2015

Folder Permissions: How To Properly Disinherit Permissions

We run into a lot of ACL corruption issues and access issues when a folder has not been disinherited properly.

The following is the best method for disinheriting permissions a folder receives from its parent:

  1. Right click on the folder and click Properties
  2. Click the Advanced button
  3. Click the Change Permissions button if required
  4. Click the Disable inheritance button
  5. Click the Convert inherited permissions into explicit permissions on this object.
  6. Click on DOMAIN\Domain Users or MACHINE\Users and then the Remove button
      • This removes all domain users’ access to that folder
  7. Add the necessary security groups and give them Modify (MOD) permissions
  8. OPTION: On existing folder sets one can click Replace all child object permission entries with inheritable permission entries from this object
    • Think carefully before clicking this one: if there are customized permissions _below_ the folder being disinherited, those permissions would be lost.
  9. Click Apply and OK.
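For scripted deployments the same steps map onto icacls; a sketch, with a hypothetical share path and security group:

```
:: Disable inheritance and convert the inherited ACEs to explicit ones (steps 4 and 5)
icacls "D:\Shares\Accounting" /inheritance:d

:: Remove the Domain Users entry (step 6)
icacls "D:\Shares\Accounting" /remove:g "DOMAIN\Domain Users"

:: Grant Modify to the intended security group, inheritable by subfolders and files (step 7)
icacls "D:\Shares\Accounting" /grant "DOMAIN\SG-Accounting:(OI)(CI)M"
```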

From there our folder would now have the necessary permissions for users in the specific security group(s) to make changes.

We enable Access-based Enumeration on _all_ shares we deploy by default. This means that users that are not in the above assigned security group(s) will not see the folder in their File Explorer.

One of the warning signs that the above process was not followed is domain admin or local admin accounts getting a UAC prompt when navigating the physical folder set.

As a rule we follow a trunk –> branch –> leaf structure for our folders. All users have a single point of entry with some subfolders having their inheritance blocked.

From there we prefer to _not_ disinherit any further down-level folders unless absolutely necessary because that inevitably leads to access issues and/or permissions corruption.

Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Friday 16 October 2015

E-mail NDR: #5.1.1 SMTP; 550 Rejecting for Sender Policy Framework (SPF too many lookups)

One of our clients was having issues sending an e-mail to one of our regional ISP’s e-mail servers.

  • Remote Server returned ‘< #5.1.1 SMTP; 550 Rejecting for Sender Policy Framework>’
    • SPF too many lookups

There was no further information. The specifics came from a very helpful mail technician at the ISP.

So, we started to dig around and came up with the following:

A ticket based on this problem went through Third Tier’s Help Desk this week, with Dave Shackelford pointing to a JangoMail blog post on the SPF lookup limit.

So, what does all of this mean?

It means that we need to make sure all of our clients that send mail via an ISP SMTP server, a third-party sanitation and continuity service, or a mail hosting service have a correct SPF record in place.

As the JangoMail blog post makes clear, we may have to jump through a few hoops to get it right, but get it right we must as our client’s mail is critical to their business.
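For context: SPF evaluation (RFC 7208) allows at most ten DNS-querying mechanisms (include, a, mx, ptr, exists, plus the redirect modifier) per check; ip4 and ip6 entries cost nothing. A hypothetical record that stays well under the limit:

```
example.com.  3600  IN  TXT  "v=spf1 ip4:203.0.113.25 include:spf.protection.outlook.com -all"
```

Remember that lookups inside an included record count against the same limit, so chained includes from multiple mail services add up fast. `nslookup -type=TXT example.com` is a quick way to see what is actually published.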

Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Tuesday 29 September 2015

A 2 Node Hyper-V Cluster with Clustered Storage Spaces

We are in the process of finishing up a client’s migration from clustered SBS 2011 Standard to our SBS (Small Business Solution) stack solution and the following cluster configuration.

The setup is as follows in order of appearance top to bottom:
  • 1U Intel Xeon E3 series server running as PDCe, ISO storage, and other non-essential roles
  • 1U single socket Intel Xeon E5 series Hyper-V node
    • On-Board i350T4 plus add-in i350T4
    • Dual 6Gbps Intel/LSI SAS HBAs
  • 1U single socket Intel Xeon E5 series Hyper-V node
    • On-Board i350T4 plus add-in i350T4
    • Dual 6Gbps Intel/LSI SAS HBAs
  • 2U DataON DNS-1640d JBOD
    • Connected via dual 6Gbps SAS cables per node
The operating system across the board for all physical and virtual servers is Windows Server 2012 R2.

Storage sharing and arbitration is handled by clustered Storage Spaces. The above setup has 1.2TB 10K HGST SAS drives (DataON HCL approved) set up in a Storage Spaces 3-way mirror with the standard Space having 2 columns.

The client we deployed this cluster into had a cluster already in place based on Windows Server 2008 R2. They are all of 15 to 18 seats and value the uptime insurance a cluster gives them, as downtime for them is expensive.

Note that the cost of this particular setup based on Intel Server Systems and the DataON JBOD is very reasonable.
Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Monday 28 September 2015

Warning: Sage 50 2016 Server Manager will Reboot Without Prompting

We were installing the database manager on the backend server for a client.

It started the full .NET 4.5.2 installer, which suddenly disappeared after a few minutes, and then the server rebooted without any prompt.

So, keep in mind that this one will need to be done when no one is accessing that backend server! The same goes for the workstation side: the user whose machine we are setting up or updating will have it reboot spontaneously.

Also, Sage released this version as an “update”. We received quite a few calls from our accounting firms when they could no longer connect to their Sage/Simply data on the server.

In the end, it turned out that no prompt was given to the user that the “update” was actually an “upgrade”. 

Yo, Sage! A little warning would be appropriate please.

EDIT: Updated a bit for specifics.

Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Thursday 23 July 2015

User Profile Tip: Windows Explorer Favorites

Some of us are big on redirecting most folders to the server.

Some of us have learned the hard way to leave AppData and its contents alone on the local desktop. ;)

By default we redirect Desktop, My Documents/Documents, and Favorites.

We recently did a profile refresh using our Event ID 1511, 1515 Profile loss to TEMP method for one of our clients. Their profile had become hopelessly corrupted.

Now, to date we’ve not encountered too many folks that avidly use the Windows Explorer Favorites (pinning):


The above is a snip of my own Windows Explorer pins.

Okay, so we don’t redirect that folder and we’ve not really had to migrate those links before.

That begged the question: Where the chicken are they?!?

Our search-fu, both on the local machine (via AppData, where we thought they should be) and via the Internet, turned up nothing but one clue: %UserProfile%\Links.



We copied the files from the UserProfile–OLD folder into their new UserProfile\Links folder and they were happy to have them back.
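The copy itself is a one-liner; a sketch with a hypothetical user name and the renamed profile folder:

```
:: Bring the Windows Explorer pins over from the old (renamed) profile
robocopy "C:\Users\jsmith-OLD\Links" "C:\Users\jsmith\Links" /E /R:1 /W:1
```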

We’ve since added this step to our checklist and will pay a bit more attention to our client’s environments to see if we need to redirect %UserProfile%\Links to save on some time later on.

Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Thursday 16 July 2015

A Brief on Storage Spaces

This is a repost of a comment made on an Experts Exchange question.

Storage Spaces (SS) is pretty unique. It's driven by Microsoft's need to run data centres full of storage but not foot the bill for huge SAN arrays.

There is a Windows Server Catalogue page of approved hardware for a SS solution.

Our preference is for Quanta and DataON for non-Tier 1 solutions. We've done a lot of testing, and have deployed to on-premises and data centre based clients, prior to being confident in our solution set. That gets expensive _fast_ as we do not deploy anything we've not tested first.

As can be seen, Dell has their products on the SS list and Microsoft chose them for their Cloud Platform System.

A lot of planning and foresight has gone into SS, especially with the upcoming Windows Server 2016 feature set (should the features all make it into RTM). There are a lot of features aimed at the big storage vendors that will allow us to deploy SS solution sets at a fraction of the $/GB cost of the big box vendors.

As an FYI, we have a solution set for IaaS vendors that has been in production, working flawlessly, for close to two years now. Backend is 10GbE to start, with 40Gb SMB Direct (RDMA) and 56Gb SMB Direct (RDMA) over InfiniBand as options.

The solution can scale from 60 drives (4TB, 6TB, or 8TB NearLine SAS) in one 60-bay JBOD to three JBODs, four, or more. With three or more we get enclosure resilience: a full enclosure of drives can fail and SS keeps moving along until that enclosure is brought back up or replaced.

Storage Spaces' cost per IOPS, cost per GB/second (throughput), and cost per GB can't be matched.

Check out Storage Spaces Direct (S2D). Our v2 data centre product will be based on S2D with an all-flash option providing _millions_ of IOPS to tenants. Storage fabric via RoCE while storage to compute would be RDMA via InfiniBand.

There is a very strong economic motivation to get Storage Spaces right, for Microsoft and for us. We’ve staked our company direction on Microsoft’s direction with storage, while Microsoft’s driver is reducing the overall cost of storage in their Azure data centres.

We believe our Cloud Services Provider data centre backend products are some of the best available and Storage Spaces is a critical piece of the puzzle.
Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Tuesday 14 July 2015

Third Tier: Be the Cloud – Client Facing Environment Demo & Chat

Tomorrow at 1700MST is my regular monthly Third Tier chat.

Since we’ve spent a fair amount of time on our Be the Cloud product (IaaS to your clients) backend setup, now it’s time to have a look at what the BtC clients would be working with.

The client environment is based on our SBS (Small Business Solution) that provides a Small Business Server like IT experience but greatly improved.

  • Remote Web Portal access
  • Remote desktop access
  • Exchange services (OWA, EAS, OA, Public Folders, etc.)
  • RemoteApp based application access
  • SharePoint CMS for Check Out/In and Versioning in Documentation

Our goal in designing our SBS was to give our clients as close to a Small Business Server experience as possible.

I believe we’ve done that in spades and after our chat tomorrow I believe you’ll agree. :)

When: 1700MST

Where: Third Tier

A recording of the chat, barring any technical difficulties, will be posted on the Third Tier blog at a later date.

Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Friday 10 July 2015

WebDAV Download Error: 0x800700DF File Size Exceeds Limit Allowed

We set up a WebDAV based file repository for some of our Cloud deployments.

When we did so we hit the following:


An unexpected error is keeping you from copying the file. If you continue to receive this error, you can use the error code to search for help with this problem.

Error 0x800700DF: The file size exceeds the limit allowed and cannot be saved.

A search indeed brings up a plethora of results, all with the same fix:

  1. Regedit
  2. HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters
  3. FileSizeLimitInBytes
    1. Set to the following Decimal: 4294967295
    2. Click OK
  4. Restart the WebClient service
  5. Refresh the Windows Explorer WebDAV window
  6. Authenticate again
  7. Voila!
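The same change can be scripted from an elevated prompt; 4294967295 is 0xFFFFFFFF, the maximum value (roughly 4GB):

```
reg add "HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters" /v FileSizeLimitInBytes /t REG_DWORD /d 4294967295 /f

:: Restart the service so the new limit takes effect
net stop WebClient
net start WebClient
```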

We’re now in business.

Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Monday 6 July 2015

Happy Monday Because …

Kittens! :)


Momma’s Momma was a Blue Point Siamese with an unknown Papa. In this case the kitten’s Papa is our male black and white cat “Two Face”.


The black and grey tabby and the calico are females while the fully black and white one, like the Papa, and the cream coloured one are males.

There are many things in life that can bring a smile to our face. This is most certainly one of them.

Happy Monday everyone and thank you for reading! :D

Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Tuesday 30 June 2015

Hyper-V Virtualization 101: Hardware Considerations

When we deploy a Hyper-V virtualization solution we look at:
  1. VM IOPS Requirements
  2. VM vRAM Requirements
  3. VM vCPU Requirements
In that order.

Disk Subsystem

The disk subsystem tends to be the first bottleneck.
For a solution with dual E5-2600 series CPUs and 128GB of RAM hosting 16 VMs or thereabouts, we'd be at 16 to 24 10K SAS drives at minimum, behind a hardware RAID controller with 1GB of non-volatile or battery-backed cache.
RAID 6 is our go-to for array configuration.
Depending on workloads one can look at Intel's DC S3500 series SSDs or the higher endurance DC S3700 series models to get more IOPS out of the disk subsystem.


Memory

Keep in mind that the physical RAM is split between the two processors so one needs to be mindful of how the vRAM is divvied up between the VMs.
Too much vRAM on one or two VMs can cause the physical RAM to be juggled between the two physical CPUs (NUMA).
Note that each VM’s vRAM gets a file written to disk. So, if we are allocating 125GB of vRAM to the VMs there will be 125GB of files on disk.


CPU

And finally, each vCPU within a VM represents a thread to the physical CPU. For VMs with multiple vCPUs every thread (vCPU) for that VM needs to be processed by the CPU's pipeline in parallel. So, the more vCPUs we assign to a VM the more the CPU's logic needs to juggle the threads to have them processed.
The end result? More vCPUs is not always better.
I have an Experts Exchange article on Some Hyper-V Hardware and Software Best Practices that should be of some assistance too. In it I speak about the need to tweak the BIOS settings on the server, hardware configurations to eliminate single points of failure (SPFs), and more.


In the end, it is up to us to make sure we test out our configurations before we deploy them. Having a high five-figure SAN installed to solve certain performance “issues” only to find out they still exist _after_ the fact can be a very bad place to be.
We test all aspects of a standalone and clustered system to discover its strengths and weaknesses. While this can be a very expensive policy, to date we’ve not had one performance issue with our deployments.
Our testing also lets us present IOPS and throughput reports, based on sixteen different allocation sizes (hardware and software), to our client _and_ the vendor complaining about our system. ;)
Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Wednesday 17 June 2015

What's up, what's been happening, and what will be happening.

Wow, it's been a while hasn't it? :)

We've been _very_ busy with our business as well as a Cloud services start-up and Third Tier is keeping me hopping too.

I have a regular monthly Webinar via Third Tier where we've been spending time on the Third Tier product called "Be the Cloud". It is a solution set developed to provide a highly available backend for client facing services based on our SBS (Small Business Solution).

We, that is my family, took a much needed break in May for a couple of weeks of downtime as we'd not had any pause for a good 18 months prior. We were ready for that.

So, why the blogging pause?

There are a number of reasons.

One is that I've been so busy researching and working on new things that there hasn't been a lot of time left over for writing them all out. Ongoing client needs are obviously a part of that too.

Another had to do with waiting until we were okay to publish information on the upcoming Windows Server release. We Cluster MVPs, and others, were privileged to be very deeply involved with the early stages of the new product. But, we were required to remain mum. So, instead of risking anything I decided to hold off on publishing anything Server vNext related.

Plus, we really didn't have a lot of new content to post since we've about covered the gamut in Windows Server 2012 RTM/R2 and Windows Desktop. Things have been stable on that front other than a few patch related bumps in the road. So, nothing new there meant nothing new to write about. ;)

And finally, the old grey matter just needed a break. After all, I've been writing on this blog since the beginning of 2007! :)

So, what does this mean going forward?

It means that we will begin publishing content on a regular basis again once we've begun serious work with Windows Server vNext.

We have a whole host of lab hardware on the way that has a lot to do with what's happening in the new version of Windows Server that ties into our v2 for Be the Cloud and our own Cloud services backend.

We're also establishing some new key vendor relationships that will broaden our solution matrix with some really neat new features. As always, we build our solution sets and test them vigorously before considering a sale to a client.

And finally, we're reworking our PowerShell library into a nice and tidy OneNote notebook set to help us keep consistent across the board. This is quite time consuming as it becomes readily apparent that many steps are in the grey matter but not in Notepad or OneNote.

Things we're really excited about:
  • Storage Spaces Direct (S2D)
  • Storage Replication
  • Getting the Start Menu back for RDSH Deployments
    • Our first deployment on TP2 is going to happen soon so hopefully we do indeed have control over that feature again!
  • Deploying our first TP2 RDS Farm
  • Intel Server Systems are on the S2D approval list!
    • The Intel Server System R2224WTTYS is an excellent platform
  • Promise Storage J5000 series JBODs just got on the Storage Spaces approved list.
    • We've had a long history with Promise and are looking forward to re-establishing that relationship.
  • We've started working with Mellanox for assistance with SMB Direct (RDMA) and RoCE.
  • 12Gb SAS in HBAs and JBODs rocks for storage
    • 2 Node SOFS Cluster with JBOD is 96Gbps of aggregate ultra-low latency SAS bandwidth per node!
  • NVMe based storage (PCIe direct)
The list could go on and on as they come to mind. :)

Thank you all for your patience with the lack of posting lately. And, thank you all for your feedback and support over the years. It has been a privilege to get to know some of you and work with some of you as well.

We are most certainly looking forward to the many things we have coming down the pipe. 2015 is shaping up to be our best year ever with 2016 looking to build on that!

Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Tuesday 14 April 2015

RDS Error: RemoteApp - The digital signature of this RDP File cannot be verified.

The following error was received on a client’s system this morning:

RemoteApp

The digital signature of this RDP File cannot be verified. The remote connection cannot be started.

In this case the RDSH is using self-issued certificates for both Broker services. They had expired.

  1. Server Manager –> Remote Desktop Services –> Collections –> Tasks –> Edit Deployment Properties
  2. Click Certificates
  3. Click on the first Broker service and then the Create new certificate button
  4. Set a password and save to C:\Temp\2015-04-14-SelfIssuedSSL.pfx
  5. Click on the second Broker service and Select an Existing Certificate
  6. Choose the above newly created certificate
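For those who prefer PowerShell, the RemoteDesktop module can apply an existing certificate to both Broker roles; a sketch, with hypothetical broker and file names:

```
# Prompt for the PFX password once
$pfxPwd = Read-Host -AsSecureString -Prompt "PFX password"

# Apply the certificate to both Broker services (steps 5 and 6)
Set-RDCertificate -Role RDRedirector -ImportPath "C:\Temp\2015-04-14-SelfIssuedSSL.pfx" -Password $pfxPwd -ConnectionBroker "RDCB.domain.local" -Force
Set-RDCertificate -Role RDPublishing -ImportPath "C:\Temp\2015-04-14-SelfIssuedSSL.pfx" -Password $pfxPwd -ConnectionBroker "RDCB.domain.local" -Force
```

Note there is no direct PowerShell equivalent of the GUI’s Create new certificate button; the cmdlet imports a PFX that already exists.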

In the case where our clients’ domains are .LOCAL or .CORP or some other non-Internet facing TLD we leave those two self-issued.

If we have an Internet facing domain then we use a third party trusted certificate as can be seen in the snip above.

Because we are deploying a lot of Remote Desktop Services solutions, we always use an Internet TLD for the internal domain, after making sure the client owns that domain and it’s registered for a decade.

Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Tuesday 31 March 2015

Our Default OU and Group Policy Structure

Over the years between our experiences with the Small Business Server Organizational Unit (OU) and Group Policy Object (GPO) structures plus wearing out a few copies of Jeremy Moskowitz’s books we’ve come to hone our Group Policy configurations down to an _almost_ science. ;)

Today, with our own Small Business Solution (SBS) in production we use the following OU and GPO structure as a starting point:

We tailor all GPO settings around the intended recipient of those settings.

We use the WMI filters to delineate desktop OS versus Server and DC based operating systems. Note that the GPOs for those two sets of systems are not present in the above snip.

They would be:

  • Default Update Services Client Computers Policy
  • Default Update Services Server Computers Policy

Both enable Client-Side Targeting for WSUS managed updating.
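The WMI filter queries for that delineation look roughly like this, keying off Win32_OperatingSystem’s ProductType value (1 = workstation, 2 = domain controller, 3 = server):

```
Client computers:   SELECT * FROM Win32_OperatingSystem WHERE ProductType = "1"
Servers and DCs:    SELECT * FROM Win32_OperatingSystem WHERE ProductType = "2" OR ProductType = "3"
```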

NOTE: We _never_ edit the Default Domain Policy or the Default Domain Controllers Policy. EVER!

When we need something we create the GPO and link it to the OU containing the intended recipient objects or we scope the GPO via Security Group membership.

Some GP Pearls

All GPOs scoped to computers have the User Configuration settings disabled while GPOs scoped to users have the Computer Configuration settings disabled.


We don’t use Group Policy Loopback Processing. There’s just too much room for unintended consequences. Our structure above gives us the flexibility we need to hone our GPO settings down to a user or computer if need be.

GPO scoping is done via OU membership, Security Group membership, or WMI Filtering.

GPO settings are like Cascading Style Sheets. Settings cascade from the domain level down through the OU structure to the recipient object. The closer the GPO to that object the more weight that GPO’s settings have.

We do not duplicate settings or put opposite settings in GPOs further down the OU structure. We create and link our GPOs accordingly.

We always enable the Group Policy Central Store (blog post) on our DCs. This makes things so much easier in the long run!

We always enable the AD Recycle Bin and if at all possible have the Forest and Domain at the latest available OS version level.

We test _any_ and _all_ intended settings changes and additions in a lab setting, on a restored copy of our client’s network, first!

Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Thursday 12 March 2015

Hyper-V: Broadcom Gigabit NICs and Virtual Machine Queues (VMQ)

Here is an explanation posted to the Experts Exchange forum that we believe needs a broader audience.


VMQ is a virtual networking structure allowing virtual Switch (vSwitch) networking to be processed by the various cores in a CPU. Without VMQ only one core would be processing those packets.

In a Gigabit setting the point is moot since the maximum of 100MB/Second or thereabouts per physical port is not going to tax any modern CPU core.

In a 10GbE setting where we have an abundance of throughput available to the virtual switch things change very quickly. We can then see a single core processing the entire virtual switch being a bottleneck.

In that setting, and beyond, VMQ starts tossing vSwitch processes out across the CPU's cores to distribute the load. Thus, we essentially eliminate the CPU core as a bottleneck source.

For whatever reason, Broadcom did not disable this setting in their 1Gb NIC drivers. As we understand things the specification for VMQ requires it to be disabled on 1GbE ports.

VMQ enabled on Broadcom NICs has caused no end of grief over the last number of years for countless Hyper-V admins. With Broadcom NICs one needs to disable Virtual Machine Queues (VMQ) on _all_ Broadcom Gigabit physical ports in a system to avoid what becomes a vSwitch traffic jam.
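On Server 2012/2012 R2 the setting can be audited and flipped in one go with PowerShell instead of visiting each adapter's Advanced properties; a sketch:

```
# Show which physical ports currently have VMQ enabled
Get-NetAdapterVmq | Format-Table Name, Enabled

# Disable VMQ; -Name accepts wildcards, so narrow this to just the
# Broadcom Gigabit ports if faster NICs are also present in the box
Disable-NetAdapterVmq -Name "*"
```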


The above is a summary of conversations had with networking specialists. If there are any corrections or specifics that all y’all have about the VMQ structures please feel free to comment! :)

Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Tuesday 10 March 2015

Cluster: Asymmetric or iSCSI SAN Storage Configuration and Performance Considerations

When we set up a new asymmetric cluster, or when using an iSCSI SAN for central storage, the following is a guideline for how we would configure our storage.

Our configuration would be as follows:

  • JBOD or SAN Storage
    • 6TB of available storage
  • (2) Hyper-V Nodes
    • 256GB ECC RAM Each
    • 120GB DC S3500 Series Intel SSD RAID 1 for OS
    • Dual 6Gbps SAS HBAs (JBOD) or Dual Intel X540T2 10GbE (iSCSI)

There are three key storage components we need to configure.

  1. Cluster Witness (non-CSV)
    • 1.5GB Storage
  2. Common Files (CSV 1)
    • Hyper-V Settings Files
    • VM Memory Files
    • 650GB Storage
  3. Our VHDX CSVs (balance of 5,492.5GB split 50/50)
    • CSV 2 at 2,746.25GB
    • CSV 3 at 2,746.25GB

Given that our two nodes have a sum total of 512GB of RAM available to the VMs, though we’d be provisioning a maximum of 254GB of vRAM at best so that everything can fail over to one node, we would set up our Common Files CSV with 650GB of available storage.


We split up our storage for VHDX files into at least two Storage Spaces/LUNs. Each node would own one of the resulting CSVs.

We do this to split up the I/O between the two nodes. If we had just one 5.5TB CSV then all I/O for that CSV would be processed by just the owner node.

It becomes pretty obvious that having all I/O managed by just one of the nodes may present a bottleneck to overall storage performance. At the least, it leaves one node not carrying a share of the load.
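Checking and balancing CSV ownership is straightforward with the FailoverClusters module; a sketch with hypothetical disk and node names:

```
# See which node currently owns each Cluster Shared Volume
Get-ClusterSharedVolume | Format-Table Name, OwnerNode

# Hand one of the VHDX CSVs to the other node so each carries a share of the I/O
Move-ClusterSharedVolume -Name "Cluster Disk 3" -Node "HV-Node2"
```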

Performance Considerations

Okay, we have our storage configured as above.

Now it’s time to set up our workloads.

  • VM 0: DC
  • VM 2: Exchange 2013
  • VM 3-6: RDSH Farm (Remote Desktop Services)
  • VM 7: SQL
  • VM 8: LoB (Line-of-Business) apps, WSUS, File, and Print

Our highest IOPS load would be SQL, followed by our RDSH VMs and then our LoB VM. Exchange likes a lot more RAM than I/O.

When provisioning our VHDX files we would be careful to make sure our high IOPS VMs are distributed between the two CSVs as evenly as possible. This way we avoid sending most of our I/O through one node.

Why 650GB for Common Files?

Even though our VM memory files would take up about 254GB of that available storage, one also needs space for the configuration files themselves, though they are quite small, and additional space for those just-in-case moments.

One such moment is when an admin pulls the trigger on a snapshot/checkpoint. By default the differencing disk would be dropped into the Common Files storage location.

One would hope that monitoring software would throw up an alarm letting folks know that their cluster is going to go full-stop when that location runs out of space! But, sometimes that is _not_ the case so we need room to run our needed merge process to get things going again.

How do I know?

Okay, all of the above is just fine and dandy, which raises the following question: How do I really know how the cluster will perform?

No one client’s environment is like another. So, we need to make sure we take performance baselines across their various workloads and make sure to talk to LoB vendors about their products and what they need to perform.

We have a standing policy to build out a proof-of-concept system prior to reselling that solution to our clients. As a result of both running baselines with various apps and building out our clusters ahead of time we now have a pretty good idea of what needs to be built into a cluster solution to meet our clients’ needs.

That being said, we need to test our configurations thoroughly. Nothing could be worse than setting up a $95K cluster configuration that was promised to outperform the previous solution only to have that solution fall flat on its face. :(

Test. Test. Test. And, test again!

NOTE: We do _not_ deploy iSCSI solutions anywhere in our solution’s matrix. We are a direct attached storage (SAS based DAS) house. However, the configuration principles mentioned above apply for those deploying Hyper-V clusters on iSCSI based storage.

EDIT 2015-03-26: Okay, so fingers were engaged prior to brain on that first word! ;)

Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Thursday 12 February 2015

Hyper-V: Set Up An Internal Network For Host/Guest File and Service Sharing

Here’s a quick and simple How-To for setting up network communication between a Hyper-V host, whether Server or Windows 8/8.1, and any guests.

  1. Hyper-V Manager --> Right-click ServerName --> Virtual Switch Manager --> New --> INTERNAL.
    • Note the description for the internal vSwitch.
  2. Click APPLY and OK
  3. Assign the newly created vSwitch – Internal to the required VM(s)
    • image
  4. On the HOST: Start –> NCPA.CPL [Enter] –> Set an IPv4 IP Address
    • image
    • Use a different subnet for this network than anything else on the host’s NICs.
  5. On the Guest: Start –> NCPA.CPL [Enter] –> Set an IPv4 IP Address
    • image
    • Note the host and the guest are assigned an IP on the same subnet.
  6. On either the Host or the Guest open Windows Explorer
  7. \\IPv4Address\
  8. Authenticate
    1. To host: Either MachineName\Username or DomainName\Username
    2. To guest: MachineName\Username
  9. Copy and paste files and access services as expected
    • image 
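Steps 4 and 5 hinge on getting the addressing right: host and guest on the same subnet as each other, and that subnet different from anything on the host’s other NICs. The check can be sketched with Python’s `ipaddress` module; the addresses below are illustrative examples, not ones from this post:

```python
import ipaddress

def same_subnet(ip_a, ip_b, prefix=24):
    """True when both IPv4 addresses fall inside the same subnet."""
    net_a = ipaddress.ip_interface(f"{ip_a}/{prefix}").network
    net_b = ipaddress.ip_interface(f"{ip_b}/{prefix}").network
    return net_a == net_b

# Host and guest share the internal vSwitch subnet...
host_internal = "192.168.50.10"
guest_internal = "192.168.50.20"

# ...while the host's production NIC sits on a different subnet entirely.
host_production = "10.0.0.5"
```

Here `same_subnet(host_internal, guest_internal)` should hold, while `same_subnet(host_internal, host_production)` should not; if the internal vSwitch subnet overlaps a production subnet, traffic may route out the wrong NIC.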

If there is a need to work with UNC paths, HTTPS and certificates, and more, then make sure to set up a small VM running DNS and ADDS. One could also put DHCP on that VM to make addressing simple.

Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Chef de partie in the SMBKitchen ASP Project
Find out more at
Third Tier: Enterprise Solutions for Small Business

Monday 2 February 2015

Sample Client Phish Prevention E-mail

Here’s a sample of an e-mail we would send to our clients on a semi-frequent basis to help keep users wary and informed.

If there is ever a doubt about an e-mail claiming to represent anything from a bank to a newspaper NEVER click on any link in that e-mail.

Open a new browser session and navigate directly to the purported site and log on there.

In today’s day and age we need to be very mindful of clicking on anything.

For anything with a link in it, hover your mouse over the link and a small pop-up will appear showing the true destination:

The above snip came from hovering over the Unsubscribe link in the email below.

As a rule, NEVER click on a link in any e-mail, with perhaps the exception of the ones CONTACT sends out with software update links. Even then, hover your mouse over the link in her e-mail just in case someone is specifically targeting the firm!
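The hover check above catches the classic phish trick: link text that shows one URL while the underlying `href` points somewhere else. For an HTML e-mail body, the same mismatch can be flagged programmatically. This is a sketch using only the standard library; the domain names are made up for illustration:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect (href, visible text) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def suspicious_links(html):
    """Flag links whose visible text looks like a URL but differs from the real href."""
    parser = LinkExtractor()
    parser.feed(html)
    return [(href, text) for href, text in parser.links
            if text.startswith("http") and text not in href]
```

Feeding it `<a href="http://evil.example/x">http://bank.example/login</a>` would flag the link, while a link whose text matches its `href` passes clean.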

One more point: We’ve been seeing a LOT of Word and Excel based macro virus transmission files. Anyone sending something should be requested to do so in PDF format if at all possible. For folks on the not-so computer savvy side: click on FILE –> SAVE AS –> and change Save As Type to PDF.

While PDF files are not much safer than Office files, they are, at least at this point, marginally better. ;)

Happy Monday everyone. :)

Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Chef de partie in the SMBKitchen ASP Project
Find out more at
Third Tier: Enterprise Solutions for Small Business

Tuesday 27 January 2015

Business Guidance Pearls Mentoring Opportunity

Our blog post on Some IT Pro Business Guidance Pearls has generated a _lot_ of questions! Thank you for that. :)

So, how about this?

Third Tier has a special on a block of 3 hours that ends in a few days.

Drop in to the Third Tier Help Desk, register, and purchase a time block. From there, open a ticket: Business Guidance Pearls Mentorship.

I would pick up the ticket and get in touch about scheduling our time together.

The structure would be:

  • 1 Hour: Practice Assessment and Goals
  • 1 Hour: Goals Roadmap
  • 1 Hour: Goals Implementation Steps

I was very fortunate to have a former employer that worked very hard to teach me how to run an I.T. business. By “run” we’re talking about a lot more than just the bookkeeping and cash flow aspects.

Believe me when I say this: you’d not regret a minute spent. We’d look at the big picture right through to the details to facilitate growth in your I.T. Pro practice.

Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Chef de partie in the SMBKitchen ASP Project
Find out more at
Third Tier: Enterprise Solutions for Small Business

Thursday 15 January 2015

Some IT Pro Business Guidance Pearls

Here are some bits and pieces of business wisdom that I’ve gathered over the years. Much of my initial business formation came from my first employer out here, Larry MacDonald, while working for Logical Computer Company.

  • Keep a business journal (for me it’s my blog, Twitter, and forum helps)
  • Document everything (take pictures of everything with SmartPhone, use Snip in Windows ALL THE TIME)
  • Create build documentation for everything
    • We have builds for clusters, Exchange setups, Exchange migrations, SBS setups, SBS migrations, and more
  • Be consistent (build documentation helps)
  • Use Tasks in Outlook and on the phone to track everything
  • Be disciplined in tracking, responding, and being present to clients
  • Spend 10%-20% on R&D
  • Spend 10%-15% on lunches, dinners, and such with others
  • Put 10% away for a rainy day
  • Get involved with user groups or start one
  • Get a _good_ accountant and keep them

As far as recurring revenue:

  • IMSNHO, blended is better than full MSP
  • ~$60/User to $110/User for:
    • Server OS patch, Server App patch, and Microsoft App patch/install management
    • Desktop OS patch and Microsoft App patch management
    • A/V Management along with e-mail sanitation (we use ExchangeDefender)
    • Remote Server Monitoring and management included
    • On-site not included
    • Phone and e-mail support beyond 15 minutes not included
  • Offer backup rotation with quarterly full bare metal or hypervisor restore
    • $150-$250 per OS per month
    • Need a dedicated box for this (Intel S1400FP4 with 96GB ECC, and RAID 6 across spindles or SSDs)

  • Non-contract break/fix:
    • $250/Hr immediate response
    • $200/Hr 4 hour response
    • $175/Hr 24 hour response
  • Contract on above
    • 4 Hour response included
    • Immediate response at 1.5 rate
    • Time Blocks offered at discounted rates

*Response being an acknowledgement of the request/ticket.
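The rate card above can be expressed as a small lookup. Note two assumptions here: the contract base rate of $175/hr is hypothetical (the post does not state the contract hourly rate), and “immediate response at 1.5 rate” is interpreted as 1.5x that contract rate:

```python
# Break/fix rates as listed in the post (USD per hour by response tier).
BREAK_FIX_RATES = {
    "immediate": 250,
    "4_hour": 200,
    "24_hour": 175,
}

# Hypothetical contract hourly rate -- an assumption, not stated in the post.
CONTRACT_BASE_RATE = 175

def hourly_rate(response, on_contract=False):
    """Return the hourly rate for a given response tier.

    On contract: 4-hour response is included at the base rate and
    immediate response bills at 1.5x (interpreting "1.5 rate").
    """
    if on_contract:
        if response == "immediate":
            return CONTRACT_BASE_RATE * 1.5
        return CONTRACT_BASE_RATE
    return BREAK_FIX_RATES[response]
```

For example, a non-contract immediate call bills at $250/hr, while the same call under contract would bill at 1.5x the contract base rate.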

There are a lot of benefits over time, not just in financial stability: client relationships also become a lot more stable and long-term with support contracts in place versus a break/fix model.

When we move to the above model we soon discover which clients value our IT services and which don’t. What business that runs a fleet does not have a crew of mechanics to maintain that fleet? Why should IT infrastructure be any different?

A major plus is in the routines that we build up. Our schedule gets a lot more stable and predictable. While we are still at our clients’ beck and call, we now have an established set of boundaries as far as how, when, and where the help is provided.

We can have a few more evenings a week pursuing other things and _not_ looking at screens! ;)

Philip Elder
Microsoft Cluster MVP
Co-Author: SBS 2008 Blueprint Book

Chef de partie in the SMBKitchen ASP Project
Find out more at
Third Tier: Enterprise Solutions for Small Business