Saturday 22 December 2018

VS Code for PowerShell: IntelliSense Code Highlighting Not Working?

This is what VS Code looks like on _all_ but one of the machines used in the shop and elsewhere:


Note the great colour coding going on. That's IntelliSense working its magic to render the code in colour!

The errant machine looks like this:


Note the distinct lack of colour coding going on. :(

We removed and re-installed VS Code, the Extensions, and anything else we could, with no change.

After asking on a PowerShell list, a suggestion was made to check the VS Code colour theme.

Sure enough, on the problematic one:


VS CODE: Dark (Visual Studio) Colour Theme

While on a functioning setup:


VS CODE: Dark+ (default dark)

For whatever reason, the Dark (Visual Studio) colour theme seems to break things.
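If switching via the Colour Theme picker doesn't stick, the theme can also be set directly in the user settings file. A sketch, assuming the default user-level settings.json location (the exact theme name string can vary by VS Code version):

```json
// File --> Preferences --> Settings, or edit %APPDATA%\Code\User\settings.json directly.
// Switch to the default dark theme that renders PowerShell colouring correctly:
{
    "workbench.colorTheme": "Default Dark+"
}
```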

Now that we have that figured out, we can move on to coding! :D

Merry Christmas and a Happy New Year to everyone!

Hat Tip: Shawn Melton MVP

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Wednesday 12 December 2018

Intel Technology Provider for 2019

We just received word of our renewal for the Intel Technology Provider program:


We've been system builders since the company began in 2003, and I was building systems for more than a decade before that!

One of the comments that gets made on a somewhat frequent basis is something along the lines of being a "Dinosaur". ;)

Or, this question gets asked quite a lot, "Why?"

There are many reasons for the "Why". Some that come off the top are:

  • We design solutions that meet very specific performance needs such as 150K IOPS, 500K IOPS, 1M IOPS and more
  • Our solutions get tested and thrashed before they ever get sold
    • We have a parts bin with at least five figures' worth of broken vendors' promises
  • We have a solid understanding of component and firmware interactions
  • Our systems come with guaranteed longevity and performance
    • How many folks can say that when "building" a solution in a Vendor's "Solution Tool"?
  • We avoid the finger pointing that can happen when things don't live up to muster

The following is one of our lab builds: a two node Storage Spaces Direct (S2D) cluster utilizing 24 Intel SSD DC S4600 or D3-S4610 SATA series SSDs, flat, meaning no cache layer. The upper graphs are built in Grafana, while the bottom left is Performance Monitor watching the RoCE (RDMA over Converged Ethernet via Mellanox) traffic and the bottom right is the VMFleet Watch-Cluster PowerShell output.
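The Performance Monitor view of the RoCE traffic can also be pulled from PowerShell. A minimal sketch (Windows only; the RDMA Activity counter set and its instance names depend on the installed NIC and driver, so verify with Get-Counter -ListSet 'RDMA Activity' first):

```powershell
# Watch RDMA traffic on the Mellanox adapters, similar to the PerfMon view.
# Counter and instance names are environment-dependent assumptions.
Get-Counter -Counter '\RDMA Activity(*)\RDMA Inbound Bytes/sec',
                     '\RDMA Activity(*)\RDMA Outbound Bytes/sec' -Continuous |
    ForEach-Object {
        $_.CounterSamples | Select-Object InstanceName, Path, CookedValue
    }
```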


We just augmented the two node setup with 48 more Intel SSD D3-S4610 SATA SSDs for the other two nodes and are waiting on a set of Intel SSD 750 series NVMe PCIe AiCs (Add-in-Cards) to bring our 750 count up to 3 per node for NVMe cache.

Why the Intel SSD 750 Series? They have Power Loss Protection built-in. Storage Spaces Direct will not allow any cache device to hold data in the drive's local cache if that cache is volatile. What becomes readily discoverable is that writing straight through to NAND is a very _slow_ process relative to having that cache power protected!

We're looking to hit 1M IOPS flat SSD and well over that when the NVMe cache setup gets introduced. There's a possibility that we'll be seeing some Intel Optane P4800X PCIe AiCs in the somewhat near future as well. We're geared-up for a 2M+ run there. :D

Here's another test series we were running to saturate the node's CPUs and storage to see what kind of numbers we would get at the guest level:


Again, the graphs in the above shot are Grafana based.

The snip below is our little two node S2D cluster (E3-1270v6, 64GB ECC, Mellanox 10GbE RoCE, 2x Intel SSD DC S4600 SATA cache, 6x 6TB HGST SATA) pushing 250K IOPS:


We're quite proud of our various accomplishments over the years with our high availability solutions running across North America and elsewhere in the world.

We've not once had a callback asking us to go and pick-up our gear and refund the payment because it did not meet the needs of the customer as promised.

Contrary to the "All in the Cloud" crowd there is indeed a niche for those of us that provide highly available solution sets to on-premises clients. Those solutions allow them to have the uptime they need without the extra costs of running all-in the cloud or hybrid with peak resources in the cloud. Plus, they know where their data is.

Thanks for reading!

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Tuesday 11 December 2018

OS Guide: Slipstream Updates Using DISM and OSCDImg *Updated

We have found out that we need to have the May Servicing Stack Update (SSU) KB4132216 _and_ the latest SSU which is currently KB4465659 in the Updates_WinServ folder we drop the Cumulative Update into for the Windows Server 2016 slipstream run.
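Order matters here: the SSUs must be applied to the mounted image before the Cumulative Update will take. A minimal sketch of that part of the run using the DISM cmdlets (the paths and folder layout below are placeholders, not the script's actual values):

```powershell
# Mount the Windows Server 2016 image and apply updates (Windows only; elevated).
$Mount = "C:\Temp\Mount"    # placeholder path
Mount-WindowsImage -ImagePath "C:\Temp\install.wim" -Index 1 -Path $Mount

# SSUs first (KB4132216 and the latest SSU), then the Cumulative Update.
# Sorting by name works when the SSU files sort ahead of the CU; rename if needed.
Get-ChildItem "C:\Temp\Updates_WinServ\*.msu" | Sort-Object Name | ForEach-Object {
    Add-WindowsPackage -Path $Mount -PackagePath $_.FullName
}

Dismount-WindowsImage -Path $Mount -Save
```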


Note that the current version of the script points to Server 2019. Please use that as a base to tweak and create a set of folders for Windows Server 2016 and Windows 10 updates.

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Thursday 6 December 2018

Error Fix: Trust Relationship is Broken

Here's a quick post on fixing a broken trust situation when the local administrator username and password are a known commodity.

On Windows 7:

  1. Windows Explorer
  2. Right click My Computer/This PC --> Properties
  3. Change settings for Computer Name
  4. Change button
  5. Domain: Setting Now: DOMAIN.LOCAL
    1. Change to DOMAIN (delete .Local)
    2. Credential
  6. Reboot

That process will fix things for Windows 7. If PowerShell is up to date on the Windows 7 machine, or for any newer operating system, use the following instead:

  1. Log on with local admin user
  2. Reset-ComputerMachinePassword -Credential DOMAIN\DomainAdmin
  3. Log off
  4. Log on with domain user account

That's it.
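The PowerShell path above can be sketched out as follows; Test-ComputerSecureChannel is a related built-in that can verify the machine's secure channel and, with -Repair, sometimes fix it in place:

```powershell
# Run in an elevated session while logged on as the local admin (Windows only).
# Check whether the secure channel to the domain is healthy:
Test-ComputerSecureChannel -Verbose

# Reset the machine account password against a DC, as in step 2 above:
Reset-ComputerMachinePassword -Credential (Get-Credential DOMAIN\DomainAdmin)

# Or, attempt an in-place repair of the secure channel:
Test-ComputerSecureChannel -Repair -Credential (Get-Credential DOMAIN\DomainAdmin)
```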

If you know any other methods, especially for situations where the local admin username and password are unknown or all local admin accounts are disabled, feel free to comment or ping!

Thanks for reading. :)

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Monday 3 December 2018

A word of caution: Verify! Everything!

We get to work with a whole host of clients and their industries but as a contractor we also get to work with a wide variety of IT Pros and IT Companies.

Many times we get involved in a situation that's an outright pickle.

Something has gone sideways and the caller is looking for some guidance, some direction, and a bit of handholding because they are at a loss.

Vendor Blame Game

Some of those times the caller is in the middle of a tongue wagging session between a set of vendors blaming the other for the pickle.

We were in that situation back in the day when a Symantec BackupExec (BUE) solution failed to restore. The client site had two HP DAT tape libraries with BUE firing "All Good" reports.

We found out those reports were bad when the client's main file server went blotto.

We were in between the storage vendor and their products, Symantec, and HP. It was not a pretty scene at all. In the end, it was determined _by us_ that BUE was the source of the problem because it did not do any kind of verify on the backups being written to tape despite the setting being there to do so.

We were fortunate that we had multiple redundant systems in place and managed to get most of the data back except one partner's weeks' worth of work. We had to build out a new domain though.

So, why the blog post?

Because, it's still happening today.

Verify, Verify, and Verify Again

We _highly suggest_ verifying that all backup products and backup services are doing what they say they are doing.

If the service provider charges for a test failover then do it anyway. Charge the fee back to the client, because once the process has run, successful or not, things are in a better place either way.

Never, _ever_ walk into a disaster recovery situation without having tested the products that are supposed to save the client's business. Period.

Yeah, there are times where something may happen before that planned failover. That's not what we're talking about here.

What we are after is testing to make sure that the vendor's claims are indeed true and that the solution set is indeed working as planned.

The last place we need to find out that our client's backups are _not_ working is when their servers, virtual machines, or cloud services vendors have gone blotto.

Out of the Cloud Backup

We always make sure we have a way to back up any cloud vendor's services to our client's site. It just makes sense.

Our trust is a very fickle thing.

When it comes to our client's data we don't give our full trust to any vendor or solution set.

We _always_ test the backup and recovery processes so that we're not blindsided by things not going as planned or any "hidden fees" for accessing the client's data in a disaster recovery situation.

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Friday 30 November 2018

Some Thoughts on the Starwood/Marriott Reservations Database Breach

Note: This post will _not_ be a happy one.

First: The announcement page: Starwood Guest Reservation Database Security Incident Marriott International

That page is garbage, rubbish, and so much more. It exemplifies today's epidemic of spin instead of truth and responsibility for an error that harms others.



"Marriott values our guests and understands the importance of protecting personal information."

That is a complete crock of male bovine excrement.

Especially when we look to the following:


"After receiving the internal security alert, we immediately engaged leading security experts to help us determine what occurred."

Okay, so just when did that security alert come in?


"On September 8, 2018, Marriott received an alert from an internal security tool regarding an attempt to access the Starwood guest reservation database."

Cool, so things look like they got caught really quick right? That seems to be the way this article is written right?



"Marriott learned during the investigation that there had been unauthorized access to the Starwood network since 2014."

Let's rephrase all of the above, shall we?

Marriott: We let unauthorized access to our reservation database happen for FOUR YEARS.

Yeah, "We at Marriott/Starwood really care about your data/PII." Really. All said with a smile.


In our case, the CC used for our various stays has expired very recently. So, we should be protected that way. And, to further protect things we use KeePass with unique passwords for any and all online resources with unique e-mail addresses set up for each of them (we're doing this more and more).

Suffice it to say, if Marriott really cared about risk to our PII (Personally Identifiable Information), the reservations system would have been segmented, with designated access and no Internet access. We've been applying our knowledge of network setup to segment our clients' networks for years. Especially with PCI scans being somewhat generic and different depending on which org is running the scans.

Oh, and note that credit card information was stored in there too. How in the world did that pass muster with PCI scans?


LMHYWT (Let me help you with that) " … two components needed to decrypt payment card numbers and Marriott not able to rule out both were taken."

Tis a sad day indeed when spin and lawyer speak win out over a true "Mea Culpa" we really *insert expletive here* up.

This Marriott incident is a gross breach of trust and it is time companies be held liable for such.

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Tuesday 13 November 2018

New PowerShell Guides and DISM Slipstream Process Updated

We've added two new PowerShell Guides:

We've also updated the page with some tweaks to using DISM to update images in the Install.WIM in Windows Server. The process can also be used to slipstream both Servicing Stack Updates (SSUs) and Cumulative Updates (CUs) for both Windows Server and Windows Desktop.

Thanks for reading! :)

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Tuesday 6 November 2018

Apple MacBook Pro: Upgrading OS X Snow Leopard 10.6 to El Capitan 10.11 with 2 Factor Authentication On

Wow, what an adventure.

We have a MacBook Pro 13" early 2009 laptop here in the shop that has been sitting idle for a while.

We installed a new SSD in the unit and bumped the RAM up to 8GB.

Then, on to installing a fresh copy of Snow Leopard 10.6 via the installer DVD.

We needed to use the Disk Utility in the installer to set up a partition prior to being allowed to install the OS.

Once in, we went through the updates process.

Then, on to upgrading OS X to El Capitan 10.11.

What a pain. Because we are on what is essentially an ancient OS version all of the apps were uncooperative due to the 2 Factor Authentication (2FA) that is enabled on our Apple ID.

Safari would not work with Apple's sites for authentication either due to SSL compatibility issues.

Searching meant using buckshot terms to try and figure out exactly what needed to be done to allow the upgrade to proceed in the App Store.

The long and short of it found here is to do the following:

  1. Open Safari and navigate to this Apple Support page: How to upgrade to OS X El Capitan
  2. Scroll down to Step 4 and click on the Get El Capitan link to bring up the App Store
  3. Click the Get button in the store
  4. On a trusted device such as an iPhone
    1. Tap into Settings --> Your Name --> Password & Security
    2. Tap on the Get Verification Code at the bottom of that page
  5. On the MacBook Pro enter the Apple ID and the Password
    1. YourAppleID@YourDomain.Com
    2. YourAppleIDPassword123456
      • 123456 = Verification Code

The verification code gets tagged on to the password at the end as above.

It's a monster weighing in at 6.21GB so a good fast connection should be used to download this one!

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Friday 2 November 2018

Veeam Error: Unable to allocate processing resources. Error: On-host proxy [ServerName] requires upgrade before it can be used.

We rebooted one of our Hyper-V hosts that has a number of VMs hosted on it.

The Veeam setup was just completed with the VMs set up right after.

The host was having some network difficulties as it turned out that one of the two ports in the host LBFO Management team was plugged into the VM's switch instead of our setup network.

Once corrected, and a reboot later, Veeam was throwing an error due to "Server Not Found".

We had set up the backup based on the IP address the server had. Lo and behold, that address had changed after the reboot.

So, we set up a new Managed Server based on the new IP and updated the Backup Job.

We fired the backup but it failed:

11/2/2018 4:22:55 PM :: Unable to allocate processing resources. Error: On-host proxy [ServerName] requires upgrade before it can be used. 

After some searching on Veeam's forums this post came up: On-host proxy requires upgrade

After reading through the forum thread it was the very last post that got things going for us:

  1. Restart Veeam Backup Service
  2. Restart Veeam Broker Service
  3. Fire the backup
  4. Success!
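Steps 1 and 2 can be scripted on the Veeam server; a sketch, assuming the default service names VeeamBackupSvc and VeeamBrokerSvc (those names are our assumption, so list them first to confirm):

```powershell
# Confirm the Veeam service names on this server (Windows only; elevated):
Get-Service -Name Veeam* | Select-Object Name, DisplayName, Status

# Restart the Backup and Broker services, then re-run the backup job:
Restart-Service -Name VeeamBackupSvc -Force
Restart-Service -Name VeeamBrokerSvc -Force
```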

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Monday 15 October 2018

Server 2019 and ADDS: FRS Not Supported - Migrate to DFSR

We went to DCPromo a newly stood up Windows Server 2019 VM into an existing domain and it would not let us do so.

Suffice it to say, we needed to migrate File Replication Service (FRS) to Distributed File System Replication (DFSR).

The process is actually quite simple so long as Active Directory and replication are healthy.

Ned Pyle has an article that has three methods in it:

We followed method one as a just-in-case during business hours. No hiccups were experienced and once done:


A simple way to keep an eye on things is to open File Explorer and plug the following in the Address Bar: \\Domain.Com

So long as DNS is healthy and SYSVOL and NETLOGON are there the process is humming along as expected.
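For reference, the migration itself is driven by dfsrmig.exe on the PDCe, stepping the global state from Start through Prepared and Redirected to Eliminated; in broad strokes:

```powershell
# Run on the PDC emulator as a Domain Admin (Windows only).
dfsrmig /getglobalstate       # confirm the current state
dfsrmig /setglobalstate 1     # Prepared
dfsrmig /setglobalstate 2     # Redirected
dfsrmig /setglobalstate 3     # Eliminated - FRS is removed; no going back
dfsrmig /getmigrationstate    # repeat until all DCs reach the target state
```

Wait for /getmigrationstate to report all domain controllers consistent before moving to the next state.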

NOTE: Make sure a backup is taken of the PDCe and a System State as well!

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Friday 14 September 2018

WARNING: Edge (Sync) Ate All Favourites - Favourites Gone!

We've been doing _a lot_ of work setting up a Grafana/InfluxDB/Telegraf monitoring and history system lately.

The following is our custom Kepler-64 Storage Spaces Direct 2-node cluster being tracked in a Grafana Dashboard that we've customized:


Grafana graphs, PerfMon RoCE RDMA Monitoring, and VMFleet Watch-AllCluster.PS1 (our mod)

Needless to say, a substantial number of links to various sites about running the above setup on both Windows and Ubuntu were lost after something seemingly went wrong. :(

Edge (Sync ?) Hiccups then Pukes

As we were quite busy and Edge was being very uncooperative, we started using Firefox for most of the browsing throughout the day.


Edge: Favourites Bar Shortcut Count Drastically Trimmed

The first response once things started misbehaving should have been to use the Edge Favourites Backup feature and get them out!

When we opened Edge later in the day this is what we were greeted with:


Edge: Favourites Bar Shortcuts Gone!

As a small caveat, one of the behaviours with Edge has been either to go unresponsive, requiring a Task Manager kill, or, when another Edge browser session was opened, to not allow Paste or right click in the Address Bar, nor show the Favourites, History, or other buttons.

So, Task Manager Kill? Nope. They were not there.

Log off and back on again? Nope.

Reboot the machine? Nope.

Both the Favourites Bar content and _all_ of our Favourites were gone.

Back Them Up!

During the above day when things started to misbehave the next step _should have been_ to grab one of the tablets they were syncing to and run a backup process without allowing the tablet to connect to WiFi and sync! Ugh, hindsight is 20/20. :P

And, much like the advice we always start out with when training users on the use of Office products: the very first step is to _save_ their work before doing anything, and, once the work is done, to _save_ it again. Well, that advice is something we will be applying from now on with regards to Edge and Favourites.

After a big day of Favourites building a backup should be taken.

So, Where Are Those Favourites?

Why, oh why do software vendors move my cheese?

In this case, the original location for those Favourites when using IE back in the day was OneDrive. If there was a hiccup somewhere between the number of different clients OneDrive Sync would append the name of the machine to the conflicting shortcuts and we'd be left with doing a quick Search & Destroy post mortem. No biggie. Not so with Edge.

Thanks to Michael B. Smith via a list we were pointed to:

The location that Edge stores those favourites is here:

  • Stored under %LOCALAPPDATA%\Packages\Microsoft.MicrosoftEdge_8wekyb3d8bbwe\AC\MicrosoftEdge\User\Default\DataStore\Data\nouser1\120712-0049\DBStore
  • Database Name: Spartan.edb

Change the somewhat hidden dropdown to Favourites and:


Some Edge Favourites Post Backup Import
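With the database location known, a quick dated copy out to OneDrive makes a cheap safety net. A sketch (close Edge first so the database isn't locked; the 120712-0049 folder name varies per machine, hence the wildcard, and the %OneDrive% environment variable is assumed to be set):

```powershell
# Back up Edge's favourites database with a dated file name (Windows only).
$Source = "$env:LOCALAPPDATA\Packages\Microsoft.MicrosoftEdge_8wekyb3d8bbwe\AC\MicrosoftEdge\User\Default\DataStore\Data\nouser1\*\DBStore\Spartan.edb"
$Dest   = "$env:OneDrive\EdgeBackup\Spartan-$(Get-Date -Format 'yyyy-MM-dd-HHmm').edb"

New-Item -ItemType Directory -Path (Split-Path $Dest) -Force | Out-Null
Copy-Item -Path $Source -Destination $Dest
```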

What Does All This Mean? Edge Bug

It means that there's a serious bug somewhere in the Edge setup with Data Loss being one possible result.

It means that, for now, one needs to run the Edge Export process to back those Favourites up after a serious day of adding to that list!

  1. In Edge click the Favourites/History/Downloads button
    • image
  2. Click the Favourites Star if they are not shown as above
  3. Click the Gear
  4. Click the Import from another browser button
    • image
  5. Click the Export to file button
    • image
  6. Choose a location and give the file a name
    • We drop ours in OneDrive to keep it backed up
    • image
    • Naming convention: DATE-TIME-Location.HTML

The above process will at least help mitigate any choke in the Edge Favourites setup that may happen.

Warning Note

IMPORTANT NOTE: Edge does not have any kind of parsing structure for the import process.

We cannot pick and choose what to import, and, if there are still Favourites _in the database_ they may disappear/be deleted when importing!

If the bulk of the Favourites are still there then an alternative to a wholesale import would be to open the backup .HTML page and click on the needed links and Favourite them again. *sigh*


What does all of this mean?

Considering that we've lost data there's a very serious problem here. In our case, we're talking about a very long and full day's worth of bookmarks/favourites gone. :(

For now, it means back those favourites up _a lot_ when doing critical work that requires knowledge keeping!

Oh, and we need to set aside some time to delve into the NirSoft utility linked to above to see if there are features in there to help mitigate this situation.

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Monday 10 September 2018

Security: RBC Royal Bank: Best laid plans of mice and men

We did some banking work with our bank, RBC Canada. In the process, they sent us a few "Secure Document Access" requests, with the agent providing the password via a phone conversation.

When the first one came in, it was a bit of a system shock.


RBC Royal Bank "Secure Message"

The highlight is ours. Huh?!?

Given the nature of today's phishing attacks a phone call was very quick to happen to our contact after receiving the above to verify its legitimacy.

We received a number of subsequent "secure" e-mails using the same method.

The encryption process we use, and our clients use, on the ExchangeDefender (xD) system is a link to an Internet property owned by xD, with the appropriate SSL properties in place to assure the recipient that they are in the right place. That's after we indicate to the recipient, in a prior e-mail, the upcoming process to obtain the encrypted content.

The RBC Royal Bank method is close to that but why the .HTM attachment requirement? That's just plain weird. :S

Sure enough, this is what was in an Inbox here this morning:


Phishing Message

It's a poorly crafted phish attempt at best.


E-mail Header

The trail is pretty clear as far as where it came from and the "how" looks to be fairly clear as well.

All it would have taken was a bit better in the way of timing on the phisher's part and a bit of distraction on our part and BOOM we could have been hooked. :(

RBC Royal Bank Canada needs to change their secure document transmission methodologies please.

And, Microsoft, please give us built-in DKIM abilities for on-premises Exchange instead of keeping that to online properties only. That's not polite in the least.*See Note Below

Outlook Header How-To

Outlook users, here's how to get the header information shown above:

  1. Double click on the e-mail
  2. Click the Message tab
  3. Click the break-out button on the bottom right of the Tags category
    • image
  4. Click anywhere in the small information window
    • image
  5. Keyboard:  CTRL+A then CTRL+C
  6. Click Close and close the e-mail
  7. Paste the content into the destined app (we use Notepad)

After examining a few headers it gets pretty easy to identify the legit and illegitimate messages hitting our Inbox every day. While the process may be a bit time consuming, figuring out whether something is legit or not could be the difference between DELETE and an encryption event or Inbox/Contacts harvesting.

Happy Monday everyone and thanks for reading! :)

2018-09-10 EDIT: Oops, that Microsoft sentence should have been CUT along with the other sentences that were in a previous paragraph. Suffice it to say, we've been working on DMARC/DKIM requests and discovered that Microsoft seems to be holding DKIM off from on-premises Exchange. Thus, we need to go third party to get to use that business critical security feature. :(

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Friday 7 September 2018

Surface Pro 4 or Surface Book phantom touches

A couple of days ago one of our docked Surface Pro 4 (SP4) units started to experience what looked like an ongoing touch in the middle between the two screens near the bottom.

A reach over to the SP4 and a touch to figure out where the phantom touches were happening brought about the discovery that it was near the bottom righthand corner of the touch screen.


Okay, so pull it off the dock, give it a good screen cleaning, and for good measure blow it out since it's been sitting there for a while.

It seemed okay but the problem came back.

A reboot would not fix it either.

So, into the Microsoft Store we went today. The tech took it into the back for a few minutes, then came back and said it was good to go.

What was done? They have a calibration tool there at the store to recalibrate the touch screen.

Since that was done the SP4 has been behaving. We'll see, but let's hope it's all good.

As it turns out, the task run by the Microsoft Tech is in the above KB article.

So, if experiencing phantom touches download and run the calibration tool found in that article.

UPDATE 2018-09-10: They came back. :(

As an FYI, this time around, instead of hugging one side of the screen, we had a line of touches running across the screen.

So, instead of heading back to the store we ran through the hotfix e-mail process and ran the utility.

The phantom touches disappeared again. We'll see for how long.

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Wednesday 5 September 2018

When Software Bugs can Kill: Dodge RAM 1500 Cruise Control Bug

A while back there was a scary moment where the cruise control in our 2016 RAM 1500 refused to release via any of the buttons on the steering wheel.

In that moment, rushing up on a vehicle fairly far ahead of us and going over a bridge, there were only a few options left:

  1. Brakes release the cruise
  2. Try and Power Down & Brake
  3. Ditch @ 100KM/H on a bridge bank

Fortunately, the cruise released when the brakes were hit, we were able to avoid rear-ending the person in front of us, and in fact the cruise system crashed right after the brakes were applied.

When addressing the complaint with Dodge and the dealer there's the, "Oh, no we didn't realize there was a problem there" type of response.

Well, obviously there was a problem that needed to be addressed, as the below letter shows.


So, being that the problem could be life threatening we took the truck in to have it flashed as soon as the above showed up and the dealer had a spot open.

Here we are a few weeks or more from the day the flash was done and the behaviour happened again this morning. :S

Fortunately, at that moment there was less panic and more "Oh, hopefully the brakes work to kick it off this time" happening. ;)

A call into the dealer and the service tech indicated they'd have a conversation with the service manager, since the tech was not sure whether there would be a "re-flash" of the module or whether the problem would need to head further up the Fiat Chrysler Canada food chain.

The Customer Service Wall

It's understandable that companies try and hide their mistakes. Yet, time and time again it's been shown that companies that are up-front about mistakes made, and the changes made to mitigate or eliminate them happening again, tend to do quite well. It seems the lawyers, and the ridiculous court cases that have forced the issue, tend to win the argument to keep things relatively hidden.

For the end-user it's very frustrating to face that "Customer Service Wall" with virtually no hope of getting anywhere beyond the person standing in front of that wall. In this case, it's the "Service Technician" that is in front of the Wall.

The reality is that the Customer Service Wall is a very well designed system to keep the person/user who is essentially paying the bills as far away from the manufacturer/vendor as is possible.

The same is true of voice recognition systems that users/customers hit as soon as they call "Customer Service".

There is no lack of irony in the above two sentences.


Sadly, the situation with the Dodge software bug, which can be deadly, will remain a mystery for a little while longer. With a bit of awareness, and a good dose of caution, we will be able to mitigate the bug's release block and not hit anyone, or the ditch.

But for now, we wait until the folks that do the programming can figure out where the bug really lies and hopefully fix it … without introducing an even deadlier bug.

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Wednesday 29 August 2018

Legacy Windows XP for Industrial Machine Access and Management and Accounting Apps

There are quite a few systems out there that still use Windows XP or an earlier operating system to run the equipment.

So, what do we do when we need to get access to one of these kinds of machines?

Well for one, we make sure they are completely isolated and not accessible from anywhere except perhaps one secure jump point.

For another, when we do need to access the legacy system here's one method that allows for maintaining the legacy system's isolation:

  1. Enable RDP Inbound on the legacy system (Windows)
  2. Set up a vanilla Windows 7 Service Pack 1 VM that is set to not update
    • This would be our jump point
    • The Win7 VM would be left off except when needed
    • If need be, set this VM up on a laptop that can be plugged in to the legacy system's network
  3. Set up any needed tools on the Win7 VM
    • RMM, Remote Desktop Shadow/Sharing tools, Firefox (leave the base level IE in place), any needed tools
  4. Log on to the legacy Windows XP via RDP
    • Make sure Drive Redirection is enabled
    • Use Drive Redirection to transfer any files that won't go via Copy & Paste (Clipboard)
  5. Use the Win7 VM as the default work-from desktop
  6. When done, shut the Win7VM down
    • Unplug from the legacy network when done if using a laptop with the Win7VM
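Step 1, enabling RDP inbound on the legacy Windows XP box, can be done from a local command prompt; a sketch using the XP-era syntax (the old netsh firewall context, not the newer advfirewall):

```powershell
# Allow Remote Desktop connections (run locally on the XP box):
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f

# Open TCP 3389 in the XP firewall:
netsh firewall add portopening TCP 3389 "Remote Desktop" ENABLE
```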

For legacy systems that require some form of *NIX, the above process can be used with a vanilla install of the needed distro, kept offline until needed.

The principle at work here is to keep the legacy systems isolated from everything, especially the Internet, and to keep any jump points, which run an intermediary operating system too far back to keep safe and secure, offline until needed.

As an FYI, we keep one or two legacy Windows 7 and Windows XP VMs in an offline state with legacy accounting applications installed as a just-in-case. There are times where a firm may need to go way back for a client file.

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Wednesday 15 August 2018

PowerShell Paradise: Installing and Configuring Visual Studio Code (VS Code) and Git

It was Ben Thomas that gave me the prodding to look into Visual Studio Code (VS Code) for PowerShell coding and troubleshooting. I had an error in a PowerShell step that puzzled me. It turned out to be auto-replaced hyphens that got introduced into that PowerShell step somewhere along the line, since I keep (kept) everything in OneNote.

There are several reasons why coding in any form are difficult for me, suffice it to say it took a few days to get over the, "Oh no, yet something else to learn" initial reaction to that prodding.

With a little downtime late last week, the opportunity presented itself to at least do a cursory search and skim of info on VS Code and PowerShell.

What I saw amazed me so much that the "time to learn" objection became a non-issue.

First, download VS Code but don't install it right away.

Next, download Git for Windows (there are other versions).

Now, there's a bit of a Catch-22 in this process as Git looks for VS Code and VS Code looks for Git.

Install VS Code and Git

Install order to make things simple:

  1. Install VS Code
    • image
  2. Run VS Code and ignore the Git prompt
    • image
  3. Install Git and choose VS Code
    • image
    • This is where things can get weird: if VS Code has not been started at least once, the Next button will not light up!
    • If that happens, leave this window open, start VS Code, ignore the prompt, and close it.
    • Hit the Back button and then the Next button again; the Next button on this window should now be lit up.
  4. We chose Use Git from the Windows Command Prompt
    • image
  5. On the next window, choose Use the OpenSSL Library
  6. Checkout Windows-style, commit Unix-style line endings
    • image
  7. Use MinTTY (the default terminal of MSYS2)
    • image
  8. We left the defaults for Configuring extra options
    • Enable file system caching
    • Enable Git Credential Manager

Once Git has been installed the next thing to do is to start VS Code and it should find Git:


Initialize the Git Repository

A few more steps and we'll be ready to code a new PowerShell script, or transfer in from whatever method we've been using prior.

  1. Create a new folder to store everything in
    • We're using OneDrive Consumer as a location for ours to make it easily accessible
  2. CTRL+SHFT+E --> Open Folder --> Folder created above
    • VS Code will reload when the folder has been chosen
  3. CTRL+SHFT+G --> Click the Initialize Repository button
    • image
  4. Confirm the folder we opened as the default location when prompted
  5. Git should be happy
    • image
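For reference, the same repository initialization can be done from the command line; a minimal sketch (the folder name is illustrative, and the inline identity flags are only needed if Git's user name and e-mail haven't been set yet):

```shell
# Create the working folder and turn it into a Git repository
mkdir Scripts
cd Scripts
git init

# First commit; identity flags shown inline so the sketch works on a fresh machine
git -c user.name="Your Name" -c user.email="you@example.com" commit --allow-empty -m "Initial commit"
```

This produces the same hidden .git folder that VS Code's Initialize Repository button creates.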

Now, we're almost there!

VS Code Extension Installation

The last steps are to get the PowerShell Extension installed and tweak the setup to use it.

  1. CTRL+SHFT+X to open the Extensions pane
  2. Type PowerShell in the Marketplace search
  3. Click the little green Install button then after the install the little blue Reload button
    • image
  4. Additional VS Code Extensions we install by default
    1. Better Comments
      • Allows for colour coded # ! PowerShell Comments
    2. Git History
      • Allows us to look at what's happening in Git
    3. VSCode-Icons
      • Custom icons in VS Code

VS Code Quick Navigation

Once done, the following keystrokes are the first few needed to get around, and then there's one more step:

  • Source Control: Git: CTRL+SHFT+G
  • Extensions: CTRL+SHFT+X
  • Folder/File Explorer: CTRL+SHFT+E
  • User/Workspace Settings: CTRL+,

Create the Workspace

And finally, the last step is to:

  1. File
  2. Save Workspace As
  3. Navigate to the above created folder
  4. Name the Workspace accordingly
  5. Click the Save button

Then, it's Ready, Set, Code! :)

Note that the PowerShell .PS1 files should be saved in the Workspace folder and/or subfolders to work with them.

To start all new files in the PowerShell language by default, add the following to User Settings:

  1. CTRL+,
  2. Add "files.defaultLanguage": "powershell"
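In the JSON view of User Settings (settings.json), that setting is a one-line fragment:

```json
{
  "files.defaultLanguage": "powershell"
}
```

With it in place, every new untitled file gets PowerShell syntax highlighting and IntelliSense immediately.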

One of the beauties of this setup is the ability to look at various versions of the files, much like we can with SharePoint and Office files, to compare the changes made over the history of the PowerShell Code.

Another is the ability to see in glorious colour!


Thanks to Ben Thomas' challenge, PowerShell is already so much easier to work with!

2018-08-15 EDIT: Oops, missed one important step.

  1. Open Git GUI
  2. Click Open Existing Repository
  3. Navigate to the Workspace folder with the .git hidden folder
  4. Open
  5. Set the User's name and E-mail address for both settings
    • image
  6. Click Save

Git should be happy to Commit after that! :)

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Friday 10 August 2018

Intel/LSI/Avago StorCli Error: syntax error, unexpected $end FIX

We're working with an Intel setup and needed to verify the setup on an Intel RAID Controller.

After downloading the command line utilities, since we're in Server Core, we hit this:

C:\Temp\Windows>storcli /cx show

syntax error, unexpected $end

     Storage Command Line Tool  Ver 007.0415.0000.0000 Feb 13, 2018

     (c)Copyright 2018, AVAGO Technologies, All Rights Reserved.

help - lists all the commands with their usage. E.g. storcli help
<command> help - gives details about a particular command. E.g. storcli add help

List of commands:

Commands   Description
add        Adds/creates a new element to controller like VD,Spare..etc
delete     Deletes an element like VD,Spare
show       Displays information about an element
set        Set a particular value to a property
get        Get a particular value to a property
compare    Compares particular value to a property
start      Start background operation
stop       Stop background operation
pause      Pause background operation
resume     Resume background operation
download   Downloads file to given device
expand     expands size of given drive
insert     inserts new drive for missing
transform  downgrades the controller
/cx        Controller specific commands
/ex        Enclosure specific commands
/sx        Slot/PD specific commands
/vx        Virtual drive specific commands
/dx        Disk group specific commands
/fall      Foreign configuration specific commands
/px        Phy specific commands
/[bbu|cv]  Battery Backup Unit, Cachevault commands
/jbodx      JBOD drive specific commands

Other aliases : cachecade, freespace, sysinfo

Use a combination of commands to filter the output of help further.
E.g. 'storcli cx show help' displays all the show operations on cx.
Use verbose for detailed description E.g. 'storcli add  verbose help'
Use 'page=[x]' as the last option in all the commands to set the page break.
X=lines per page. E.g. 'storcli help page=10'
Use J as the last option to print the command output in JSON format
Command options must be entered in the same order as displayed in the help of
the respective commands.

What the Help does not make clear, and what our stumbling block was, is what exactly we were missing.

It turns out that the correct command is:

C:\Temp\Windows>storcli /c0 show jbod
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Windows Server 2016
Controller = 0
Status = Success
Description = None

Controller Properties :

Ctrl_Prop Value
JBOD      ON

CFShld-Configured shielded|Cpybck-CopyBack|CBShld-Copyback Shielded

The /cx switch needs the controller's ID number in place of the x, e.g. /c0 for the first controller.
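With the controller ID in place, a few other common queries follow the same pattern. A sketch of the ones we reach for most (output depends on the hardware in front of you):

```
:: List all controllers the tool can see
storcli show

:: Full configuration for controller 0
storcli /c0 show all

:: All physical drives in all enclosures on controller 0
storcli /c0 /eall /sall show

:: All virtual drives on controller 0
storcli /c0 /vall show
```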


Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Thursday 9 August 2018

PowerShell: Add-Computer Error when Specifying OUPath: The parameter is incorrect FIX

We're in the process of setting up a second 2-node Kepler-64 cluster and hit the following error when running the Add-Computer PowerShell to domain join a node:

Add-Computer : Computer 'S2D-Node03' failed to join domain 'Corp.Domain.Com from its current
workgroup 'WORKGROUP' with following error message: The parameter is incorrect.
At line:1 char:1
+ Add-Computer -Domain Corp.Domain.Com -Credential Corp\DomainAdmin -OUPath  …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     + CategoryInfo          : OperationStopped: (S2D-Node03:String) [Add-Computer], InvalidOperation
     + FullyQualifiedErrorId : FailToJoinDomainFromWorkgroup,Microsoft.PowerShell.Commands.AddComp

The PowerShell line it's complaining about is this one:

Add-Computer -Domain Corp.Domain.Com -Credential Corp\DomainAdmin -OUPath "OU=S2D-OpenNodes,OU=S2D-Clusters,DC=Corp,DC=Domain,DC-Com" -Restart

Do you see it? ;)

The correct PoSh for this step is actually:

Add-Computer -Domain Corp.Domain.Com -Credential Corp\DomainAdmin -OUPath "OU=S2D-OpenNodes,OU=S2D-Clusters,DC=Corp,DC=Domain,DC=Com" -Restart

The culprit: DC-Com instead of DC=Com in the distinguished name. When specifying the OUPath option, any typo in that setting yields the nondescript error "The parameter is incorrect."

We always prefer to drop a server or desktop right into its respective OU container as that allows our Group Policy settings to take effect, giving us full access upon reboot and more.
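Since the error message gives no hint as to which parameter is wrong, a quick pre-flight check of the OU's distinguished name can save a reboot cycle. A minimal sketch, run from a machine that can query the domain (the path and credential are the ones from this post):

```powershell
# Hypothetical pre-check: verify the OU's distinguished name resolves before joining
$OUPath = "OU=S2D-OpenNodes,OU=S2D-Clusters,DC=Corp,DC=Domain,DC=Com"

if ([ADSI]::Exists("LDAP://$OUPath")) {
    Add-Computer -Domain Corp.Domain.Com -Credential Corp\DomainAdmin -OUPath $OUPath -Restart
} else {
    Write-Warning "OU path '$OUPath' not found -- check the distinguished name for typos."
}
```

[ADSI]::Exists works without the RSAT Active Directory module, which is handy on a node that has not been joined yet.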

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Wednesday 8 August 2018

QuickBooks Desktop Freezes: Running Payroll, Downloading Online Transactions, and Closing Company File - Workaround

There seems to be an issue with the Canadian version of Intuit QuickBooks where the software freezes when doing a payroll run, downloading online transactions into QuickBooks, and when closing the Company file.

The workaround is to do the following:

  1. Close your company file.
  2. Open a sample file within QuickBooks
  3. From the No Company Open window, select Open a sample file
  4. Select a sample company file
  5. Click Ok to the warning You're opening a QuickBooks Desktop sample company file.
  6. In the sample company file, go to the Employees menu > Pay Employees > Scheduled Payroll
  7. Click Start Scheduled Payroll.
  8. Click Continue.
  9. Select one of the employees listed and click Continue.
  10. Click Ok to the warning message.
  11. Click Create Pay Cheques.
  12. Click Yes to the Past Transactions message.
  13. Click Close

We have confirmation from one of our accounting firm clients that had the problem that this "fixes" it, at least for now.

Intuit Help Article: QuickBooks Desktop freezes trying to create paycheques (CA only)

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Monday 6 August 2018

Cloud Hosting Architecture: Tenant Isolation

Cloud Vendors Compromised

Given the number of backchannels we are a part of, we get to hear horror stories where Cloud Vendors are compromised in some way or get hit by an encryption event that takes their client/customer facing systems out.

When we architect a hosting system for a hosting company looking to deploy our solutions in their hosting setup, or to set up an entirely new hosting project, there are some very important elements to our configuration that would help to prevent the above from happening.

A lot of what we have put into our design is very much a result of our experiences on the frontlines with SMB and SME clients.

One blog post that provides some insight: Protecting a Backup Repository from Malware and Ransomware.

It is absolutely critical to isolate and off-site any and all backups. We've also seen a number of news items of late where a company is completely hosed as a result of an encryption event or other failure only to find out the backups were either wiped by the perps or no good in the first place.

Blog Post: Backups Should Be Bare Metal and/or Virtually Test Restored Right?

A full bare metal or virtual restore is virtually impossible at hyper-scale. That said, we've seen backups in some hyper-scale cloud vendors' environments prove restorable, while in others they were a complete failure!

However, that does not excuse the cloud customer or their cloud consultancy from making sure that any and all cloud based services are backed up _off the cloud_ and air-gapped as a just-in-case.

Now, to the specific point of this blog post.

Tenant Isolation Technique

When we set up a hosting solution we aim to provide maximum security for the tenant. That's the aim as they are the ones that are paying the bills.

To do that, the hosting company needs to provide a series of layered protections for tenant environments.

  1. Hosting Company Network
    • Hosting company AD
    • All hosting company day-to-day operations
    • All hosting company on-premises workloads specific to company operations and business
    • Dedicated hosting company edges (SonicWALL ETC)
  2. Tenant Infrastructure Network
    • Jump Point for managing via dedicated Tenant Infrastructure AD
    • High Availability (HA) throughout the solution stack
    • Dedicated Tenant Infrastructure HA edges
      • Risk versus Reward: Could use the above edges but …
    • Clusters, servers, and services providing the tenant environment
    • Dedicated infrastructure switches and edges
    • As mentioned, backups are set up and isolated from all three!
  3. Tenant Environment
    • Shared Tenant AD is completely autonomous
    • Shared Tenant Resources such as Exchange, SQL, and more are appropriately isolated
    • Dedicated Tenant AD is completely autonomous
    • Dedicated Tenant Resources such as Exchange, SQL, and more are completely isolated to the tenant
    • Offer a built-in off-the-cloud backup solution

With the solution architected in this manner we protect the boundaries between the Hosting Company Network and the Tenant Environment. This makes it extremely difficult for a compromise/encryption event to traverse a boundary without some sort of Zero Day involved.


We've seen a few encryption events in our own cloud services tenants. None of them have traversed the dedicated tenant environments they were a part of. None. Nada. Zippo.

Containment is key. It's not "if" but "when" an encryption event happens.

Thus, architecting a hosting solution with the various environment boundaries in mind is key to surviving an encryption event and looking like a hero when the tenant's data gets restored post clean-up.

Thanks for reading!

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Monday 30 July 2018

Intel Server System R1208JP4OC Base System Device Driver ERROR Fix

We were asked to rebuild a cluster that had both Intel Server System R1208JP4OC nodes go blotto.

After installing Windows Server 2012 R2 the first step is to install the drivers. But, after installing the most recent Intel Chipset Drivers file we still saw the following:


Base System Device: Error

After a bit of finagling around, we figured out that the driver version shown in the above snip cleared things up nicely.


PowerShell found in our Kepler-47 Setup Guide # DRIVERS section

Thanks for reading! :)

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Saturday 28 July 2018

CaseWare CaseView: DRAFT Watermark Not Printing on HP LaserJet Pro M203dw

We hit a very strange situation with a newly set up HP LaserJet Pro M203dw. This was the first printer to go in to replace a HP LaserJet Pro P1606dn that was not behaving well with Windows 10 Enterprise 64-bit at an accounting firm client of ours.

For one, it took a number of installs to get the printer to show up in Printers & Scanners. The rip-and-replace process got to be a bit tedious, but we eventually got it to show there.

The catch was when the partner ticked the DRAFT option in CaseView and went to print the file the watermark was so light as to be practically invisible.

Printing to PDF and then to the printer, the DRAFT watermark would show up, but rendered weirdly due to the firm's logo.

Since this was a newly set up machine we tested a few other HP printers in the firm with the watermark showing just fine.

It became apparent that nothing we could do would get it to work.

So, we replaced the printer with a HP LaserJet Pro M402dw and it just worked. In fact, Windows 10 picked up the printer as soon as the USB port was plugged in to the laptop dock and set it as default.

Some observations:

  • HP LJ Pro M203dw came with a _tiny_ toner cartridge
  • HP LJ Pro M203dw has a separate toner and imaging drum a la Brother
    • We do _not_ like this setup at all
  • HP LJ Pro M402dw has a recent firmware update
    • This took some time but ran flawlessly
  • HP LJ Pro M402dw works great via Remote Desktop into the partner's laptop, Remote Desktop Session Host, and RemoteApp
    • RD EasyPrint just works with this one


We won't be supplying any more HP LJ Pro M203dw printers. All of our firms will be getting the M402dw, and our cloud clients will get this printer as a recommendation.

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Thursday 26 July 2018

Hypervisor, Cluster, and Server Hardware Nomenclature (A quick what's what)

100 Level Post

When helping folks out, there seems to be a bit of confusion about what means what when it comes to discussing the software or hardware.

So, here are some definitions to help clear the air.

  • NIC
    • Network Interface Card
    • The card can have one, two, four, or more ports
    • Get-NetAdapter
    • Get-NetLbfoTeam
  • Port
    • The ports on the NIC
  • pNIC
    • pNIC = NIC
    • A physical NIC in a hypervisor host or cluster node
  • vNIC
    • The virtual NIC in a Virtual Machine (VM)
    • In-Guest: Get-NetAdapter
    • In-Guest: Get-NetIPAddress
  • vSwitch
    • The Virtual Switch attached to a vNIC
    • Get-VMSwitch
  • Gb
    • Gigabit =/= Gigabyte (GB)
    • 1 billion bits
  • GB
    • Gigabyte =/= Gigabit (Gb)
    • 1 billion bytes
  • 10GbE
    • 10 Gigabit Ethernet
    • Throughput @ line speed ~ 1GB/Second (1 Gigabyte per Second)
  • 100GbE
    • 100 Gigabit Ethernet
    • Throughput @ line speed ~ 10GB/Second (10 Gigabytes per Second)
  • pCore
    • A physical Core on a CPU (Central Processing Unit)
  • vCPU
    • A virtual CPU assigned to a VM
    • Is _not_ a pCore or assigned to a specific pCore by the hypervisor!
    • Please read my Experts-Exchange article on Hyper-V, especially the Virtual CPUs and CPU Cores section mid-way down; it's free to access
    • Set-VMProcessor VMNAME -Count 2
  • NUMA
    • Non-Uniform Memory Access
    • A Memory Controller and the physical RAM (pRAM) attached to it is a NUMA node

A simple New-VM PowerShell script is here. This is our PowerShell Guide Series that has a number of PowerShell and CMD related scripts. Please check them out and check back every once in a while as more scripts are in the works.

Think something should be in the above list? Please comment or feel free to ping us via email.

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Tuesday 24 July 2018

Mellanox SwitchX-2 MLNX-OS Upgrade Stall via WebGUI and MLNX-OS to Onyx Upgrade?

Yesterday, we posted about our OS update process and the grids that indicated the proper path to the most current version.

A catch that became apparent was that there were two streams of updates available to us:

  1. image-PPC_M460EX-3.6.4112.img
  2. onyx-PPC_M460EX-3.6.4112.img
    • ETC
    • image

As can be seen in the snip above we read an extensive number of Release Notes (RN) and User Manuals (UM) trying to figure out what was what and which was which. :S

In the end, we opened a support ticket with Mellanox to figure out why our switches were stalling on the WebGUI upgrade process and why there was a dearth of documentation indicating anything about upgrade paths.

The technician mentioned that we should use CLI to clean-up any image files that may be left over. That's something we've not had to do before.

Following the process in the UM to connect via SSH using our favourite freebie tool TeraTerm we connected to both switches and found only one file to delete:

  • WebImage.tbz

Once that file was deleted we were able to initiate the update from within the WebGUI without error on both switches.
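For reference, the clean-up from the switch CLI looks something like the following sketch (command names per the MLNX-OS User Manual; prompts and output vary by switch):

```
switch > enable
switch # configure terminal
switch (config) # show images
switch (config) # image delete WebImage.tbz
```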

Since we had MLNX-OS 3.6.4112 already installed the next question for the tech was, "How do we get to the most current version of Onyx?"

The process was as follows:

  1. Up to MLNX-OS 3.6.4112
  2. Up to Onyx 3.6.6000
  3. Up to Onyx 3.6.8008

As always, check out the Release Notes (RN) to make sure that the update will not cause any problems especially with in-production NICs and their firmware!


Happy News! Welcome to Onyx

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Monday 23 July 2018

Mellanox SwitchX-2 and Spectrum OS Update Grids

We're in the process of building out a new all-flash based Kepler-64 2-node cluster that will be running the Scale-Out File Server Role. This round of testing will have several different rounds to it:

  1. Flat Flash Intel SSD DC S4500 Series
    • All Intel SSD DC S4500 Series SATA SSD x24
  2. Intel NVMe PCIe AIC Cache + Intel SSD DC S4500
    • Intel NVMe PCIe AIC x4
    • Intel SSD DC S4500 Series SATA SSD x24
  3. Intel Optane PCIe AIC + Intel SSD DC S4500
    1. Intel Optane PCIe AIC x4
    2. Intel SSD DC S4500 Series SATA SSD x24

Prior to running the above tests we need to update the operating system on our two Mellanox SwitchX-2 MSX1012B series switches as we've been quite busy with other things!

Their current OS level is 3.6.4006 so just a tad bit out of date.


The current OS level for SwitchX-2 PPC switches is 3.6.8008. And, as per the Release Notes for this OS version we need to do a bit of a Texas Two-Step to get our way up to current.



Now, here's the kicker: There is no 3.6.5000 on Mellanox's download site. The closest version to that is 3.6.5009 which provides a clarification on the above:


Okay, so that gets us to 3.6.5009 that in turn gets us to 3.6.6106:


And that finally gets us to 3.6.8008:


Update Texas Two Step

To sum things up we need the following images:

  1. 3.6.4122
  2. 3.6.5009
  3. 3.6.6106
  4. 3.6.8008

Then, it's a matter of time and a bit of patience to run through each step as the switches can take a bit of time to update.


A quick way to back up the configuration is to click on the Setup button then Configurations then click the initial link.


Copy and paste the output into a TXT file as it can be used to reconfigure the switch, if need be, via the Execute CLI commands window just below it.

As always, it pays to read that manual eh! ;)

NOTE: Acronym Finder: AIC = Add-in Card, so not U.2.

Oh, and be patient with the downloads as they are _slow_ as molasses in December as of this writing. :(


Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service