Tuesday, 15 January 2019

Custom Intel X299 Workstation: Intel VROC RAID 1 NVMe WinSat Disk Score

We just finished a custom build for a client of ours in the US.

The machine is extremely fast but quiet.


After kicking the tires a bit with Windows 10 Pro 64-bit and some post-burn-in software installs, we get the following performance out of the Intel NVMe RAID 1 pair:

C:\Temp>winsat disk
Windows System Assessment Tool
> Running: Feature Enumeration ''
> Run Time 00:00:00.00
> Running: Storage Assessment '-ran -read -n 0'
> Run Time 00:00:00.77
> Running: Storage Assessment '-seq -read -n 0'
> Run Time 00:00:02.38
> Running: Storage Assessment '-seq -write -drive C:'
> Run Time 00:00:01.64
> Running: Storage Assessment '-flush -drive C: -seq'
> Run Time 00:00:00.45
> Running: Storage Assessment '-flush -drive C: -ran'
> Run Time 00:00:00.38
> Dshow Video Encode Time                      0.00000 s
> Dshow Video Decode Time                      0.00000 s
> Media Foundation Decode Time                 0.00000 s
> Disk  Random 16.0 Read                       1020.73 MB/s          8.8
> Disk  Sequential 64.0 Read                   3203.52 MB/s          9.3
> Disk  Sequential 64.0 Write                  1456.24 MB/s          8.8
> Average Read Time with Sequential Writes     0.090 ms          8.8
> Latency: 95th Percentile                     0.146 ms          8.9
> Latency: Maximum                             0.316 ms          8.9
> Average Read Time with Random Writes         0.058 ms          8.9
> Total Run Time 00:00:05.91
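As an aside, the scores from the most recent formal WinSAT assessment can also be read back through WMI without re-running anything; a minimal PowerShell sketch:

```powershell
# Read the most recent formal WinSAT assessment scores from WMI
# (WinSATAssessmentState 1 = VALID; other values mean stale or missing data)
Get-CimInstance -ClassName Win32_WinSAT |
    Select-Object WinSATAssessmentState, CPUScore, MemoryScore, DiskScore, WinSPRLevel
```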

The machine is destined for a surveying company that's getting into high-end image and video work with drones.

All in all, we are very happy with the build and we're sure they will be too!

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
www.s2d.rocks !
Our Web Site
Our Cloud Service

Friday, 11 January 2019

Some Thoughts on the S2D Cache and the Upcoming Intel Optane DC Persistent Memory

Intel has a very thorough article that explains what happens when the workload data volume on a Storage Spaces Direct (S2D) Hyper-Converged Infrastructure (HCI) cluster starts to "spill over" to the capacity drives in an NVMe/SSD cache with HDD capacity setup.

Essentially, any workload data that needs to be shuffled over to the hard disk layer will suffer a performance hit and suffer it big time.

In a setup where we would have either NVMe PCIe Add-in Cards (AiCs) or U.2 2.5" drives for cache and SATA SSDs for capacity, the performance hit would not be as drastic, but it would still be felt depending on workload IOPS demands.

So, what do we do to make sure we don't shortchange ourselves on the cache?

We baseline our intended workloads using Performance Monitor (PerfMon).

Here is a previous post that has an outline of what we do along with links to quite a few other posts we've done on the topic: Hyper-V Virtualization 101: Hardware and Performance
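As a rough sketch of that kind of baseline capture straight from PowerShell (the counter paths are standard, but the sample interval and duration would be tuned to the workload):

```powershell
# Capture disk IOPS, throughput, and latency every 15 seconds for one hour,
# then write the samples out to a .blg file that PerfMon can open
Get-Counter -Counter '\PhysicalDisk(_Total)\Disk Transfers/sec',
                     '\PhysicalDisk(_Total)\Disk Bytes/sec',
                     '\PhysicalDisk(_Total)\Avg. Disk sec/Transfer' `
    -SampleInterval 15 -MaxSamples 240 |
    Export-Counter -Path C:\PerfLogs\Baseline.blg -FileFormat BLG
```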

We always try to have the right amount of cache in place for the workloads of today but also with the workloads of tomorrow across the solution's lifetime.

S2D Cache Tip

TIP: When looking to set up an S2D cluster we suggest running with a higher count of smaller-capacity cache drives versus just two larger drives.


For one, we get a lot more bandwidth/performance out of three or four cache devices versus two.

Secondly, in a 24-drive 2U chassis, if we start off with four cache devices and lose one we still maintain a decent cache-to-capacity ratio (1:6 with four versus 1:8 with three).

Here are some starting points based on a 2U S2D node setup we would look at putting into production.

  • Example 1 - NVMe Cache and HDD Capacity
    • 4x 400GB NVMe PCIe AiC
    • 12x xTB HDD (some 2U platforms can do 16 3.5" drives)
  • Example 2 - SATA SSD Cache and Capacity
    • 4x 960GB Read/Write Endurance SATA SSD (Intel SSD D3-4610 as of this writing)
    • 20x 960GB Light Endurance SATA SSD (Intel SSD D3-4510 as of this writing)
  • Example 3 - Intel Optane AiC Cache and SATA SSD Capacity
    • 4x 375GB Intel Optane P4800X AiC
    • 24x 960GB Light Endurance SATA SSD (Intel SSD D3-4510 as of this writing)
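Once a node set like one of the above is in service, it's worth confirming that S2D actually claimed the intended drives for cache; bound cache devices report a Usage of Journal. A quick sketch:

```powershell
# Cache devices claimed by S2D show Usage = Journal;
# capacity drives typically show Auto-Select
Get-StorageSubSystem Cluster* | Get-PhysicalDisk |
    Sort-Object Usage |
    Format-Table FriendlyName, MediaType, Usage, Size -AutoSize
```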

One thing to keep in mind when it comes to a 2U server with 12 front-facing 3.5" drives along with four or more internally mounted 3.5" drives is heat and available PCIe slots. The additional drives can also constrain which processors are able to be installed due to thermal restrictions.

Intel Optane DC Persistent Memory

We are gearing up for a lab refresh when Intel releases the "R" code Intel Server Systems R2xxxWF series platforms hopefully sometime this year.

That's the platform Microsoft set an IOPS record with, configured with S2D and Intel Optane DC persistent memory:

We have yet to see any type of compatibility matrix for the how/what/where of Optane DC setups, but one should be coming soon!

It should be noted that they will probably be frightfully expensive, with the value seen in online transaction processing setups where every microsecond counts.

TIP: Excellent NVMe PCIe AiC for lab setups that are Power Loss Protected: Intel SSD 750 Series


Intel SSD 750 Series Power Loss Protection: YES

These SSDs can be found on most auction sites with some being new and most being used. Always ask for an Intel SSD Toolbox snip of the drive's wear indicators to make sure there is enough life left in the unit for the thrashing it would get in a S2D lab! :D

Acronym Refresher

Yeah, gotta love 'em! Being dyslexic has its challenges with them too. ;)

  • IOPS: Input/Output Operations per Second
  • AiC: Add-in Card
  • PCIe: Peripheral Component Interconnect Express
  • NVMe: Non-Volatile Memory Express
  • SSD: Solid-State Drive
  • HDD: Hard Disk Drive
  • SATA: Serial ATA
  • Intel DC: Data Centre (US: Center)

Thanks for reading!

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
www.s2d.rocks !
Our Web Site
Our Cloud Service

Thursday, 10 January 2019

Server Storage: Never Use Solid-State Drives without Power Loss Protection (PLP)

Here's an article from a little while back with a very good explanation of why one should not use consumer grade SSDs anywhere near a server:

While the article points specifically to Storage Spaces Direct (S2D) it is also applicable to any server setup.

The impetus behind this post is pretty straightforward, via a forum we participate in:

  • IT Tech: I had a power loss on my S2D cluster and now one of my virtual disks is offline
  • IT Tech: That CSV hosted my lab VMs
  • Helper 1: Okay, run the following recovery steps that help ReFS get things back together
  • Us: What is the storage setup in the cluster nodes?
  • IT Tech: A mix of NVMe, SSD, and HDD
  • Us: Any consumer grade storage?
  • IT Tech: Yeah, the SSDs where the offline Cluster Storage Volume (CSV) is
  • Us: Mentions above article
  • IT Tech: That's not my problem
  • Helper 1: What were the results of the above?
  • IT Tech: It did not work :(
  • IT Tech: It's ReFS's fault! It's not ready for production!

The reality of the situation was that there was live data sitting in the volatile cache DRAM on those consumer grade SSDs that got lost when the power went out. :(

We're sure that most of us know what happens when even one bit gets flipped. Error-correcting (ECC) memory is mandatory for servers for this very reason.

To lose an entire cache worth across multiple drives is pretty much certain death for whatever sat on top of them.

Time to break out the backups and restore.

And, replace those consumer grade SSDs with Enterprise Class SSDs that have PLP!

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
www.s2d.rocks !
Our Web Site
Our Cloud Service

Monday, 7 January 2019

Security: Direct Internet Connections for KVM over IP devices such as iLO Advanced, iDRAC Enterprise, Intel RMM, and others = BAD

While discussing firmware and firmware updates, this situation, experienced vicariously a number of years ago, came to light:

ISP: Excuse me sir, but we have a huge volume of SPAM coming out of [WAN RMM IP]
Admin: Huh?
ISP: We are seeing huge volumes of SMTP traffic outbound from [WAN RMM IP]
Admin: Oh?
* Checks non-existent documentation
Admin: Um, is that IP assigned to us?
ISP: Yes sir, along with [WAN SSL IP for internal services]
Admin: Hmmm …
Admin: Oh, wait, I think I know … [unplugs iLO/iDRAC/RMM from switch connected to ISP modem]
Admin: Is it better now?
ISP: Oh yeah, what did you do?
Admin: Oh, I fixed it.

One should never plug an RMM/iLO/iDRAC type device directly into the Internet, right?

We probably blogged about this in the past, but it definitely bears repeating as we still encounter situations where the devices are plugged directly into the Internet!

Happy New Year everyone! :)

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
www.s2d.rocks !
Our Web Site
Our Cloud Service

Saturday, 22 December 2018

VS Code for PowerShell: IntelliSense Code Highlighting Not Working?

This is what VS Code looks like on _all_ machines used in the shop and elsewhere but one:


Note the great colour coding going on. That's IntelliSense working its magic to render the code in colour!

The errant machine looks like this:


Note the distinct lack of colour coding going on. :(

We removed and re-installed VS Code, the Extensions, and anything else we could with no change.

After an ask on a PowerShell list, a suggestion was made to check the VS Code theme.

Sure enough, on the problematic one:


VS CODE: Dark (Visual Studio) Colour Theme

While on a functioning setup:


VS CODE: Dark+ (default dark)

For whatever reason, the former colour theme seems to break things.
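The working theme can also be pinned by hand in the user settings file (settings.json), assuming the default dark theme is the one wanted:

```json
{
    // Workbench colour theme; "Dark+ (default dark)" is the one
    // that renders the PowerShell highlighting correctly for us
    "workbench.colorTheme": "Dark+ (default dark)"
}
```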

Now that we have that figured out, we can move on to coding! :D

Merry Christmas and a Happy New Year's to everyone!

Hat Tip: Shawn Melton MVP

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
www.s2d.rocks !
Our Web Site
Our Cloud Service

Wednesday, 12 December 2018

Intel Technology Provider for 2019

We just received word of our renewal for the Intel Technology Provider program:


We've been system builders since the company began in 2003, with me building systems for more than a decade before that!

One of the comments that gets made on a somewhat frequent basis is something along the lines of being a "Dinosaur". ;)

Or, this question gets asked quite a lot, "Why?"

There are many reasons for the "Why". Some that come off the top are:

  • We design solutions that meet very specific performance needs such as 150K IOPS, 500K IOPS, 1M IOPS and more
  • Our solutions get tested and thrashed before they ever get sold
    • We have a parts bin with at least five figures' worth of broken vendor promises
  • We have a solid understanding of component and firmware interactions
  • Our systems come with guaranteed longevity and performance
    • How many folks can say that when "building" a solution in a Vendor's "Solution Tool"?
  • We avoid the finger pointing that can happen when things don't pass muster

The following is one of our lab builds: a two-node Storage Spaces Direct (S2D) cluster utilizing 24 Intel SSD DC-4600 or D3-4610 SATA series SSDs, flat, meaning no cache layer. The upper graphs are built in Grafana, the bottom left is Performance Monitor watching the RoCE (RDMA over Converged Ethernet via Mellanox) traffic, and the bottom right is the VMFleet WatchCluster PowerShell output.


We just augmented the two node setup with 48 more Intel SSD D3-4610 SATA SSDs for the other two nodes and are waiting on a set of Intel SSD 750 series NVMe PCIe AiCs (Add-in-Card) to bring our 750 count up to 3 per node for NVMe cache.

Why the Intel SSD 750 Series? They have Power Loss Protection built in. Storage Spaces Direct will not allow any cache device to hold data in its local cache if that cache is volatile. What becomes readily apparent is that writing straight through to NAND is a very _slow_ process relative to having that cache power protected!

We're looking to hit 1M IOPS with the flat SSD setup and well over that when the NVMe cache gets introduced. There's a possibility that we'll be seeing some Intel Optane P4800X PCIe AiCs in the somewhat near future as well. We're geared up for a 2M+ run there. :D

Here's another test series we were running to saturate the node's CPUs and storage to see what kind of numbers we would get at the guest level:


Again, the graphs in the above shot are Grafana based.

The snip below is our little two node S2D cluster (E3-1270v6, 64GB ECC, Mellanox 10GbE RoCE, 2x Intel DC-4600 SATA SSD Cache, 6x 6TB HGST SATA) pushing 250K IOPS:


We're quite proud of our various accomplishments over the years with our high availability solutions running across North America and elsewhere in the world.

We've not once had a callback asking us to come pick up our gear and refund the payment because it did not meet the customer's needs as promised.

Contrary to the "All in the Cloud" crowd, there is indeed a niche for those of us that provide highly available solution sets to on-premises clients. Those solutions allow them to have the uptime they need without the extra costs of running all in the cloud, or hybrid with peak resources in the cloud. Plus, they know where their data is.

Thanks for reading!

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Tuesday, 11 December 2018

OS Guide: Slipstream Updates Using DISM and OSCDImg *Updated

We have found that we need to have the May Servicing Stack Update (SSU) KB4132216 _and_ the latest SSU (currently KB4465659) in the Updates_WinServ folder we drop the Cumulative Update into for the Windows Server 2016 slipstream run.


Note that the current version of the script points to Server 2019. Please use that as a base to tweak and create a set of folders for Windows Server 2016 and Windows 10 updates.
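For reference, the core of a slipstream pass with DISM looks something like the following (the paths, update file names, and image index are illustrative placeholders; the SSUs go in before the CU):

```powershell
# Mount the image, apply the SSUs then the CU, and commit the changes
# (paths and update file names below are examples only)
Dism /Mount-Image /ImageFile:C:\Slipstream\install.wim /Index:1 /MountDir:C:\Slipstream\Mount
Dism /Image:C:\Slipstream\Mount /Add-Package /PackagePath:C:\Slipstream\Updates_WinServ\SSU-KB4132216.msu
Dism /Image:C:\Slipstream\Mount /Add-Package /PackagePath:C:\Slipstream\Updates_WinServ\SSU-KB4465659.msu
Dism /Image:C:\Slipstream\Mount /Add-Package /PackagePath:C:\Slipstream\Updates_WinServ\CumulativeUpdate.msu
Dism /Unmount-Image /MountDir:C:\Slipstream\Mount /Commit
```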

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
www.s2d.rocks !
Our Web Site
Our Cloud Service

Thursday, 6 December 2018

Error Fix: Trust Relationship is Broken

Here's a quick post on fixing a broken trust relationship when the local administrator username and password are known.

On Windows 7:

  1. Windows Explorer
  2. Right click My Computer/This PC --> Properties
  3. Change settings for Computer Name
  4. Change button
  5. Domain: currently DOMAIN.LOCAL
    1. Change to DOMAIN (delete .LOCAL)
    2. Enter domain admin credentials when prompted
  6. Reboot

That process will fix things for Windows 7. For machines with an up-to-date PowerShell, Windows 7 included, and for everything newer:

  1. Log on with local admin user
  2. Reset-ComputerMachinePassword -Credential DOMAIN\DomainAdmin
  3. Log off
  4. Log on with domain user account

That's it.
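A related option on machines with a current PowerShell is Test-ComputerSecureChannel, which checks and repairs the secure channel in one step; a sketch, run while logged on with the local admin account:

```powershell
# Returns $true if the machine's secure channel to the domain is healthy
Test-ComputerSecureChannel

# Repair the secure channel using domain admin credentials, then log off/on
Test-ComputerSecureChannel -Repair -Credential DOMAIN\DomainAdmin
```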

If you know any other methods, especially for situations where the local admin username and password are unknown or all local admin accounts are disabled, feel free to comment or ping!

Thanks for reading. :)

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Monday, 3 December 2018

A word of caution: Verify! Everything!

We get to work with a whole host of clients and their industries but as a contractor we also get to work with a wide variety of IT Pros and IT Companies.

Many times we get involved in a situation that's an outright pickle.

Something has gone sideways and the caller is looking for some guidance, some direction, and a bit of handholding because they are at a loss.

Vendor Blame Game

Some of those times the caller is in the middle of a tongue wagging session between a set of vendors blaming the other for the pickle.

We were in that situation back in the day when a Symantec BackupExec (BUE) solution failed to restore. The client site had two HP DAT tape libraries with BUE firing "All Good" reports.

We found out those reports were bad when the client's main file server went blotto.

We were in between the storage vendor and their products, Symantec, and HP. It was not a pretty scene at all. In the end, it was determined _by us_ that BUE was the source of the problem because it did not do any kind of verify on the backups being written to tape despite the setting being there to do so.

We were fortunate that we had multiple redundant systems in place and managed to get most of the data back except one partner's weeks' worth of work. We had to build out a new domain though.

So, why the blog post?

Because, it's still happening today.

Verify, Verify, and Verify Again

We _highly suggest_ verifying that all backup products and backup services are doing what they say they are doing.

If the service provider charges for a test failover then do it anyway and charge the fee back to the client, because once the process has run, successful or not, things are in a better place either way.

Never, _ever_ walk into a disaster recovery situation without having tested the products that are supposed to save the client's business. Period.

Yeah, there are times where something may happen before that planned failover. That's not what we're talking about here.

What we are after is testing to make sure that the vendor's claims are indeed true and that the solution set is indeed working as planned.

The last place we need to find out that our client's backups are _not_ working is when their servers, virtual machines, or cloud services vendors have gone blotto.

Out of the Cloud Backup

We always make sure we have a way to back up any cloud vendor's services to our client's site. It just makes sense.

Our trust is a very fickle thing.

When it comes to our client's data we don't give our full trust to any vendor or solution set.

We _always_ test the backup and recovery processes so that we're not blindsided by things not going as planned or any "hidden fees" for accessing the client's data in a disaster recovery situation.

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
www.s2d.rocks !
Our Web Site
Our Cloud Service

Friday, 30 November 2018

Some Thoughts on the Starwood/Marriott Reservations Database Breach

Note: This post will _not_ be a happy one.

First: The announcement page: Starwood Guest Reservation Database Security Incident Marriott International

That page is garbage, rubbish, and so much more. It exemplifies today's epidemic of spin instead of truth and responsibility for an error that harms others.



"Marriott values our guests and understands the importance of protecting personal information."

That is a complete crock of male bovine excrement.

Especially when we look to the following:


"After receiving the internal security alert, we immediately engaged leading security experts to help us determine what occurred."

Okay, so just when did that security alert come in?


"On September 8, 2018, Marriott received an alert from an internal security tool regarding an attempt to access the Starwood guest reservation database."

Cool, so things look like they got caught really quickly, right? That certainly seems to be the way this article is written.



"Marriott learned during the investigation that there had been unauthorized access to the Starwood network since 2014."

Let's rephrase all of the above shall we:

Marriott: We let unauthorized access to our reservation database happen for FOUR YEARS.

Yeah, "We at Marriott/Starwood really care about your data/PII." Really. All said with a smile.


In our case, the CC used for our various stays has expired very recently. So, we should be protected that way. And, to further protect things we use KeePass with unique passwords for any and all online resources with unique e-mail addresses set up for each of them (we're doing this more and more).

Suffice it to say, if Marriott really cared about risk to our PII (Personally Identifiable Information), the reservations system would have been segmented with designated access and no Internet access. We've been applying our knowledge of network setup to segment our clients' networks for years, especially with PCI scans being somewhat generic and different depending on which org is running the scans.

Oh, and note that credit card information was stored in there too. How in the world did that pass muster with PCI scans?


LMHYWT (Let me help you with that) " … two components needed to decrypt payment card numbers and Marriott not able to rule out both were taken."

Tis a sad day indeed when spin and lawyer speak win out over a true "Mea Culpa" we really *insert expletive here* up.

This Marriott incident is a gross breach of trust and it is time companies be held liable for such.

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
www.s2d.rocks !
Our Web Site
Our Cloud Service

Tuesday, 13 November 2018

New PowerShell Guides and DISM Slipstream Process Updated

We've added two new PowerShell Guides:

We've also updated the page with some tweaks to using DISM to update images in the Install.WIM in Windows Server. The process can also be used to slipstream both Servicing Stack Updates (SSUs) and Cumulative Updates (CUs) for both Windows Server and Windows Desktop.

Thanks for reading! :)

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
www.s2d.rocks !
Our Web Site
Our Cloud Service

Tuesday, 6 November 2018

Apple MacBook Pro: Upgrading OS X Snow Leopard 10.6 to El Capitan 10.11 with 2 Factor Authentication On

Wow, what an adventure.

We have a MacBook Pro 13" early 2009 laptop here in the shop that has been sitting idle for a while.

We installed a new SSD in the unit and bumped the RAM up to 8GB.

Then, on to installing a fresh copy of Snow Leopard 10.6 via the installer DVD.

We needed to use the Disk Utility in the installer to set up a partition prior to being allowed to install the OS.

Once in, we went through the updates process.

Then, on to upgrading OS X to El Capitan 10.11.

What a pain. Because we are on what is essentially an ancient OS version all of the apps were uncooperative due to the 2 Factor Authentication (2FA) that is enabled on our Apple ID.

Safari would not work with Apple's sites for authentication either due to SSL compatibility issues.

Searching meant using buckshot terms to try and figure out exactly what needed to be done to allow the upgrade to proceed in the App Store.

The long and short of it, found here, is to do the following:

  1. Open Safari and navigate to this Apple Support page: How to upgrade to OS X El Capitan
  2. Scroll down to Step 4 and click on the Get El Capitan link to bring up the App Store
  3. Click the Get button in the store
  4. On a trusted device such as an iPhone
    1. Tap into Settings --> Your Name --> Password & Security
    2. Tap on the Get Verification Code at the bottom of that page
  5. On the MacBook Pro enter the Apple ID and the Password
    1. YourAppleID@YourDomain.Com
    2. YourAppleIDPassword123456
      • 123456 = Verification Code

The verification code gets tagged on to the password at the end as above.

It's a monster weighing in at 6.21GB, so a good fast connection should be used to download this one!

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
www.s2d.rocks !
Our Web Site
Our Cloud Service

Friday, 2 November 2018

Veeam Error: Unable to allocate processing resources. Error: On-host proxy [ServerName] requires upgrade before it can be used.

We rebooted one of our Hyper-V hosts that has a number of VMs hosted on it.

The Veeam setup was just completed with the VMs set up right after.

The host was having some network difficulties as it turned out that one of the two ports in the host LBFO Management team was plugged into the VM's switch instead of our setup network.

Once corrected and a reboot later and Veeam was throwing an error due to "Server Not Found".

We had set up the backup based on the IP address the server had. Lo and behold, that address had changed after the reboot.

So, we set up a new Managed Server based on the new IP and updated the Backup Job.

We fired the backup but it failed:

11/2/2018 4:22:55 PM :: Unable to allocate processing resources. Error: On-host proxy [ServerName] requires upgrade before it can be used. 

After some searching on Veeam's forums this post came up: On-host proxy requires upgrade

After reading through the forum thread it was the very last post that got things going for us:

  1. Restart Veeam Backup Service
  2. Restart Veeam Broker Service
  3. Fire the backup
  4. Success!
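The service restarts can also be done from an elevated PowerShell prompt; a sketch, noting the service names are from our install and may vary by Veeam version:

```powershell
# Restart the Veeam Backup and Broker services, then confirm they are running
Restart-Service -Name VeeamBackupSvc, VeeamBrokerSvc
Get-Service -Name Veeam* | Format-Table Name, Status -AutoSize
```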

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service

Monday, 15 October 2018

Server 2019 and ADDS: FRS Not Supported - Migrate to DFSR

We went to DCPromo a newly stood up Windows Server 2019 VM into an existing domain and it would not let us do so.

Suffice it to say, we needed to migrate File Replication Service (FRS) to Distributed File System Replication (DFSR).

The process is actually quite simple so long as Active Directory and replication are healthy.

Ned Pyle has an article that has three methods in it:

We followed method one as a just-in-case during business hours. No hiccups were experienced and once done:


A simple way to keep an eye on things is to open File Explorer and plug the following in the Address Bar: \\Domain.Com

So long as DNS is healthy and SYSVOL and NETLOGON are there the process is humming along as expected.
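For reference, the migration itself is driven by the dfsrmig utility run from an elevated prompt on the PDCe, stepping the global state up one stage at a time and verifying all DCs have reached each state before moving on; a sketch:

```powershell
dfsrmig /GetGlobalState      # Confirm the current migration stage
dfsrmig /SetGlobalState 1    # Move to Prepared
dfsrmig /GetMigrationState   # Repeat until all DCs report the new state
dfsrmig /SetGlobalState 2    # Move to Redirected
dfsrmig /SetGlobalState 3    # Move to Eliminated (FRS removed; no rollback)
```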

NOTE: Make sure a backup of the PDCe, including a System State, is taken as well!

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
www.s2d.rocks !
Our Web Site
Our Cloud Service

Friday, 14 September 2018

WARNING: Edge (Sync) Ate All Favourites - Favourites Gone!

We've been doing _a lot_ of work setting up a Grafana/InfluxDB/Telegraf monitoring and history system lately.

The following is our custom Kepler-64 Storage Spaces Direct 2-node cluster being tracked in a Grafana Dashboard that we've customized:


Grafana graphs, PerfMon RoCE RDMA Monitoring, and VMFleet Watch-AllCluster.PS1 (our mod)

Needless to say, a substantial number of links to various sites about running the above setup on both Windows and Ubuntu were lost after something seemingly went wrong. :(

Edge (Sync ?) Hiccups then Pukes

As we were quite busy throughout the day and Edge was being very uncooperative we started using Firefox for most of the browsing throughout the day.


Edge: Favourites Bar Shortcut Count Drastically Trimmed

The first response once things started misbehaving should have been to use the Edge Favourites backup feature and get them out!

When we opened Edge later in the day this is what we were greeted with:


Edge: Favourites Bar Shortcuts Gone!

As a small caveat, one of the behaviours with Edge has been to either go unresponsive, requiring a Task Manager kill, or, when another Edge browser session was opened, to not allow Paste or right-click in the Address Bar and to not show the Favourites, History, or other buttons.

So, Task Manager Kill? Nope. They were not there.

Log off and back on again? Nope.

Reboot the machine? Nope.

Both the Favourites Bar content and _all_ of our Favourites were gone.

Back Them Up!

On that day, when things started to misbehave, the next step _should have been_ to grab one of the tablets they were syncing to and run a backup process without allowing the tablet to connect to WiFi and sync! Ugh, hindsight is 20/20. :P

And, much like the advice we always start out with when training users on Office products, the very first step they need to take is to _save_ their work before doing anything, and once the work is done, to _save_ it again. That advice is something we will be taking to heart from now on with regards to Edge and Favourites.

After a big day of Favourites building a backup should be taken.

So, Where Are Those Favourites?

Why, oh why do software vendors move my cheese?

In this case, the original location for those Favourites when using IE back in the day was OneDrive. If there was a hiccup somewhere between the number of different clients, OneDrive Sync would append the name of the machine to the conflicting shortcuts and we'd be left with doing a quick search-and-destroy post mortem. No biggie. Not so with Edge.

Thanks to Michael B. Smith via a list we were pointed to:

The location that Edge stores those favourites is here:

  • Stored under %LOCALAPPDATA%\Packages\Microsoft.MicrosoftEdge_8wekyb3d8bbwe\AC\MicrosoftEdge\User\Default\DataStore\Data\nouser1\120712-0049\DBStore
  • Database Name: Spartan.edb
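A quick way to confirm the database is actually present before pointing a viewer at it (the 120712-0049 folder name appears to vary per machine, hence the recursive search):

```powershell
# Locate Edge's favourites database under the package folder
Get-ChildItem "$env:LOCALAPPDATA\Packages\Microsoft.MicrosoftEdge_8wekyb3d8bbwe" `
    -Recurse -Filter Spartan.edb -ErrorAction SilentlyContinue |
    Select-Object FullName, Length, LastWriteTime
```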

Change the somewhat hidden dropdown to Favourites and:


Some Edge Favourites Post Backup Import

What Does All This Mean? Edge Bug

It means that there's a serious bug somewhere in the Edge setup with Data Loss being one possible result.

It means that, for now, one needs to run the Edge Export process to back those Favourites up after a serious day of adding to that list!

  1. In Edge click the Favourites/History/Downloads button
  2. Click the Favourites Star if they are not shown as above
  3. Click the Gear
  4. Click the Import from another browser button
  5. Click the Export to file button
  6. Choose a location and give the file a name
    • We drop ours in OneDrive to keep it backed up
    • Naming convention: DATE-TIME-Location.HTML

The above process will at least help mitigate any choke in the Edge Favourites setup that may happen.

Warning Note

IMPORTANT NOTE: Edge does not have any kind of parsing structure for the import process.

We cannot pick and choose what to import, and, if there are still Favourites _in the database_, they may disappear or be deleted when importing!

If the bulk of the Favourites are still there then an alternative to a wholesale import would be to open the backup .HTML page and click on the needed links and Favourite them again. *sigh*


What does all of this mean?

Considering that we've lost data, there's a very serious problem here. In our case, we're talking about a very long and full day's worth of bookmarks/favourites gone. :(

For now, it means back those favourites up _a lot_ when doing critical work that requires knowledge keeping!

Oh, and we need to set aside some time to delve into the NirSoft utility linked to above to see if there are features in there to help mitigate this situation.

Philip Elder
Microsoft High Availability MVP
Co-Author: SBS 2008 Blueprint Book
www.s2d.rocks !
Our Web Site
Our Cloud Service