Thursday, 23 July 2015

User Profile Tip: Windows Explorer Favorites

Some of us are big on redirecting most folders to the server.

Some of us have learned the hard way to leave AppData and its contents alone on the local desktop. ;)

By default we redirect Desktop, My Documents/Documents, and Favorites.

We recently did a profile refresh for one of our clients using our Event ID 1511/1515 profile-loss-to-TEMP method. Their profile had become hopelessly corrupted.

Now, to date we've not encountered too many folks who avidly use the Windows Explorer Favorites (pinning):

[Screenshot: Windows Explorer Favorites pins]

The above is a snip of my own Windows Explorer pins.

Okay, so we don’t redirect that folder and we’ve not really had to migrate those links before.

That begged the question: Where the chicken are they?!?

Our search-fu, both on the local machine via AppData (where we thought they should be) and on the Internet, turned up nothing but one clue: %UserProfile%\Links.

[Screenshot: the %UserProfile%\Links folder contents]

Bingo!

We copied the files from the UserProfile-OLD\Links folder into the new UserProfile\Links folder and the user was happy to have them back.
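
For future profile refreshes, a minimal PowerShell sketch of that copy step (the profile folder names are placeholders; substitute the actual old and new profile paths):

    # Copy the Explorer pins from the renamed old profile to the new one
    # 'jsmith-OLD' and 'jsmith' are example profile folder names
    Copy-Item -Path 'C:\Users\jsmith-OLD\Links\*' `
              -Destination 'C:\Users\jsmith\Links\' -Force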

We've since added this step to our checklist and will pay a bit more attention to our clients' environments to see if we need to redirect %UserProfile%\Links to save some time later on.
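
As an aside, the Links location can be pointed elsewhere per user via the User Shell Folders registry value; a hedged sketch, assuming a typical redirection share (the server path is an example):

    # {BFB9D5E0-C6A9-404C-B2B2-AE6DB6AF4968} is the Links known folder
    # '\\SERVER\Redirect$' is a placeholder for the actual redirection share
    Set-ItemProperty -Path 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders' `
        -Name '{BFB9D5E0-C6A9-404C-B2B2-AE6DB6AF4968}' `
        -Value '\\SERVER\Redirect$\%USERNAME%\Links' -Type ExpandString

Explorer would need a restart (or the user a fresh logon) to pick up the change, and any existing pins would still need to be copied over.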

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book

Thursday, 16 July 2015

A Brief on Storage Spaces

This is a repost of a comment made on an Experts Exchange question.

***

Storage Spaces (SS) is pretty unique. It's driven by Microsoft's need to run data centres full of storage but not foot the bill for huge SAN arrays.

There is a Windows Server Catalogue page of approved hardware for an SS solution.

Our preference is for Quanta and DataON for non-Tier 1 solutions. We did a lot of testing, and deployed to both on-premises and data centre clients, before we were confident in our solution set. That gets expensive _fast_ as we do not deploy anything we've not tested first.

As can be seen, Dell has their products on the SS list and Microsoft chose them for their Cloud Platform System.

A lot of planning and foresight has gone into SS, especially with the upcoming Windows Server 2016 feature set (should it all make it into RTM). There are a lot of features aimed at big storage vendor territory that will allow us to deploy SS solution sets at a fraction of the $/GB cost of the big box vendors.

As an FYI, we have a solution set for IaaS vendors that has been in production for close to two years now and works flawlessly. The backend is 10GbE to start, with 40Gb SMB Direct (RDMA) and 56Gb SMB Direct (RDMA) over InfiniBand as options.

The solution can scale from 60 drives (4TB, 6TB, or 8TB NearLine SAS) in one 60-bay JBOD up to three, four, or more JBODs. With three or more we get enclosure resilience: a full enclosure of drives can fail and SS keeps moving along until that enclosure is brought back up or replaced.
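
As a rough illustration of the enclosure resilience piece, a minimal PowerShell sketch of standing up an enclosure-aware pool (the pool and disk names are examples; this assumes certified JBODs and SAS HBAs):

    # Gather the poolable disks presented across the attached JBODs
    $disks = Get-PhysicalDisk -CanPool $true
    $ss = Get-StorageSubSystem -FriendlyName '*Spaces*'

    # Create a pool that places data copies across enclosures by default
    New-StoragePool -FriendlyName 'Pool01' `
        -StorageSubSystemFriendlyName $ss.FriendlyName `
        -PhysicalDisks $disks -EnclosureAwareDefault $true

    # Carve a fixed, mirrored space out of the pool
    New-VirtualDisk -StoragePoolFriendlyName 'Pool01' -FriendlyName 'VD01' `
        -ResiliencySettingName Mirror -UseMaximumSize -ProvisioningType Fixed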

Storage Spaces' cost per IOPS, cost per GB/second (throughput), and cost per GB simply can't be matched by the big box vendors.

Check out Storage Spaces Direct (S2D). Our v2 data centre product will be based on S2D with an all-flash option providing _millions_ of IOPS to tenants. The storage fabric would run over RoCE, while storage-to-compute traffic would be RDMA over InfiniBand.

***

Further Reading:

There is a very strong economic motivation to get Storage Spaces right for Microsoft and for us. We’ve staked our company direction on Microsoft’s direction with storage while Microsoft’s driver is reducing the overall cost of storage in their Azure data centres.

We believe our Cloud Services Provider data centre backend products are some of the best available and Storage Spaces is a critical piece of the puzzle.

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book

Tuesday, 14 July 2015

Third Tier: Be the Cloud – Client Facing Environment Demo & Chat

Tomorrow at 1700MST is my regular monthly Third Tier chat.

Since we've spent a fair amount of time on the backend setup for our Be the Cloud product (IaaS to your clients), now it's time to have a look at what the BtC clients would be working with.

The client environment is based on our SBS (Small Business Solution), which provides a Small Business Server-like IT experience but greatly improved.

  • Remote Web Portal access
  • Remote desktop access
  • Exchange services (OWA, EAS, OA, Public Folders, etc.)
  • RemoteApp based application access
  • SharePoint CMS for check-out/check-in and versioning of documentation

Our goal in designing our SBS was to give our clients as close to a Small Business Server experience as possible.

I believe we've done that in spades, and after our chat tomorrow I think you'll agree. :)

When: 1700MST

Where: Third Tier Join.me

A recording of the chat, barring any technical difficulties, will be posted on the Third Tier blog at a later date.

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book

Friday, 10 July 2015

WebDAV Download Error: 0x800700DF File Size Exceeds Limit Allowed

We set up a WebDAV-based file repository for some of our Cloud deployments.

When we did so we hit the following:

An unexpected error is keeping you from copying the file. If you continue to receive this error, you can use the error code to search for help with this problem.

Error 0x800700DF: The file size exceeds the limit allowed and cannot be saved.

A search indeed brings up a plethora of results, all with the same fix:

  1. Open Regedit
  2. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters
  3. Open FileSizeLimitInBytes
    1. Set the Decimal value: 4294967295 (0xFFFFFFFF, the roughly 4GB maximum)
    2. Click OK
  4. Restart the WebClient service
  5. Refresh the Windows Explorer WebDAV window
  6. Authenticate again
  7. Voila!
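
The same fix can be scripted; a minimal PowerShell sketch of the steps above (run elevated):

    # Raise the WebDAV client file size limit to its ~4GB maximum
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\WebClient\Parameters' `
        -Name FileSizeLimitInBytes -Value 4294967295 -Type DWord

    # Restart the WebClient service so the new limit takes effect
    Restart-Service -Name WebClient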

We’re now in business.

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book

Monday, 6 July 2015

Happy Monday Because …

Kittens! :)

[Photo: the kittens]

Momma's Momma was a Blue Point Siamese with an unknown Papa. In this case the kittens' Papa is our male black and white cat, "Two Face".

[Photo: more of the kittens]

The black and grey tabby and the calico are females, while the fully black and white one, like the Papa, and the cream coloured one are males.

There are many things in life that can bring a smile to our face. This is most certainly one of them.

Happy Monday everyone and thank you for reading! :D

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book

Tuesday, 30 June 2015

Hyper-V Virtualization 101: Hardware Considerations

When we deploy a Hyper-V virtualization solution we look at:

  1. VM IOPS Requirements
  2. VM vRAM Requirements
  3. VM vCPU Requirements

In that order.

Disk Subsystem

The disk subsystem tends to be the first bottleneck.

For a solution with dual E5-2600 series CPUs and 128GB of RAM running 16 VMs or thereabouts, we'd be at 16 to 24 10K SAS drives at minimum, behind a hardware RAID controller with 1GB of non-volatile or battery-backed cache.

RAID 6 is our go-to for array configuration.

Depending on workloads, one can look at Intel's DC S3500 series SSDs, or the higher endurance DC S3700 series models, to get more IOPS out of the disk subsystem.
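
Whatever the configuration, measure it before going live. A sketch of the kind of baseline test we might run with Microsoft's free DiskSpd tool (the parameters and test file are illustrative, not a recommendation):

    # 60-second random I/O test: 64KB blocks, 4 threads, 8 outstanding
    # I/Os per thread, 30% writes, against a 20GB test file
    .\diskspd.exe -b64K -d60 -t4 -o8 -r -w30 -c20G D:\test.dat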

RAM

Keep in mind that the physical RAM is split between the two processors, so one needs to be mindful of how the vRAM is divvied up between the VMs.

Too much vRAM on one or two VMs can cause memory to be juggled between the two physical CPUs' NUMA nodes, which costs performance.

Note that each VM's vRAM gets a matching file written to disk (the per-VM .BIN file). So, if we are allocating 125GB of vRAM to the VMs, there will be 125GB of files on disk.
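
A quick way to sanity-check the NUMA fit from the host; a sketch using the Hyper-V PowerShell module (the VM name is a placeholder):

    # Show each physical NUMA node's total and available memory
    Get-VMHostNumaNode | Format-Table NodeId, MemoryTotal, MemoryAvailable

    # Size a VM to fit within a single NUMA node where practical
    Set-VMMemory -VMName 'SQL01' -StartupBytes 16GB -DynamicMemoryEnabled $false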

CPU

And finally, each vCPU within a VM represents a thread to the physical CPU. For VMs with multiple vCPUs, every thread (vCPU) for that VM needs to be processed by the CPU's pipeline in parallel. So, the more vCPUs we assign to a VM, the more the CPU's logic needs to juggle the threads to have them processed.

The end result? More vCPUs is not always better.
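
A minimal sketch of right-sizing from the host (the VM name is a placeholder; start low and measure before adding more):

    # Start a new workload at two vCPUs rather than maxing it out
    Set-VMProcessor -VMName 'APP01' -Count 2

    # Review vCPU and startup memory assignments across all VMs
    Get-VM | Select-Object Name, ProcessorCount, MemoryStartup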

I have an Experts Exchange article on Some Hyper-V Hardware and Software Best Practices that should be of some assistance too. In it I speak about the need to tweak the BIOS settings on the server, hardware configurations that eliminate single points of failure (SPFs), and more.

Conclusion

In the end, it is up to us to make sure we test our configurations before we deploy them. Having a high five-figure SAN installed to solve certain performance "issues", only to find out _after_ the fact that they still exist, can be a very bad place to be in.

We test all aspects of a standalone or clustered system to discover its strengths and weaknesses. While this can be a very expensive policy, to date we've not had one performance issue with our deployments.

Our testing also lets us present IOPS and throughput reports, based on sixteen different allocation sizes (hardware and software), to our client _and_ to any vendor complaining about our system. ;)

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book

Wednesday, 17 June 2015

What's up, what's been happening, and what will be happening.

Wow, it's been a while hasn't it? :)

We've been _very_ busy with our business as well as a Cloud services start-up and Third Tier is keeping me hopping too.

I have a regular monthly Webinar via Third Tier where we've been spending time on the Third Tier product called "Be the Cloud". It is a solution set developed to provide a highly available backend for client facing services based on our SBS (Small Business Solution).

We, that is my family, took a much-needed break in May for a couple of weeks of downtime, as we'd not had any pause for a good 18 months prior. We were ready for that.

So, why the blogging pause?

There are a number of reasons.

One is that I've been so busy researching and working on new things that there hasn't been a lot of time left over for writing them all out. Ongoing client needs are obviously a part of that too.

Another had to do with waiting until we were okay to publish information on the upcoming Windows Server release. We Cluster MVPs, and others, were privileged to be very deeply involved with the early stages of the new product. But, we were required to remain mum. So, instead of risking anything I decided to hold off on publishing anything Server vNext related.

Plus, we really didn't have a lot of new content to post since we've about covered the gamut in Windows Server 2012 RTM/R2 and Windows Desktop. Things have been stable on that front other than a few patch related bumps in the road. So, nothing new there meant nothing new to write about. ;)

And finally, the old grey matter just needed a break. After all, I've been writing on this blog since the beginning of 2007! :)

So, what does this mean going forward?

It means that we will begin publishing content on a regular basis again once we've begun serious work with Windows Server vNext.

We have a whole host of lab hardware on the way that has a lot to do with what's happening in the new version of Windows Server that ties into our v2 for Be the Cloud and our own Cloud services backend.

We're also establishing some new key vendor relationships that will broaden our solution matrix with some really neat new features. As always, we build our solution sets and test them rigorously before considering a sale to a client.

And finally, we're reworking our PowerShell library into a nice and tidy OneNote notebook set to help us keep consistent across the board. This is quite time consuming as it becomes readily apparent that many steps are in the grey matter but not in Notepad or OneNote.

Things we're really excited about:
  • Storage Spaces Direct (S2D)
  • Storage Replication
  • Getting the Start Menu back for RDSH Deployments
    • Our first deployment on TP2 is going to happen soon so hopefully we do indeed have control over that feature again!
  • Deploying our first TP2 RDS Farm
  • Intel Server Systems are on the S2D approval list!
    • The Intel Server System R2224WTTYS is an excellent platform
  • Promise Storage J5000 series JBODs just got on the Storage Spaces approved list.
    • We've had a long history with Promise and are looking forward to re-establishing that relationship.
  • We've started working with Mellanox for assistance with SMB Direct (RDMA) and RoCE.
  • 12Gb SAS in HBAs and JBODs rocks for storage
    • A 2-node SOFS cluster with JBOD gives 96Gbps of aggregate ultra-low latency SAS bandwidth per node (eight 12Gb SAS lanes: 8 x 12Gb = 96Gb)!
  • NVMe based storage (PCIe direct)
The list could go on and on as they come to mind. :)

Thank you all for your patience with the lack of posting lately. And, thank you all for your feedback and support over the years. It has been a privilege to get to know some of you and work with some of you as well.

We are most certainly looking forward to the many things we have coming down the pipe. 2015 is shaping up to be our best year ever with 2016 looking to build on that!

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book