Thursday, 26 June 2014

CRITICAL: Seagate 1200 SSD Firmware Update Required for 2012 R2 Storage Spaces

We got hit with this today:

[Screenshot: Get-PhysicalDisk output showing the SSDs in a degraded/"Starting" state]

  • Get-PhysicalDisk

This was our second run at standing up this Scale-Out File Server cluster with things not working as expected, so we began to dig in.

During the Space creation process the Volume Format phase hit an error. We jumped into PowerShell to poll the disks right then and saw the above.
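
A quick way to pull out just the problem disks (a minimal sketch; the exact columns shown will vary by platform):

    # List any physical disks that are not reporting Healthy
    Get-PhysicalDisk |
        Where-Object { $_.HealthStatus -ne 'Healthy' } |
        Format-Table FriendlyName, MediaType, OperationalStatus, HealthStatus -AutoSize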

The Disks:

[Screenshot: SSD model and firmware version listing]

  • Get-PhysicalDisk | where MediaType -eq SSD | ft Model,FirmwareVersion -AutoSize

The PowerShell to get the above information along with the serial numbers:

  • Get-PhysicalDisk | where MediaType -eq SSD | ft Model,FirmwareVersion,SerialNumber -AutoSize

Go to Seagate's Support site and choose the Download Finder.

[Screenshot: Seagate Download Finder]

Enter the serial number (just the first set of digits before the run of four zeros, as the number was repeated twice for us) and your country, then under Certificate click on the Click here link and _not_ the Email Me link.

[Screenshot: Seagate download page with the firmware download link highlighted]

The highlighted link downloads the actual firmware ZIP file.

A copy of Seagate's SeaTools is required to update the drive's firmware.

If the SSDs are in a cluster setting, as they are here, make sure to properly drain the nodes and shut down the cluster (TechNet). Then shut down all nodes but one.

Run the firmware update and reboot that node. Bring the Cluster back online and fire up the other nodes one by one.
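
A rough sketch of that sequence in PowerShell (node names here are placeholders; the firmware update itself is done with SeaTools on the remaining node):

    # Drain the clustered roles off the other nodes (repeat for each additional node),
    # then stop the cluster cleanly
    Suspend-ClusterNode -Name "Node2" -Drain -Wait
    Stop-Cluster -Force

    # ... shut down all nodes but one, run the SeaTools firmware update, reboot ...

    # Bring the cluster back online and resume the paused nodes one at a time
    Start-Cluster
    Resume-ClusterNode -Name "Node2" -Failback Immediate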

NOTE: We have _not_ tested this firmware yet.

A Microsoft Forum post pointed us in the right direction: Clustered Storage space degraded and SSD disks "starting".

Remember how we have mentioned ad nauseam on this blog that we are very careful about testing our deployments? Well, in this case we were a part of the planning phases for this cluster but did not have access to these particular SSDs prior to deploying in a Michigan Data Centre.

This situation sure brings home the point that we _always_ need to test our setups before deploying them at client sites or on behalf of our clients.

EDIT: Brain was five steps ahead of fingers so the "Firmware Update" in the title never made it into the original post! :)

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Chef de partie in the SMBKitchen ASP Project
Find out more at
Third Tier: Enterprise Solutions for Small Business

Monday, 23 June 2014

Third Tier Brain Explosion: Our New SBS!

I will be presenting along with our Third Tier team as a Pre-Day Brain Explosion (BE) for the GFIMax conference in Orlando, Florida on the afternoon of September 7th and the morning of the 8th.

We are splitting things into two days both to give folks a chance to digest all of the content and to allow BE attendees the chance to meet and chat with us.

Cost for the Pre-Day BE is a very reasonable $99.

Registration for the event can be done on the Third Tier Portal under the Purchase Support option (windowed scroll bar in there that's a bit difficult to see) or on GFI's site (Pre-Day Landing Page).

The content I will be presenting covers the technical aspects of delivering our new SBS.

What is that you might ask?

S = Small

B = Business

S = Solution

It is the complete solution set that we have been deploying to our client sites for the last six months to a year.

  • RWW/RWA Replacement
    • RDWeb and RDGateway
  • Exchange/OWA/EAS
    • Exchange 2013 CU4 (SP1)
  • SharePoint
    • SharePoint Foundation 2013
  • Remote Desktop Services
    • Remote Desktop Session Host and RemoteApps
    • RD Endpoint access

Our last ASP SMB Kitchen subscriber chat was spent walking through SBS and how it is essentially seamless to our clients. One WAN IP is all that is required just as it was with Small Business Server.

The beauty of the solution is the simplicity with which things change for our end users: Virtually not at all. :)

We supply an on-premises solution that gives them everything Small Business Server has given them. Plus, we clearly demonstrate that our product is on par with or better than any Cloud-based solution out there, just as SBS has been for small businesses for the last ten or more years.

Looking for that on-prem replacement? Please subscribe to our ASP Project as our monthly chats and content supplied by our team will facilitate that search.

Want the technical pearls to deploying this kind of solution for your clients? Then please do register for the Brain Explosion and wear a brain bucket! :)

Thanks for reading.

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Chef de partie in the SMBKitchen ASP Project
Find out more at
Third Tier: Enterprise Solutions for Small Business

Thursday, 19 June 2014

Cluster Node BIOS and Hardware Configuration Tips

Here are some tips for configuring the nodes in a Hyper-V or Scale-Out File Server failover cluster.

Staggered Start

[Screenshot: BIOS staggered start setting]

  • Stagger the node start times to give storage enough time to come online

One of the important tests to run when working with a new JBOD unit or storage shelf is to time how long the unit takes from power-up to production ready.

In the case of the Intel JBOD2224S2DP with 24 Seagate Savvio spindles installed, the staggered start of each disk group actually takes a bit of time to complete. So, we set our Grizzly Pass servers to a start delay of 150 seconds and up for each storage node and 210 seconds and up for each Hyper-V node.

Processor C States

[Screenshot: Processor C States set to Disabled]

  • Processor C States are set to Disabled

Why the C States interfere with storage access and transfer abilities is a bit of a mystery, but they do need to be turned off.

Also, take careful notes of all BIOS settings configured on one node and make sure every other node's BIOS is set identically.
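
One quick consistency check that can be run from the OS afterwards (a sketch; node names are placeholders, and this only confirms the BIOS version, not the individual settings):

    # Compare the reported BIOS version across all nodes
    Get-CimInstance -ClassName Win32_BIOS -ComputerName "Node1", "Node2", "Node3" |
        Format-Table PSComputerName, Manufacturer, SMBIOSBIOSVersion, ReleaseDate -AutoSize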

Performance Setting

Pedal to the metal:

[Screenshot: BIOS performance profile setting]

Make sure the performance profiles are set to maximum!

We need all available power at all times.

PXE Boot

We suggest turning PXE Boot and the NIC's option ROM off.

[Screenshot: NIC Option ROM / PXE Boot settings]

Confirm in the Boot Order manager that there are no NICs available for boot. If any show up there make sure to disable them.

While in the NIC configuration settings one can make a note of the NIC MAC addresses to help with configuration further on in the node setup process.
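
The same information can also be pulled once the OS is on the node; a minimal sketch:

    # Record each NIC's MAC address for the node configuration notes
    Get-NetAdapter |
        Sort-Object Name |
        Format-Table Name, InterfaceDescription, MacAddress, LinkSpeed -AutoSize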

Reboot and OS Boot Checks

We've seen some issues with OS Boot Watchdog Timers:

[Screenshot: OS Boot Watchdog Timer setting]

Most modern BIOS firmware should be able to sense that Windows Server 2012 R2 has booted and settled into its working role. But, we have seen cases in older BIOS versions where the server would mysteriously reboot after 10 minutes (we timed it after noticing that the reboots were happening close to the same time).

Boot Options

And finally, for now we are not enabling EFI Optimized Boot options on our nodes:

[Screenshot: EFI Optimized Boot setting]

We need to run some tests with 2012 R2 U1 before we commit to the new setup in production.

Make sure to disable the USB Boot Priority or the OS Load flash drive will be booted from on node reboots!

Cluster Node Configuration

When it comes to setting up the node specifications one needs to choose carefully.

This again is one area where Intel Server Systems outshine Tier 1.

The Intel Server System R1208JP4OC is an Intel Xeon Processor E5-2600 v1/v2 series 1U server with a single socket. The big plus to this server is the ability to have two SAS HBAs and two 10GbE or 56Gb InfiniBand cards installed.

As far as we know, no Tier 1 single socket 1U server shares this ability. So, we get a really well performing server at an excellent entry level price point.

We make sure to design our clusters around their intended purpose at the storage, Scale-Out File Server, and Hyper-V levels.

With the tools included in Windows Server 2012 RTM/R2 we have an amazing ability to build anything from a single asymmetric cluster (2 nodes and 1 JBOD) at a very attractive price, which fits in really well at the SMB level (12-13 seats plus - yes, we sell clusters into SMB), right up to a million-plus IOPS transaction oriented cluster.

Remember that consistency in hardware, firmware, settings, and drivers is the key to cluster performance and stability.

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Chef de partie in the SMBKitchen ASP Project
Find out more at
Third Tier: Enterprise Solutions for Small Business

Wednesday, 18 June 2014

SOFS, Storage Spaces, and a Big Thanks to the Intel Technology Provider Program!

What was once the Intel Channel Program, and is now ITP, has been very generous to us over the years.

We make no bones about our support of both the program and the excellent Intel Server Systems and Intel Storage Systems that we deploy on a regular basis.

With the introduction of the Grizzly Pass product line we received a product that was bang-on with Dell, HP, and IBM feature for feature and construction quality for construction quality, with two very significant advantages for the Intel product:

  1. Flexibility
    • We can utilize an extensive tested hardware list to custom configure our server and storage systems to order way beyond what Tier 1 offers even in their Build-to-Order programs.
    • We are able to tune our configurations to very specific performance needs.
  2. Support
    • The folks on the other end of the support line are second to none. Some of the folks we have worked with have been our contact for cases over the last ten years or more! These folks know their stuff.
    • Advanced no questions asked warranty replacement for almost all products is also a huge asset.

This is the product stack we have been working on lately for our Proof-of-Concept testing for Scale-Out File Server failover clusters, Hyper-V over SMB via 10GbE provided for by two NETGEAR XS712T 10GbE switches, and Storage Spaces performance testing.

[Photo: the Proof-of-Concept server and JBOD stack]

The top two servers are Intel R1208JP4OC 1U single socket servers supporting the Intel Xeon Processor E5-2600 v1/v2 series CPUs. They have dual Intel X540T2 NICs via I/O Module and PCIe add-in card along with a pair of Intel RS25GB008 SAS HBAs to provide connectivity to the Intel JBODs at the bottom.

Two of the Intel Server System R2208GZ4GC 2U dual socket servers were here for the last couple of months on loan from the Intel Technology Provider program. We have been using them extensively in our SOFS and Storage Spaces testing along with the other four servers that are our own.

One of the Intel Storage System JBOD2224S2DP units in the above picture is a seed unit provided to us by ITP as we are planning on utilizing this unit for our Data Centre deployments. The other two were purchased through Canadian distribution. Currently two are in a dedicated use configuration with the third to be used to test enclosure resilience in a Storage Spaces 3-Way Mirror configuration.

We have been acquiring HGST SAS SSDs in the form of first and second generation units with an aim to get into 12Gb SAS at some point down the road. We still have a few more first and second generation SSDs to go to reach our goal of 24 units total.

The second JBOD has 24 Seagate Savvio 10K SAS spindles that will be worked on in our next round of testing.

Our current HGST SAS SSD based IOPS testing average is about 375K on an eight SSD set configured as a Storage Spaces Simple space (similar to RAID 0):

[Screenshot: Iometer results for the eight SSD Simple Space]
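
For reference, the layout behind a number like that can be confirmed with something along these lines (a sketch; friendly names will differ per setup):

    # Confirm the resiliency setting, column count, and interleave of the test Space
    Get-VirtualDisk |
        Format-Table FriendlyName, ResiliencySettingName, NumberOfColumns, Interleave, Size -AutoSize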

We have designs on the board for providing enclosure resilient solutions that run into the millions of IOPS. As we move through our PoC testing we will continue to publish our results here.

We are currently working with Iometer for our baseline and PoC testing. SQLIO will also be utilized once we get comfortable with performance behaviours in our storage setups to fine tune things for SQL deployments.

Again, thanks to Scott P. and the Intel Technology Provider program for all of your assistance over the years. It is greatly appreciated. :)

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Chef de partie in the SMBKitchen ASP Project
Find out more at
Third Tier: Enterprise Solutions for Small Business

Monday, 9 June 2014

Storage Configuration: Know Your Workloads for IOPS or Throughput

Here we have a practical example of how devastating a poorly configured disk subsystem can be.

[Screenshot: Iometer results showing ~45K IOPS]

The above was one of the first Iometer test runs we did on our Storage Spaces setup. The above 45K IOPS was running on 17, yes seventeen, 100GB SSD400S.a HGST SAS SSDs.

Obviously the configuration was just whacked. :(

Imagine the surprise and disappointment of supplying a $100K SAN and ending up with the above results once the unit was put into production and the client was complaining that things were not happening anywhere near as fast as expected.
What we are discovering is that tuning a storage subsystem is an art.

There are many factors to keep in mind, from the types of workloads that will be running on the disk subsystem right through to the hardware driving it all.

After running a large number of tests using Iometer, and with some significant input from fellow MVP Tim Barrett, we are beginning to gain some insight into how to configure things for a given workload.

This is a snip taken of a Simple Storage Space utilizing _just two_ 100GB HGST SSD400S.a SAS SSDs (same disks as above):

[Screenshot: Iometer results showing ~56K IOPS]

Note how we are now running at 56K IOPS. :)
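
For illustration, a two disk Simple Space along those lines could be stood up something like this (a sketch only; the pool name is a placeholder, and the column count and interleave need to be tuned to the workload rather than copied):

    # Pool two SSDs and carve a Simple (striped) Space out of them
    $disks = Get-PhysicalDisk -CanPool $true | Where-Object MediaType -eq 'SSD' | Select-Object -First 2

    New-StoragePool -FriendlyName "SSDPool" -StorageSubSystemFriendlyName "*Spaces*" -PhysicalDisks $disks

    New-VirtualDisk -StoragePoolFriendlyName "SSDPool" -FriendlyName "SimpleSSD" `
        -ResiliencySettingName Simple -NumberOfColumns 2 -Interleave 65536 `
        -UseMaximumSize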

Microsoft has an awesome, in-depth document on setting things up for Storage Spaces performance here:

We suggest firing the above article into OneNote for later reference as it will prove invaluable in figuring out the basics of configuring a Storage Spaces disk subsystem. It can also provide a good frame of reference for storage performance in general.

Our goal for the Proof-of-Concept testing we are doing is around 1M IOPS.

Given what we are seeing so far we will hopefully end up running at about 650K to 750K IOPS! That's not too shabby for our "commodity hardware" setup. :)

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Chef de partie in the SMBKitchen ASP Project
Find out more at
Third Tier: Enterprise Solutions for Small Business

Thursday, 5 June 2014

Cluster Starter Labs: Hyper-V, Storage Spaces, and Scale-Out File Server

The following are a few ways to go about setting up a lab environment to test out various Hyper-V and Scale-Out File Server Clusters that utilize Storage Spaces to tie in the storage.

Asymmetric Hyper-V Cluster

  • (2) Hyper-V Nodes with single SAS HBA
  • (1) Dual Port SAS JBOD (must support SES-3)

In the above configuration we set up the node OS roles and then form the cluster. Once the cluster is up we can import our uninitialized shared storage into Cluster Disks and then move them over to Cluster Shared Volumes.

In this scenario one should split the storage up three ways:

  1. 1GB-2GB for the Witness Disk
  2. 49.9% CSV 0
  3. 49.9% CSV 1

Once the virtual disks have been set up in Storage Spaces we run the quorum configuration wizard to set up the witness disk.

We use two CSVs in this setup so as to assign 50% of the available storage to each node, which shares the I/O load. Keep this in mind when looking to deploy this type of cluster into a client setting, along with the need to make sure all paths between the nodes and the disks are redundant (dual SAS HBAs and a dual expander/controller JBOD).
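
A very rough PowerShell outline of that carve-up (a sketch only: the pool, disk, and cluster resource names plus the sizes and resiliency setting are placeholders, and the new disks still need to be partitioned and formatted before VMs land on them):

    # Pool the shared JBOD disks (creating the pool through Failover Cluster Manager
    # instead will make it a clustered pool automatically)
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "LabPool" -StorageSubSystemFriendlyName "*Spaces*" -PhysicalDisks $disks

    # Witness disk plus two equally sized data disks (stand-ins for the 1GB-2GB / 49.9% / 49.9% split)
    New-VirtualDisk -StoragePoolFriendlyName "LabPool" -FriendlyName "Witness" -ResiliencySettingName Mirror -Size 1GB
    New-VirtualDisk -StoragePoolFriendlyName "LabPool" -FriendlyName "CSV0" -ResiliencySettingName Mirror -Size 500GB
    New-VirtualDisk -StoragePoolFriendlyName "LabPool" -FriendlyName "CSV1" -ResiliencySettingName Mirror -Size 500GB

    # Hand the new disks to the cluster, promote the two data disks to CSVs,
    # then point the quorum at the small witness disk
    Get-ClusterAvailableDisk | Add-ClusterDisk
    Add-ClusterSharedVolume -Name "Cluster Disk 2"
    Add-ClusterSharedVolume -Name "Cluster Disk 3"
    Set-ClusterQuorum -DiskWitness "Cluster Disk 1"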

Symmetric Hyper-V Cluster with Scale-Out File Services

  • (2) Scale-Out File Server Nodes with single SAS HBA
  • (1) Dual Port SAS JBOD
  • (2) Hyper-V Nodes

For this particular setup we configure our two storage nodes in a SOFS cluster and utilize Storage Spaces to deliver our shares for Hyper-V to access. We will have a witness share for the Hyper-V cluster and then at least one file share for our VHDX files, depending on how our storage is set up.
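
As a sketch of the share side (the role name, paths, and accounts are placeholders; matching NTFS permissions still need to be granted on the folders):

    # Add the Scale-Out File Server role to the storage cluster
    Add-ClusterScaleOutFileServerRole -Name "SOFS"

    # Continuously available shares for the Hyper-V cluster witness and the VHDX files
    New-SmbShare -Name "Witness" -Path "C:\ClusterStorage\Volume1\Shares\Witness" `
        -FullAccess 'DOMAIN\HV1$', 'DOMAIN\HV2$', 'DOMAIN\Hyper-V Admins' -ContinuouslyAvailable $true
    New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\Shares\VMs" `
        -FullAccess 'DOMAIN\HV1$', 'DOMAIN\HV2$', 'DOMAIN\Hyper-V Admins' -ContinuouslyAvailable $true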

Lab Hardware

The HP MicroServer would be one option for server nodes. Dell C1100 1U off-lease servers can be found on eBay for a song. Intel RS25GB008 or LSI 6Gb SAS Host Bus Adapters (HBAs) are also easily found.

For the JBOD one needs to make sure the unit supports the full complement of SAS commands being passed through to the disks. To run in a cluster, two SAS ports that can access all of the storage installed in the drive bays are mandatory.

The Intel JBOD2224S2DP (WSC SS Site) is an excellent unit to work with that compares feature for feature with the DataON, Quanta, and Dell JBODs now on the Windows Server Catalogue Storage Spaces list.

Some HGST UltraStar 100GB and 200GB SAS SSDs (SSD400 A and B Series) can be had via eBay every once in a while for SSD Tier and SSD Cache testing in Storage Spaces. We are running with the HGST product because it is a collaborative effort between Intel and HGST.

Storage Testing

For storage in the lab it is preferred to have at least 6 of the drives one would be using in production. With six drives we can run the following tests:
  • Single Drive IOPS and Throughput tests
    • Storage Spaces Simple
  • Dual Drive IOPS and Throughput tests
    • Storage Spaces Simple and Two-Way Mirror
  • Three Drive IOPS and Throughput tests
    • Storage Spaces Simple, Two-Way Mirror, and Three-Way Mirror
  • And so on, up to all six drives and beyond

There are a number of factors involved in storage testing. The main thing is to establish a baseline performance metric based on a single drive of each type.
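
One way to work through that test matrix without a lot of clicking (a sketch; the pool name is a placeholder, and three-way mirrors would add -PhysicalDiskRedundancy 2):

    # Create a test Space for each resiliency setting, test it, then tear it down
    foreach ($setting in 'Simple', 'Mirror') {
        New-VirtualDisk -StoragePoolFriendlyName "TestPool" `
            -FriendlyName "Test-$setting" `
            -ResiliencySettingName $setting `
            -UseMaximumSize
        # ... run the Iometer pass against the new Space and record the results ...
        Remove-VirtualDisk -FriendlyName "Test-$setting" -Confirm:$false
    }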

A really good, and in-depth, read on Storage Spaces performance:

And, the Microsoft Word document outlining the setup and the Iometer settings Microsoft used to achieve their impressive 1M IOPS Storage Spaces performance:

Our previous blog post on a lab setup with a few suggested hardware pieces:

Philip Elder
Microsoft Cluster MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Chef de partie in the SMBKitchen ASP Project
Find out more at
Third Tier: Enterprise Solutions for Small Business