Thursday 9 November 2017

Intel Server System R2224WFTZS Integration & Server Building Thoughts

We have a brand new Intel Server System R2224WFTZS that is the foundation for a mid- to high-performance virtualization platform.

[Image: Intel Server System R2224WFTZS 2U]

Below it sits one of our older lab servers, an Intel Server System SR2625URLX 2U. Note the difference in the drive caddies.

That change is welcome as the caddy no longer requires a screwdriver to set the drive in place:

[Image: Intel 2.5" Tool-less Drive Caddy]

What that means is the time required to get 24 drives installed in the caddies went from half an hour or more to five or ten minutes. That, in our opinion, is a great leap ahead!

The processors for this setup are Intel Xeon Gold 6134s: eight cores running at 3.2GHz with a Turbo Boost peak of 3.7GHz. We chose the Gold 6134 as a starting place because most of the other Xeon Scalable CPUs have more than eight cores, which pushes up the cost of licensing Microsoft Windows Server Standard or Datacenter.
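For the curious, here's the back-of-the-napkin licensing math behind that choice. The sketch below assumes the Windows Server 2016 per-core model (core licenses sold in two-core packs, with a minimum of 8 core licenses per processor and 16 per server); confirm against your own licensing agreement.

```python
# Back-of-the-napkin Windows Server 2016 per-core licensing math.
# Assumption: core licenses come in 2-core packs, with a minimum of
# 8 core licenses per processor and 16 core licenses per server.

def core_licenses_required(sockets: int, cores_per_socket: int) -> int:
    """Core licenses needed to cover one physical host."""
    per_socket = max(cores_per_socket, 8)   # 8-core minimum per processor
    return max(sockets * per_socket, 16)    # 16-core minimum per server

for cores in (8, 12, 18):
    licenses = core_licenses_required(sockets=2, cores_per_socket=cores)
    print(f"2x {cores}-core CPUs: {licenses} core licenses "
          f"({licenses // 2} two-core packs)")
```

A dual Gold 6134 host lands exactly on the 16-core baseline; every core beyond eight per socket adds licensing cost on top of the hardware.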

[Image: Intel Xeon Gold 6134, Socket, Heatsink, and Canadian Loonie $1 Coin]

The new processors are huge!

The difference in scale next to the E3-1200 series and E5-2600 series parts is striking. The jump in size reminds me of the Pentium Pro's girth next to the lesser desktop/server processors of its day.

[Image: Intel Xeon Processor E3-1270 sits on the Intel Xeon Gold 6134]

The server is nearly complete.

[Image: Intel Server System R2224WFTZS Build Complete]

Bill of Materials

In this setup the server's Bill of Materials (BoM) is as follows, with a rough storage capacity sketch after the list:

  • (2) Intel Xeon Gold 6134
  • 384GB via 12x 32GB Crucial DDR4 LRDIMM
  • Intel Integrated RAID Module RMSP3CD080F with 7 Series Flash Cache Backup
  • Intel 12Gbps RAID Expander Module RES3TV360
  • (2) 150GB Intel DC S3520 M.2 SSDs for OS
  • (5) 1.9TB Intel DC S4600 SATA SSDs for high IOPS tier
  • (19) 1.8TB Seagate 10K SAS for low to mid IOPS tier
  • Second Power Supply, TPM v2, and RMM4 Module
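
For a rough feel of what those two storage tiers give us on the capacity side, here's a quick sketch. The RAID levels and hot spare below are illustrative assumptions only; we'll settle on the actual array layout during testing.

```python
# Rough raw vs. usable capacity for the two storage tiers in the BoM above.
# RAID 5 on the SSD tier and RAID 6 plus a hot spare on the SAS tier are
# assumptions for illustration, not the final layout for this build.

def usable_tb(drives: int, size_tb: float, parity_drives: int, hot_spares: int = 0) -> float:
    """Usable capacity in TB after parity and hot spare overhead."""
    return (drives - parity_drives - hot_spares) * size_tb

ssd_raw = 5 * 1.9    # high IOPS tier: Intel DC S4600 SATA SSDs
sas_raw = 19 * 1.8   # low to mid IOPS tier: Seagate 10K SAS

ssd_usable = usable_tb(5, 1.9, parity_drives=1)                  # assumed RAID 5
sas_usable = usable_tb(19, 1.8, parity_drives=2, hot_spares=1)   # assumed RAID 6 + spare

print(f"SSD tier: {ssd_raw:.1f} TB raw, ~{ssd_usable:.1f} TB usable")
print(f"SAS tier: {sas_raw:.1f} TB raw, ~{sas_usable:.1f} TB usable")
```

Whatever the final layout ends up being, the split gives us a small, fast SSD array for IOPS-hungry workloads and a large SAS array for everything else.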

It's important to note that when setting up a RAID controller, as opposed to a Host Bus Adapter (HBA) that does JBOD only, we require the flash cache backup module. In this particular unit one also needs to order the mounting bracket for it: AWTAUXBBUBKT

I'm not sure why we missed that, but we've updated our build guides to reflect the need for it going forward.

One other point of order is that the rear 2.5" hot swap drive bay kit (A2UREARHSDK2) does not come installed from the factory in the R2224WFTZS as it did in the R2224WTTYS. I'm still not sold on M.2 for the host operating system as the modules are not hot swap capable. That means if one dies we have to down the node in order to change it. With the rear hot swap bay we can swap out the 2.5" SATA SSD that's being used for the host OS without taking the node down.

For the second set of two 10GbE ports we used an Intel X540-T2 PCIe add-in card as the I/O modules are not in the distribution channel as of this writing.

NOTE: One requires a T30 Torx driver for the heatsinks! After installing the processor please make sure to start all four nuts prior to tightening. As a suggestion, from there snug each one up gradually, starting with the two middle nuts and then the outer nuts, similar to the process for installing a head on an engine block. This provides an even amount of pressure from the middle of the heatsink outwards.

Firmware Notes

Finally, make sure to update the firmware on all components before installing an operating system. There are some key fixes in the motherboard firmware updates as of this writing (BIOS 00.01.0009 ReadMe). Please make sure to read through the ReadMe to verify any caveats associated with the update process or the updates themselves.

Next up on our build process will be to update all firmware in the system, install the host operating system and drivers, and finally run a burn-in process. From there, we'll run some tests to get a feel for the IOPS and throughput we can expect from the two RAID arrays.
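
As a preview of the kind of testing we mean, here is a minimal sketch of a first-pass synthetic run. It assumes Microsoft's free DiskSpd tool is on the box and that T: is a volume on the array under test; the file path, size, and I/O mix are illustrative only, not our final test plan.

```python
# Quick first-pass IOPS check using Microsoft's DiskSpd (assumed to be in PATH).
# T:\ is a hypothetical volume sitting on the RAID array under test.
import subprocess

TEST_FILE = r"T:\diskspd-test.dat"

cmd = [
    "diskspd.exe",
    "-c50G",   # create a 50GB test file
    "-b4K",    # 4K block size
    "-r",      # random I/O
    "-w30",    # 30% writes / 70% reads
    "-t8",     # 8 threads
    "-o32",    # 32 outstanding I/Os per thread
    "-d60",    # run for 60 seconds
    "-Sh",     # disable software and hardware caching
    "-L",      # capture latency statistics
    TEST_FILE,
]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)   # full IOPS, throughput, and latency report from DiskSpd
```

From the report we mostly care about total IOPS, MB/s, and the latency percentiles for each array.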

Why Build Servers?

That's got to be the burning question on some minds. Why?

The long and the short of it is because we've been doing so for so many years it's a hard habit to kick. ;)

Actually, the reality is much more mundane. We continue to be actively involved in building out our own server solutions for a number of reasons:

  • We can fine tune our solutions to specific customer needs
    • Need more IOPS? We can do that
    • Need more throughput? We can do that
    • Need a blend of the two, as is the case here? We can do that too.
  • Direct contact with firmware issues, interoperability, and stability
    • Making the various firmware bits play nice together can be a challenge
  • Driver issues, interoperability, and stability
    • Drivers can be quite finicky about what's in the box with them
  • Hardware interoperability
    • Our parts bin is chock full of parts that refused to work with one another
    • On the other hand our solution sets are known good configurations
  • Cost
    • Our server systems are a fraction of the cost of Tier 1
  • Overall system configuration
    • As Designed Stability out of the box
  • He said, she said
    • Since we test our systems extensively prior to deploying we know them well
    • Software vendors that point the finger at the hardware have no leg to stand on, as we have plenty of charts and graphs
    • Performance issues are easier to pinpoint in a software vendor's product
    • We remove the guesswork around an already configured Tier 1 box

Business Case

The business case is fairly simple: there are _a lot_ of folks out there that do not want to cloud their business. We help those customers with a highly available solution set and our business cloud to give them all of the cloud goodness while keeping their data on-premises.

We also help I.T. Professional Shops that may not have the skill set on board but have customers needing High Availability and a cloud-like experience with the solution deployed on-premises.

For those customers that do want to cloud their business we have a solution set for the Small to Medium I.T. Shops that want to provide multi-tenant solutions in their own data centres. We provide the solution and backend support at a very reasonable cost while they spend their time selling their cloud.

All in all, we've found ourselves a number of different great little niches for our highly available solutions (clusters) over the last few years.

Thanks for reading! :)

Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
Our Web Site
Our Cloud Service
Twitter: @MPECSInc
