Thursday, 7 June 2007

SBS - Hard Drive Partitioning Schemes

There doesn't seem to be a single guideline out there for how to partition the hard drive, or drives, for Small Business Server.

Do we RAID our drives? Most certainly! We need that extra level of redundancy.

Surely no one puts together a server anymore with only one hard disk, or with two disks and no redundancy between them?

Over the years, we have come up with a number of different hard drive setups depending on our clients' business size and data volume needs.

A small client, say 5 to 15 seats, with a small volume of data would be set up as follows (physical disk layout first, with partitioning sub-listed):

  • 2 x 320 GB RAID 1 array (non-hot swap)
    • Primary System Partition: C: (310 GB)
    • Swap and URL Cache Partition S: (10 GB)
  • 2 x 320 GB RAID 1 array (non-hot swap)
    • Data Partition I: (320 GB)
In the above setup, VSS snapshot data would be stored on the C: partition leaving the I: partition totally to client data.

We have clients with higher data volume demands, some growing as fast as 15 GB per month during their peak seasons.

For some of them, and for our larger firms in the 25-45 seat range, we would configure the following:

  • 2 x 320 GB RAID 1 array (hot swap)
    • Primary System Partition: C: (310 GB)
    • Swap and URL Cache Partition S: (10 GB)
  • 3 x 320 GB RAID 5 array - 640 GB usable (hot swap)
    • Data Partition I: (640 GB)
  • 1 x 320 GB Global Hot Spare (hot swap)
We keep all of the drives the same size for the principal reason of having the hot spare available for either of the arrays. That is why it is called a Global hot spare: it can be inserted into any array on the RAID controller. A hot spare can also be dedicated to one specific array.
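The usable capacities quoted in the lists above follow directly from the RAID level. A quick sketch of the math (drive sizes in GB; RAID 10 assumes an even drive count):

```python
def usable_gb(level, drives, size_gb):
    """Rough usable capacity for the RAID levels discussed in this post."""
    if level == 1:           # mirror: one drive's worth of space
        return size_gb
    if level == 5:           # one drive's worth lost to parity
        return (drives - 1) * size_gb
    if level == 10:          # striped mirrors: half the drives
        return (drives // 2) * size_gb
    raise ValueError(f"unhandled RAID level: {level}")

print(usable_gb(1, 2, 320))    # 2 x 320 GB RAID 1  -> 320
print(usable_gb(5, 3, 320))    # 3 x 320 GB RAID 5  -> 640
print(usable_gb(5, 5, 500))    # 5 x 500 GB RAID 5  -> 2000 (2.0 TB)
print(usable_gb(10, 4, 300))   # 4 x 300 GB RAID 10 -> 600
```

These figures are raw array capacity; formatted capacity will come in a bit lower.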

For the larger data volume clients:

  • 2 x 500 GB RAID 1 array (hot swap)
    • Primary System Partition: C: (480 GB)
    • Swap and URL Cache Partition S: (10 GB)
  • 5 x 500 GB RAID 5 array - 2.0 TB usable (hot swap)
    • Client Data Partition I: (500 GB)
  • 2 x 500 GB RAID 1 array (hot swap)
    • VSS & Server Data Partition J: (500 GB)
  • 1 x 500 GB Global Hot Spare (hot swap)
We put the swap file and ISA's URL cache (99.9% of our installs are Premium) on a separate partition to keep them from fragmenting the system C: partition. That keeps the system partition faster.

The same reasoning applies to putting the VSS (Volume Shadow Copy Service) snapshot files on a different drive/partition than the volume that is being shadow copied.
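On Windows Server 2003 the shadow storage association can be pointed at another volume with the vssadmin command line tool. A hedged sketch that builds the command for the layout above; the drive letters and the 30 GB cap are illustrative only, not a recommendation:

```python
def shadow_storage_cmd(for_vol, on_vol, max_size_gb):
    """Build the vssadmin command that hosts a volume's VSS snapshot
    storage on a different volume (Windows Server 2003 syntax)."""
    return (f"vssadmin add shadowstorage /For={for_vol}: "
            f"/On={on_vol}: /MaxSize={max_size_gb}GB")

# Shadow copies of the data volume I: stored on C:, capped at 30 GB:
print(shadow_storage_cmd("I", "C", 30))
```

Run the printed command in an elevated prompt on the server itself; check the result with `vssadmin list shadowstorage`.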

Relative to the other components that are being configured in a server, hard drive storage is cheap. Seagate's ES Series Enterprise SATA drives command a 10% premium over their desktop drives for extra reliability built in. That figure is a small price to pay for drives designed to be always on and serving huge chunks of data on a daily basis.

At this point, SCSI/SAS does not provide the cost-per-gigabyte ROI for our clients that SATA does. There are a few exceptions to that rule, though not many in our SMB environment.

Of course, your mileage will vary! ;)

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists

8 comments:

Anonymous said...

Do you put the Exchange Message Store and Log files on that C:\ partition?

Philip Elder SBS MVP said...

This post is way out of date! :)

I will update it, but what we are doing now is:
+ C: 100GB-150GB OS partition
+ Includes Exchange
+ S: 25-35GB SwapFile
+ L: 100GB+ NetworkData

We are only doing RAID 10 on our SBS installs.

SBS 2008 gets a minimum of 10K RPM SAS with most having 15K RPM SAS.

SBS 2003 gets a minimum of 10K RPM SAS.

All will be hot swap and all will have at least one hot spare available in the event of a failed member.

We keep Exchange on C:\ because we move most other content to L:, and because in disaster recovery situations it was better to have the Exchange databases in one spot.

Philip

Mike said...

Philip,

My plan for a new Dell R710 server is to virtualize two servers on this one box. I have five 300 GB drives, and plan to create a RAID 5 at the HW level, then create a child partition running SBS 2008 Premium (100 GB OS, 250 GB Exchange, 100 GB other), and for the companion Server 2008 (100 GB OS, 550 GB data, 100 GB other). Does this seem reasonable to you?

Philip Elder SBS MVP said...

Mike,

For better I/O performance, RAID 10 is the best option. Especially when it comes to the higher I/O demands that virtualized servers/workstations place upon the same I/O subsystem.

Philip

Mike said...

Philip,

Thanks for the reply. Could I do this with 5 drives (300 GB each), or would I need a 6th drive to achieve my carving-up plan (SBS VM gets 100 GB OS, 250 GB Exchange, 100 GB misc, and Server 08 gets 100 GB OS, 500 GB data, 150 GB misc)?
If you are mirroring as part of the RAID 10 and I have 300 GB disks, and my server needs more than 300 GB, then I need another disk, unless you can set up the RAID 10 at the HW level and virtually slice up the space completely independent of how the HW is laid out, which I'm not sure about?

Mike

Philip Elder SBS MVP said...

Mike,

RAID 10 would give you 600GB total plus 1 hot spare.

SBS VM:
OS: 100GB
Swap: 25GB
Data: 225GB (incl. Exchange)

Server VM:
OS: 75GB
Swap: 25GB
Data: 150GB
Total storage: 600GB

How things get partitioned out depends on the role for the second server and its data requirements.

The above is how we would typically divvy up the storage for the two VMs. We factor 5 GB of storage per user for Exchange.
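That 5 GB-per-user factor makes the SBS VM sizing easy to sketch. The OS, swap, and non-Exchange data figures below follow the example split above and are illustrative, not prescriptive:

```python
def sbs_vm_storage(users, os_gb=100, swap_gb=25, other_data_gb=75):
    """Back-of-envelope SBS VM sizing: 5 GB of Exchange storage per
    user, plus OS, swap, and other data allowances."""
    exchange_gb = users * 5
    return os_gb + swap_gb + exchange_gb + other_data_gb

# 30 users: 100 + 25 + 150 + 75 = 350 GB, matching the SBS VM above
print(sbs_vm_storage(30))
```

The 225 GB data partition in the example is that 150 GB of Exchange plus 75 GB of other data.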

Philip

Philip Elder SBS MVP said...

Mike,

For clarity's sake, we would hardware RAID the 600GB RAID 10 array on the 4 drives and set the fifth as hot spare.

We would then create 1 VHD per VM and partition that VHD according to the above scheme for each VM. Since we are not dealing with hard drive setups on physical servers with extreme I/O needs, we would run with this configuration.

If there was a need for high I/O then we would configure things differently.

Philip

Mike said...

Philip,

That makes sense. I was getting hung up on whether or not VMs were tied to RAID 1 from a space-requirement standpoint. I understand it better now. Thanks for the advice.

Mike