All of our Hyper-V clusters are based on Hyper-V Server 2008 R2 RTM/SP1 as of this writing.
Cluster Storage Setup
The following steps are an overview of what we do after configuring all of the hardware and the first three LUNs on our centralized storage.
- 1.51GB for Quorum
- 106.52GB for Hyper-V common settings (RAM/Node * #Nodes) + 10GB
- Add more space for snapshots if they will be used.
- NOTE: We use the .xxGB digits to denote _which_ LUN we are working with in the Hyper-V cluster’s Disk Management and subsequently in Cluster Shared Volumes management. With eight 80GB LUNs spread across two storage devices, it would otherwise be tough to figure out which was which.
- So, for an Intel Modular Server with built-in SAN plus a Promise VTrak RAID Subsystem, we would do the following:
- 1.51GB for Quorum.
- 106.52GB for Common Settings.
- We reserve xx.5xGB for system storage needs.
- 145.11GB for the first LUN/VHD located on the IMS SAN.
- xx.1xGB-xx.4xGB for each additional LUN/VHD on the IMS SAN.
- 175.61GB for the first LUN/VHD located on the Promise VTrak.
- xx.6xGB+ for each additional LUN/VHD on the VTrak.
When we open a node’s Disk Management, each partition’s size tells us exactly which LUN we are looking at.
Using this storage size configuration process makes managing SAN/DAS based storage so much easier in the long run.
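The sizing convention above is easy to script when carving LUNs. A quick Python sketch of the idea (the helper name and the device-code mapping are our own illustration, not a tool we use):

```python
# Sketch of the LUN-size naming convention described above (illustrative only).
# The whole-GB part is the real capacity; the two-digit fraction encodes which
# storage device and which LUN it is, so Disk Management shows which is which.

def lun_size_label(base_gb, device_code, lun_index):
    """Return a size label like '145.11GB'.

    device_code: 5 = reserved for system storage needs,
                 1-4 = IMS SAN, 6+ = Promise VTrak.
    lun_index:   distinguishes LUNs on the same device.
    """
    return f"{base_gb}.{device_code}{lun_index}GB"

# Examples mirroring the list above:
print(lun_size_label(1, 5, 1))    # 1.51GB   -> Quorum
print(lun_size_label(106, 5, 2))  # 106.52GB -> Common Settings
print(lun_size_label(145, 1, 1))  # 145.11GB -> first LUN on the IMS SAN
print(lun_size_label(175, 6, 1))  # 175.61GB -> first LUN on the VTrak
```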
Intel NIC Teaming
First, see our blog post on teaming using Intel's prosetcl.exe (also relevant for the GUI):
Most of our Hyper-V deployments are Server Core based, with standalone Hyper-V hosts for smaller deployments being the exception to the rule.
Host Setup Process
Our host setup process:
- Update firmware for _all_ components on the host first.
- Set up a bootable USB flash drive with the OS of choice.
- Copy the RAID driver and the current Intel PROSet NIC driver (16.5 as of this writing) to the flash drive.
- Copy the tools listed in the blog post linked at the end of this post to the flash drive.
- Boot into the RAID controller’s BIOS and set up the RAID array.
- Reboot into the BIOS and verify boot order with the RAID array on top.
- Reboot into WinPE and load the RAID driver _prior_ to setting up a partition for the OS.
- Once the OS is installed, install the necessary drivers:
- Install: pnputil -i -a driver.inf
- Delete: sc query type= driver to find the driver’s service name, then sc delete <ServiceName>
- System management tools
- Note that most server manufacturers have NIC MAC addresses listed in the BIOS.
- We have at least two independent Intel NICs in the server with a minimum of 4 ports.
- Teams are created between ports on separate NICs to create redundancy.
- prosetcl.exe commands are listed in the ReadMe.txt
- NOTE: Prosetcl.exe: In Server Core, a TAGGED VLAN must be created before the UNTAGGED VLAN at the command line. This may or may not be the case in the GUI.
- CMD: netsh interface ipv4 set dnsservers name="Local Area Connection #" source=dhcp register=none
- This eliminates any IPs being associated with the Hyper-V host or cluster nodes that are not on the management subnet.
- wmic.exe computersystem where name="%computername%" set AutomaticManagedPagefile=False
- wmic.exe pagefileset create name="S:\pagefile.sys"
- wmic.exe pagefileset where name="S:\\pagefile.sys" set InitialSize=42950,MaximumSize=87550
- wmic.exe pagefileset where name="C:\\pagefile.sys" delete
- Create and share a folder on the Server Core host to copy the needed files over.
- dism /online /enable-feature /featurename:MultipathIo
- The feature name _is_ case sensitive.
- Note that any non-management (non-domain) subnet will not answer a PING until after the Failover Clustering feature is enabled.
- Because the Heartbeat subnet and the others fall under the Public profile in Windows Firewall with Advanced Security, no PING will get through until that feature is enabled.
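When provisioning several hosts, the pagefile commands in the list above can be generated by a script rather than typed each time. A minimal Python sketch (the helper name is hypothetical; sizes are in MB, as wmic expects):

```python
# Hypothetical helper that emits the wmic pagefile command sequence shown in the
# host setup steps for a given target drive and pagefile sizes.
# Note: wmic "where" clauses are WQL, so backslashes in the path are doubled
# there, but not in the "create" call.

def pagefile_move_commands(drive, initial_mb, maximum_mb):
    return [
        'wmic.exe computersystem where name="%computername%" '
        'set AutomaticManagedPagefile=False',
        f'wmic.exe pagefileset create name="{drive}:\\pagefile.sys"',
        f'wmic.exe pagefileset where name="{drive}:\\\\pagefile.sys" '
        f'set InitialSize={initial_mb},MaximumSize={maximum_mb}',
        'wmic.exe pagefileset where name="C:\\\\pagefile.sys" delete',
    ]

for cmd in pagefile_move_commands("S", 42950, 87550):
    print(cmd)
```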
Cluster Storage Considerations
If standing up a cluster, make sure to have the Quorum LUN (it will not be a CSV) and the common settings LUN (it will be a CSV) set up on the SAN/DAS. We also set up one LUN for our first clustered VM partition.
- On NODE 1:
- Open Disk Management.
- Refresh Disks.
- Note Disk # and initialize the LUNs.
- Format NTFS.
- We name our partitions:
- etc . . .
- Set all LUNs to Offline.
- On NODE 2+:
- Open Disk Management.
- Refresh Disks _after_ cancelling the request to Initialize.
- Verify Disk # and partition names.
- If the Disk # does not match up with NODE 1 then reboot.
- Set all LUNs to Offline.
We use fixed VHDs by default on dedicated LUNs for each VHD required. We use VHDTool to create the fixed VHDs on the LUNs. We _always_ initialize the LUNs with a full zero set.
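One sanity check when sizing a dedicated LUN for a fixed VHD: per the VHD image format specification, a fixed VHD file is the raw data size plus a 512-byte footer. A quick Python sketch (the helper name is ours):

```python
# A fixed VHD consists of the raw data followed by a 512-byte footer (per the
# VHD image format specification), so the file on the LUN is 512 bytes larger
# than the virtual disk's nominal capacity.

FOOTER_BYTES = 512

def fixed_vhd_file_size(data_size_bytes):
    """Expected on-disk file size for a fixed VHD of the given capacity."""
    return data_size_bytes + FOOTER_BYTES

gb = 1024 ** 3
print(fixed_vhd_file_size(100 * gb))  # 107374182912 bytes for a 100GB VHD
```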
Once we have our nodes, networking, and storage set up, we run the Cluster Validation Wizard. If it completes successfully, we stand the cluster up using the link found in the wizard.
From there, enable Cluster Shared Volumes and add the required storage.
Depending on the configuration, all of the LUNs that will be required can now be configured on the shared SAN/DAS. Use the above method to initialize, format, name, and then set them to Offline on NODE 1. Then follow the steps required on each additional node.
Some helpful tools:
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book