Friday, 23 September 2011

How To: Set Up A Hyper-V Cluster Node or Standalone Server

All of our Hyper-V clusters are based on Hyper-V Server 2008 R2 RTM/SP1 as of this writing.

Cluster Storage Setup

The following steps are an overview of what we do after configuring all of the hardware and the first three LUNs on our centralized storage.

  1. 1.51GB for Quorum
  2. 106.52GB for Hyper-V common settings: (RAM per node * number of nodes) + 10GB (see the worked example below this list).
    • Add more space for snapshots if they will be used.
  3. 145.11GB for first VHD (SBS 2011 OS partition for example)
    • NOTE: We use the .xxGB fraction to denote _which_ LUN we are working with in the Hyper-V cluster’s Disk Management and subsequently in Cluster Shared Volumes management. With eight identical 80GB LUNs spread across two storage devices it would otherwise be tough to figure out which was which.
    • So, for an Intel Modular Server with built-in SAN plus a Promise VTrak RAID Subsystem we would do the following:
      • 1.51GB for Quorum.
      • 106.52GB for Common Settings.
        • We reserve xx.5xGB for system storage needs.
      • 145.11GB for the first LUN/VHD located on the IMS SAN.
        • xx.1xGB-xx.4xGB for each additional LUN/VHD on the IMS SAN.
      • 175.61GB for the first LUN/VHD located on the Promise VTrak.
        • xx.6xGB+ for each additional LUN/VHD on the VTrak.
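
As a worked example (the RAM figure is hypothetical): a two-node cluster with 48GB of RAM per node needs (48GB x 2) + 10GB = 106GB for the common settings LUN; tacking on the .5x system storage fraction gives the 106.52GB shown above, which is then instantly recognizable in Disk Management.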

When we open the node’s Disk Management we see each of the partitions with their size indicated correctly.

Using this storage size configuration process makes managing SAN/DAS based storage so much easier in the long run.

Intel NIC Teaming

First, see our earlier blog post on teaming using Intel's prosetcl.exe (also relevant for the GUI).

Most of our Hyper-V deployments are Server Core based; standalone Hyper-V hosts for smaller deployments are the exception to the rule.
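
To give a feel for the command-line side, here is a rough sketch of a two-port team with one tagged and one untagged VLAN. The adapter numbers, team name, and VLAN IDs are made up, and the exact argument order should be confirmed against the prosetcl.exe ReadMe.txt that ships with the PROSet driver:

  prosetcl.exe Adapter_Enumerate (list the adapters and note their index numbers)
  prosetcl.exe Team_Create 1,3 Team_0 VMLB (team ports on separate NICs using VMLB mode)
  prosetcl.exe Team_CreateVlan 1 VLAN20 20 (create the TAGGED VLAN first)
  prosetcl.exe Team_CreateVlan 1 Untagged 0 (VLAN ID 0 creates the UNTAGGED VLAN last)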

Host Setup Process

Our host setup process:

  1. Update firmware for _all_ components on the host first.
  2. Set up a bootable USB flash drive with the OS of choice.
    1. http://blog.mpecsinc.ca/2010/09/create-bootable-usb-flash-drive-larger.html
    2. Copy and paste the RAID driver and current Intel PROSet NIC driver (16.5 as of this writing).
    3. Copy and paste the tools listed in the blog post linked at the end of this post.
  3. Boot into the RAID controller’s BIOS and set up the RAID array.
  4. Reboot into the BIOS and verify boot order with the RAID array on top.
  5. Reboot into WinPE and load the RAID driver _prior_ to setting up a partition for the OS.
  6. Once the OS is installed, install the necessary drivers:
    1. Install: pnputil -i -a driver.inf
    2. Delete: sc query type= driver (to find the driver’s service name), then sc delete <service_name>
    • NIC
    • chipset
    • SAS
    • System management tools
  7. Team the NICs.
    • Note that most server manufacturers have NIC MAC addresses listed in the BIOS.
    • We have at least two independent Intel NICs in the server with a minimum of 4 ports.
    • Teams are created between ports on separate NICs to create redundancy.
    • prosetcl.exe commands are listed in the ReadMe.txt
  8. Create VLANs
    • NOTE: Prosetcl.exe: In Server Core a TAGGED VLAN must be created before the UNTAGGED VLAN at the command line. This may or may not be the case in the GUI.
  9. Set up the various network IP structures via SConfig console.
  10. Make sure to disable "Register this connection in DNS" on _any_ NIC not on the management network.
    • CMD: netsh interface ipv4 set dnsservers name="Local Area Connection #" source=dhcp register=none
    • This eliminates any IPs being associated with the Hyper-V host or cluster nodes that are not on the management subnet.
  11. Move the page file to a dedicated partition next to the OS partition but before the data/VHD partitions.
    1. wmic.exe computersystem where name="ComputerName" set AutomaticManagedPagefile=False
    2. wmic.exe pagefileset create name="S:\pagefile.sys"
    3. wmic.exe pagefileset where name="S:\\pagefile.sys" set InitialSize=42950,MaximumSize=87550 (values are in MB)
    4. wmic.exe pagefileset where name="C:\\pagefile.sys" delete
    5. Reboot
  12. Make sure Network Discovery is enabled on the host.
    • Create and share a folder on Server Core to bring this about (see the sketch after this list).
  13. Install MPIO if needed.
    1. dism /online /enable-feature /featurename:MultipathIo
      • Command _is_ case sensitive.
    2. mpclaim -n -i -a (claim all MPIO disks)
    3. mpclaim -L -m 4 (sets the MPIO load balance policy; 4 = Least Queue Depth - your choice will depend on the type of I/O needed)
    4. mpclaim -s -d (verify MPIO mode for all disks)
    5. mpclaim -v C:\MPIOReport.txt (creates a full disk report in a text file)
  14. MPIOCPL.exe opens the MPIO Control Panel for both FULL and CORE installs.
  15. Run through the SConfig steps in order.
    • Note that any subnet other than the management (domain) subnet will not answer a PING until after the Failover Clustering feature is enabled.
    • Since the Heartbeat subnet and the other non-domain subnets fall under the Public profile in Windows Firewall with Advanced Security, no PING will get through until that feature is enabled.
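
For step 12 above, a minimal sketch of creating and sharing a folder on Server Core (the folder path and share name are hypothetical; grant whatever permissions suit your environment):

  mkdir C:\Shares\NodeAdmin
  net share NodeAdmin=C:\Shares\NodeAdmin /GRANT:Administrators,FULL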

Cluster Storage Considerations

If standing up a cluster make sure to have the Quorum LUN (will not be CSV) and common settings LUN (will be CSV) set up on the SAN/DAS. We also set up one LUN for our first clustered VM partition.

  1. On NODE 1:
    1. Open Disk Management.
    2. Refresh Disks.
    3. Note Disk # and initialize the LUNs.
    4. Format NTFS.
      1. We name our partitions:
        1. Quorum
        2. Common_Files
        3. SBS_OS
        4. SBS_Data
        5. etc . . .
    5. Set all LUNs to Offline.
  2. On NODE 2+:
    1. Open Disk Management.
    2. Refresh Disks _after_ cancelling the request to Initialize.
    3. Verify Disk # and partition names.
      1. If the Disk # does not match up with NODE 1 then reboot.
    4. Set all LUNs to Offline.
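
If you prefer the node's command line to the Disk Management MMC, a rough diskpart equivalent of the NODE 1 pass for a single LUN looks like this (the disk number and label are placeholders):

  diskpart
  list disk (note the Disk # that matches the LUN size)
  select disk 2
  online disk
  attributes disk clear readonly
  convert mbr
  create partition primary
  format fs=ntfs label="Quorum" quick
  offline disk
  exit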

We use fixed VHDs by default on dedicated LUNs for each VHD required. We use VHDTool to create the fixed VHDs on the LUNs. We _always_ initialize the LUNs with a full zero set.
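
For reference, a sketch of the VhdTool.exe step (the path is hypothetical and the size argument is in bytes, so 150GB works out to 150 x 1024 x 1024 x 1024 = 161061273600):

  VhdTool.exe /create "V:\SBS_OS.vhd" 161061273600

VhdTool's /create only writes the VHD footer rather than zeroing the whole file, which is why the full zero set on the underlying LUN matters.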

Once we have our nodes, networking, and storage set up we run the Cluster Validation Wizard. If it completes successfully, we stand the cluster up using the link found in the wizard.

From there enable Cluster Shared Volumes and add the required storage.

Depending on the configuration, all of the required LUNs could now be configured on the shared SAN/DAS. Use the above method to initialize, format, name, and then set them to Offline on NODE 1. Then follow the steps required on each additional node.

Some helpful tools:

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


2 comments:

  1. Hi Philip

    Thanks for this info. I am currently using/installing/discovering Server Core and it is nice to see some actual in-the-wild setup description; there seems to be a lot of "this is how it should work" but few actual real-world discussions.

    Off topic, how many techs do you have in your shop and what is the admin staff to tech ratio?

    Thanks for sharing

  2. A,

    You are welcome. :)

    Monique and I are principals with a pool of techs to call upon when things get really busy.

    Philip


NOTE: All comments are moderated.