Thursday, 7 January 2010

Intel Modular Server – Hyper-V Cluster NIC Setup

We need at least three NICs to get our cluster up and running. As we understand things, the cluster can be configured with a two-NIC setup, but that is not optimal. Our planned roles are below, with a quick sanity-check sketch after the list.

  1. NIC 1: Internal IP address for management.
  2. NIC 2: Cluster communication.
  3. NIC 3+: Dedicated to VMs.
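
The sketch below is Python with hypothetical subnets and host addresses (not our actual addressing); the point is only that the management and cluster roles sit on separate subnets and each node gets one address in each.

    # Minimal sketch of the per-node NIC plan above.
    # All subnets and addresses here are hypothetical examples.
    import ipaddress

    MGMT_NET    = ipaddress.ip_network("192.168.1.0/24")  # NIC 1: management
    CLUSTER_NET = ipaddress.ip_network("10.10.99.0/24")   # NIC 2: cluster communication
    # NIC 3+ carry VM traffic only and get no host IP in the parent partition.

    nodes = {
        "NODE1": {"mgmt": "192.168.1.11", "cluster": "10.10.99.11"},
        "NODE2": {"mgmt": "192.168.1.12", "cluster": "10.10.99.12"},
        "NODE3": {"mgmt": "192.168.1.13", "cluster": "10.10.99.13"},
    }

    # The cluster network must be its own subnet, and every planned
    # address must fall inside the right network.
    assert not MGMT_NET.overlaps(CLUSTER_NET)
    for name, nics in nodes.items():
        assert ipaddress.ip_address(nics["mgmt"]) in MGMT_NET, name
        assert ipaddress.ip_address(nics["cluster"]) in CLUSTER_NET, name
    print("NIC plan is internally consistent")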

Here are some screenshots of our configuration.

Gigabit Ethernet Switch Module 1:

[screenshot]

Note that the VLAN designated 99 will be used for cluster communication and will remain internal only (TBD). We are still not 100% clear on whether we need a physical switch for this part yet.

Ethernet ports 1 through 10, which are the external ports, are mapped to VLAN 1, and the management and VM NICs 2 through 4 are mapped to those ports as well.
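
To make the intent of that mapping explicit, here is a rough sketch in Python. The port names are illustrative, not a dump of the actual switch configuration; the check is simply that the cluster VLAN (99) contains no external ports.

    # Rough sketch of the intended VLAN layout on switch module 1.
    # Port names are illustrative, not the actual switch configuration.
    external_ports = {f"ext{p}" for p in range(1, 11)}  # Ethernet ports 1-10

    vlan_members = {
        1: external_ports | {                        # management + VM traffic
            "node1-nic1", "node2-nic1", "node3-nic1",
            "node1-nic3", "node2-nic3", "node3-nic3",
        },
        99: {"node1-nic2", "node2-nic2", "node3-nic2"},  # cluster communication
    }

    # VLAN 99 is supposed to stay inside the chassis, so it must not
    # include any external/uplink port.
    assert not (vlan_members[99] & external_ports), "cluster VLAN leaks to an external port"
    print("VLAN layout matches the plan")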

Gigabit Ethernet Switch Module 2:

[screenshot]

All available internal ports for NICs 3 and 4 are mapped to the external ports.

Now, the Hyper-V Server 2008 R2 OS has not picked up the NIC mapping quite as we expected:

[screenshot]

Note that Node 3 has not had its cluster subnet address assigned yet, as the IP is still the APIPA default of 169.254.96.93.
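
For reference, 169.254.x.x is the APIPA/link-local range Windows falls back to when an adapter has no static or DHCP-assigned address, so any NIC still showing one of those addresses has not received its cluster-subnet IP. A quick Python check:

    # An address in 169.254.0.0/16 is an APIPA/link-local fallback,
    # i.e. the NIC has not been given its static cluster-subnet IP yet.
    import ipaddress

    def is_apipa(addr: str) -> bool:
        return ipaddress.ip_address(addr).is_link_local

    print(is_apipa("169.254.96.93"))  # True  -> still unconfigured (Node 3 above)
    print(is_apipa("10.10.99.13"))    # False -> a proper static address (hypothetical)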

Once we have all of the IPs in place, we will domain-join the three nodes.

From there, we have the Failover Cluster Management feature (TechNet article on Server 2008 clustering: A+) installed on the separate 1U server that we will use to run the Validate a Configuration Wizard.

That should show us where our configuration is still missing the proper pieces.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


4 comments:

Cparks said...

We have an IMS with a Hyper-V failover cluster configured. We just added a second SWM, with mezzanine cards in the two compute modules. Wondering if you have any info on configuring the modules to use the SWM for redundancy/failover?

Philip Elder Cluster MVP said...

CParks,

We have managed to figure things out to some degree but ran into some routing problems with tagged VLANs and teaming.

It turns out that this IMS Gigabit Switch FAQ page explains why.

We had set up our team with VM Load Balancing across the two switches. Instead, we should have set up the teams with Switch Fault Tolerance.

Intel PROSet Teaming explanation.
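
For what it is worth, here is a toy sketch (Python, nothing to do with the actual PROSet driver) of the behaviour a switch fault tolerant team gives us: one member is active and the standby only takes over when the active member's link or switch fails, instead of both members carrying traffic at once as in a load-balanced team.

    # Toy illustration of a switch fault tolerant team: one active member,
    # one standby, failover only when the active member's link goes down.
    class SftTeam:
        def __init__(self, primary, standby):
            self.members = [primary, standby]   # e.g. one port on each switch module
            self.link_up = {primary: True, standby: True}

        def active(self):
            # The first member with a live link carries all the traffic.
            for nic in self.members:
                if self.link_up[nic]:
                    return nic
            raise RuntimeError("no live team member")

        def link_failed(self, nic):
            self.link_up[nic] = False

    team = SftTeam("NIC 3 (switch module 1)", "NIC 4 (switch module 2)")
    print(team.active())                       # NIC 3 (switch module 1)
    team.link_failed("NIC 3 (switch module 1)")
    print(team.active())                       # NIC 4 (switch module 2) takes over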

When we have a little more time we will be flattening our current lab cluster and rebuilding it. Once we have done that we will run through the teaming process again to verify that the switch fault tolerant teams work as expected across the network here.

Philip

CParks said...

Thanks, I have downloaded the driver and am in the process of configuring the NICs in Teams with Switch Fault Tolerance.

I plan to test (pull out a switch) next week.

I think this is the last piece before putting the server into production. Thanks for all your help.

Philip Elder Cluster MVP said...

CParks,

Make sure you verify the binding order or Live Migration will fail.

Also, you don't need to pull the switch; you can use the MSC console to disable the switch ports for the nodes to simulate a failure.

Philip