Wednesday 30 November 2011

AMD (ATI) Catalyst 11.8 Drivers Available

One of our machines popped up a request to update the AMD (ATI) drivers on the machine:

image

Now, once we clicked through on the Express setup for the driver, something along the way died. Instead of clicking on any of the buttons, just click on the red X to allow the installation to move forward.

image

The install will complete successfully as indicated above.

We opened the Catalyst Control Center to verify that everything was good to go:

image

image

Everything checks out okay.

Hopefully this version is as stable as its predecessor. :)

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


Hyper-V Failover Cluster with Windows 8 – Well, Maybe Not Quite Yet

We had a few scheduling changes today so we utilized the little bit of extra time to see if we could stand a Hyper-V Failover Cluster up quickly using Windows 8.

image

First we will set up our NIC teaming.

Then we need to figure out how to set up our MPIO storage, which is not really intuitive at this point as “MPIO” does not come up in any search results.

Ah … there you are:

image

Skip the Role Install step and:

image

And:

image

Apparently a reboot was not required:

image

Metro then has an MPIO icon that brings up the MPIO Control Panel.
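We enabled the feature through the GUI, but the same thing can likely be done from an elevated PowerShell prompt on these builds; a sketch, assuming the Multipath-IO feature name from Windows Server 2008 R2 carried over:

```shell
# Check whether the Multipath I/O feature name carried over to these bits.
Get-WindowsFeature *Multipath*

# Install the feature, then launch the MPIO control panel.
Add-WindowsFeature Multipath-IO
mpiocpl.exe
```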

image

image

Since our connection is SAS based, we ticked the check box, clicked the Add button, and eventually saw:

image

After a reboot we saw the following on both nodes:

image

Okay, we have our storage. We then needed to set it up.

image

Once configured we set all of the shared storage disks to Offline.
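Taking the shared disks offline can be done in Disk Management or scripted with diskpart; a sketch, where the disk numbers are examples for this lab:

```shell
# offline-disks.txt -- diskpart script; disk numbers are examples,
# identify yours with "list disk" first.
# Run from an elevated prompt with: diskpart /s offline-disks.txt
select disk 2
offline disk
select disk 3
offline disk
```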

Next up, setting up the teaming services in Windows 8 for the four NICs:

image

We make sure to click on the server name and then start the teaming wizard under the TASKS button near the top right of the TEAMS section. Note that we disconnected two cables to quickly find out which pair is which on the server.

image

image

image
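The wizard steps above can also be sketched with the NIC Teaming (LBFO) PowerShell cmdlets in these builds; the team and adapter names below are examples from our lab:

```shell
# Create a switch-independent team from two of the four NICs.
# "VM-Team" and the adapter names are examples; list yours with Get-NetAdapter.
New-NetLbfoTeam -Name "VM-Team" -TeamMembers "Ethernet 3","Ethernet 4" `
    -TeamingMode SwitchIndependent -Confirm:$false

# Verify the team and its member NICs.
Get-NetLbfoTeam
Get-NetLbfoTeamMember -Team "VM-Team"
```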

We are now good to go to install the Hyper-V Role!

image
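The role install can also be kicked off from PowerShell; a sketch, assuming the Server 2012-style cmdlet and feature names on these bits:

```shell
# Install the Hyper-V role on each node; a reboot is required.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```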

Once our servers had rebooted we moved on to the Failover Clustering Feature:

image

Oops:

image

Click the Add Roles and Features link shown to the right below and:

image

Okay, so there may be some weirdness going on there. These are early bits so no worries.

When binding the vSwitch to the appropriate team make sure to take note of the following:

image

Because of the following:

image

The driver name is referenced when binding the vSwitch.
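The binding can also be sketched in PowerShell, where a team surfaces as the Microsoft Network Adapter Multiplexor driver mentioned above; the switch and adapter names here are examples:

```shell
# Bind an external vSwitch to the team's multiplexor adapter.
# "External-VM" and "VM-Team" are example names; confirm with Get-NetAdapter.
New-VMSwitch -Name "External-VM" -NetAdapterName "VM-Team" -AllowManagementOS $true
```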

And there we stop.

image

No matter how we try to go about getting the Failover Clustering feature installed we are stopped in our tracks. :(

So, somewhere along the way we must have done some of the steps out of order to the point that the OS has taken exception to the way things are. Or, we hit a bug:

image

The above is a Snip from the DISM.log under %windir%\Logs.

So, for now we have a note in to Microsoft to see if there is another way to get things working or if we need to start fresh and run through the steps a little differently.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book


Monday 28 November 2011

Is Intuit QuickBooks 2012 Phoning Home?

We are in the process of setting up the Intuit QuickBooks 2012 database manager on a relatively new SBS 2011 setup.

This is our first time seeing this software so we were a little surprised by the following during the QB 2012 setup routine:

image

The installer looks like the following when ready to install:

image

image

image

And finally:

image

The database manager install only took a few minutes for this particular setup.

The next step in this process was to update the firewall rules on the SBS 2011 (running virtualized) to allow the database manager and its components to function. Once the firewall rules were updated we recycled both QB services.
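As a sketch, the rules can be added from an elevated prompt with netsh; the port, rule names, and program path below are examples for QB 2012 and should be confirmed against Intuit's documentation:

```shell
# Allow the QuickBooks Database Server Manager through Windows Firewall.
# Port 55333 and the install path are examples; verify for your QB version.
netsh advfirewall firewall add rule name="QB2012-DBManager" `
    dir=in action=allow protocol=TCP localport=55333
netsh advfirewall firewall add rule name="QB2012-QBDBMgrN" `
    dir=in action=allow `
    program="C:\Program Files (x86)\Intuit\QuickBooks 2012\QBDBMgrN.exe"

# Recycle the QuickBooks database service after the rule changes.
# "QuickBooksDB22" is the assumed service name for the 2012 release.
Restart-Service -Name "QuickBooksDB22" -Force
```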

image

image

For the moment we will be leaving the “…DB20” service alone.

To test the setup we went to install QB 2012 on the RDS server on this particular network. Again we received the prompt:

image

We chose the option to install only the QB client:

image

image

image

Now, we go and install the Intuit QB Connection Diagnostic Tool:

image

Once completed we run the tool. When the tool begins its start-up process we see the following warning several times:

image

We then run the Test Connectivity tool:

image

When we were done we clicked on the Close button and received several more prompts for the workplace URL.

We have yet to dig into Intuit’s Privacy Policy and documentation to figure out just what information is being transmitted back home. Neither the QB install routine nor the QB Connection Diagnostic Tool has any links anywhere to relevant information either.

image

The Help link was not very helpful either.

We will have a look around to see if we can find any further information.

For now, the following Snip, taken earlier this year, may give us a clue as to what is going on:

image

With the new URL it looks as though Intuit has brought the analytics in-house.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book


Failover Cluster Manager: Deleting a Cluster

Now that we have gone through our preliminary testing with the Huawei Symantec Oceanspace S2600 storage system, we are looking to reset the cluster nodes so that we can hook them up to another storage device for more testing or for client backup recovery purposes using a Promise VTrak E610sD.

To remove the cluster we start by deleting any highly available VMs from the cluster.

image

Once all VMs are removed we then evict Node-2 from the cluster.

The final step is to Destroy the cluster:

image

Are you sure?

image

After clicking the Destroy button we see:

image

Failover Cluster Manager then removes all reference to the previously existing cluster.
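The same teardown can be sketched with the Failover Clusters PowerShell module; the cluster and node names below are examples from this lab, and the -CleanupAD switch is meant to take care of the Active Directory object as well:

```shell
Import-Module FailoverClusters

# Remove the highly available VM roles first.
Get-ClusterGroup -Cluster "Cluster-1" |
    Where-Object { $_.GroupType -eq "VirtualMachine" } |
    Remove-ClusterGroup -RemoveResources -Force

# Evict the second node, then destroy the cluster and clean up AD.
Remove-ClusterNode -Cluster "Cluster-1" -Name "Node-2" -Force
Remove-Cluster -Name "Cluster-1" -Force -CleanupAD
```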

When we checked Active Directory, the object for the cluster had been disabled:

image

We then deleted the AD object and moved on to clean-up on the nodes themselves which included removing the MPIO settings for the Oceanspace S2600.

We may leave them configured as they are for the next job or we may wipe them clean and start fresh. That all depends on what cluster related project is coming down the pipe!

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book


Huawei Symantec Oceanspace S2600 Configuration

We now have our RAID Group set up with the configuration set at RAID 10.

image

We then went ahead and configured our LUNs:

image

We have our two Intel Server System SR1695GPRX2AC units plugged in and turned on as can be seen under the Host section (4 Initiators).

image

At the bottom of the left hand column under Mappings in the ISM we find our four SAS initiators for the two nodes:

image

It took a bit to figure out just how to get the SAS IDs mapped to the LUNs. First we shut down the second node so that we would know which IDs belonged to which server:

image

We mapped them to the appropriate node:

image

Then we went to map the actual LUNs to the Hosts:

image

So, we moved to the Host Groups node in the left hand column of the ISM and went through the Add LUN Mapping wizard. Despite the way things looked, when we selected all of the LUNs they were given a sequential LUN ID.

image

That Host LUN ID drop down list near the bottom is a bit misleading.

We then had our LUNs mapped to the Default Host Group:

image

With that the shared storage showed up in both nodes’ Disk Management!

Now on to setting up MPIO in the Hyper-V node, running the Cluster Validation Wizard, and finally standing up a Hyper-V Failover Cluster for some further testing.

Some Initial Observations

Disk and LUN Configuration

Our primary DAS product has been the Promise VTrak E610sD and E310sD RAID subsystems. So, working with the Oceanspace S2600 has been a bit of a challenge since the management console and set up steps are quite different.

For one, the Oceanspace S2600 does not allow RAID configuration at the LUN level. The RAID Group itself, however many disks it is configured with, carries the RAID level.

This contrasts with the Promise VTrak’s ability to create a Disk Array and then set up whatever RAID level for each LUN we may set up in that Disk Array.

The cost is some storage space, since RAID 1 can no longer be specified for certain LUNs that host low I/O needs such as RDS or SQL host OSs.

However, that would be balanced out with the fact that running RAID 10 underneath everything means that every LUN on the Oceanspace S2600 would receive the benefit of higher throughput.

Management Console

The Promise VTrak’s Web based management console is quite feature rich, and complex, relative to the Oceanspace S2600’s console.

The plus side for the Oceanspace S2600 console is that it was quite simple to pick up on how to set up storage for our needs, with very little reading required (primarily to get to know the nomenclature).

In the end, preference for one console over another is subjective. In both cases, the respective vendor’s Management Console gets the job done and that is what is most important to us.

MPIO Setup

It took a couple of reboots after removing any MPIO device references, then running:

  • mpclaim -n -i -a (reboot)
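For reference, the switches in that command break down as follows (mpclaim is the in-box MPIO command-line utility):

```shell
# mpclaim switch breakdown:
#   -n  suppress the automatic reboot (we rebooted manually)
#   -i  install (claim) MPIO support for the specified devices
#   -a  target all applicable devices
mpclaim -n -i -a

# Afterwards, show the MPIO-claimed disks and their paths.
mpclaim -s -d
```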

Finally, in MPIOCPL.exe we saw the following on Node-01:

image

We then saw:

image

After a minute or two we finally had:

image

One reboot later:

image

But, our second node was still blank:

image

We manually added the Device Hardware ID as per the other node:

image

After a reboot and another mpclaim -n -i -a we finally had doubles of everything! One reboot later we had our shared storage configured and ready for the Failover Cluster Manager’s Cluster Validation Wizard!

Long story short, we had a lot of problems getting one of our nodes (they are identical all the way through) to recognize both SAS paths to the Oceanspace S2600.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book
