Monday, 26 September 2011

Serving VDI with RemoteFX Enabled

Trying to find any current information on RemoteFX and what GPUs we can use to run the feature in our Windows 7 VMs has been quite the challenge.

Our first stop in the process of finding information was this page:

RemoteFX CPU Compatibility

The first hardware consideration has to do with the server hardware and its capabilities:

  1. SLAT-enabled processor   The processor in the RemoteFX server must support Second-Level Address Translation (SLAT). In virtualization scenarios, hardware-based SLAT support improves performance. On Intel processors, this is called Extended Page Tables (EPT), and on AMD processors, it is called Nested Page Tables (NPT).
  2. GPU   At least one graphics processing unit (GPU) is required on the RemoteFX server. The GPU driver must support DirectX 9.0c and DirectX 10.0. If more than one GPU is installed in the RemoteFX server, the GPUs must be identical. The GPU must have sufficient dedicated video memory that is separate from system memory.
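A quick way to double-check the SLAT requirement on a candidate box is the Sysinternals Coreinfo utility. Here is a minimal sketch, assuming coreinfo.exe is on the PATH (and its EULA already accepted) and that Python is handy for scripting the check:

```python
# Rough SLAT (EPT/NPT) check using the Sysinternals Coreinfo utility.
# Assumes coreinfo.exe is on the PATH and its EULA has been accepted.
import subprocess

def has_slat():
    # "coreinfo -v" dumps virtualization-related CPU features; supported
    # features are flagged with an asterisk, and the SLAT line reads
    # "Supports Intel extended page tables (SLAT)" or the AMD equivalent.
    output = subprocess.check_output(["coreinfo", "-v"]).decode("utf-8", "ignore")
    return any("SLAT" in line and "*" in line for line in output.splitlines())

if __name__ == "__main__":
    print("SLAT supported:", has_slat())
```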

Now, there are two links to hardware-related articles just below that list, but they have not been updated in quite a while. So, we are left searching about for any possible leads.

One of the configurations we are looking at is delivering the virtual desktops via a Hyper-V Failover Cluster. In order to do that, we must take note of the caveat stated in the TechNet article:

Important:

To use Live Migration, the source and destination RemoteFX servers must have the same GPU installed.

We have also seen it noted that when multiple GPUs are installed in the same server they must be identical cards.

AMD and NVIDIA

Since the two major video card manufacturers have both mentioned RemoteFX, we did some searching on their sites for anything related to it:

  1. AMD (ATI) RemoteFX Search Results
  2. nVidia RemoteFX Search Results

As we can see, a 1U form factor is definitely out for both manufacturers’ products. Both the AMD 4GB part and the NVIDIA Tesla 6GB part take up two PCI-E slots.

Now that we have an idea of what is available in GPUs we need to look at our server configurations. For now, there are not a lot of choices in a 2U form factor that have more than one or possibly two PCI-E 2.0 16x slots in them.

When we go above 2U we have options, but from what we can see at this point we are looking at four-CPU server systems to gain access to more than two PCI-E 2.0 16x slots that are truly wired for 16 lanes.

Once we get into that level of server configuration the $8K sticker price for the Tesla M2070Q starts to look a little more palatable.

RemoteFX GPU Performance

We now have a better grasp on where to find the necessary GPUs, but what about performance?

From the TechNet article linked above we find this grid:

[Grid from the TechNet article: RemoteFX GPU memory required per session by display resolution and number of monitors]

The first thing we see is that the display resolutions are going to be a bit difficult to work with, since most folks have 22” to 24” wide LCD monitors on their desktop. The 24” may have enough vertical resolution to handle 1280x1024 but that is about it.

Note the required amount of GPU memory for each resolution and number of monitors. In most cases users will run a single monitor, with a few exceptions.

For two monitors running 1280x1024 at 175 MB per session, a 6 GB Tesla would host about 30 sessions per card.

  • $8K/30 Sessions = $267/Session

For one monitor running 1920x1200 we are looking at about 25 sessions per 6 GB Tesla.

  • $8K/25 Sessions = $320/Session
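
To make the per-session arithmetic explicit, here is a quick sketch using the rough $8K card price and the session estimates above:

```python
# Back-of-the-envelope cost per session for a 6 GB Tesla M2070Q.
# The card price and sessions-per-card figures are the rough estimates above.
CARD_PRICE = 8000.0  # approximate sticker price, USD

scenarios = [
    ("Two monitors @ 1280x1024 (~175 MB/session)", 30),
    ("One monitor @ 1920x1200", 25),
]

for label, sessions_per_card in scenarios:
    print("%s: $%.0f/session" % (label, CARD_PRICE / sessions_per_card))

# Two monitors @ 1280x1024 (~175 MB/session): $267/session
# One monitor @ 1920x1200: $320/session
```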

Given a 36-month solution life, that cost is actually quite reasonable.

RemoteFX and Clusters

Now the kicker: one of the driving reasons behind our looking into RemoteFX is a client that is looking to do VDI for 130 to 150 desktops _on a Hyper-V Failover Cluster_.

This means that if we are looking to build a robust two-node Hyper-V Failover Cluster, each node must be configured to run the _entire_ VDI environment.
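
A quick sizing sketch, assuming the roughly 25 sessions per 6 GB card estimated above, shows what that means for the GPU count on each node:

```python
# If one node must carry the whole 130-150 desktop load during a failover,
# estimate how many 6 GB cards each node needs (figures assumed from above).
import math

desktops = 150            # upper end of the client's range
sessions_per_card = 25    # conservative single-monitor 1920x1200 estimate

print("GPUs required per node:", math.ceil(desktops / sessions_per_card))  # 6
```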

So, we will be looking at some alternatives, such as running three or even four nodes connected to the same Promise VTrak E610sD RAID Subsystem, which the LSI SAS Switches will allow us to do.

Direct Attached Storage via SAS 3 Gbit or 6 Gbit quad-port connections outperforms Gigabit-based iSCSI (even with multiple Gigabit ports via MPIO) in so many ways, plus MPIO configuration for the SAS connections is a lot easier to work with.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.

