The Promise VTrak E610sD unit we have been using for our IMS and Intel Server System-based Hyper-V failover clustering had eight (8) 300 GB 15K Seagate SAS drives configured in one Disk Array.
We added two more 300 GB 15K Seagate SAS drives to the VTrak unit to test how a live RAID Migration would impact the unit's overall performance.
Event 42: DA_0 – June 24, 2011 1335Hours – RAID migration has started.
We shut down the VMs on the cluster initially to bring all disk activity as close to zero MB/second as possible.
When we went through the RAID Migration steps we made a point of preserving the VMs' LUN's RAID 10 configuration, as the migration wizard wanted to change the configuration to RAID 1E.
Once we clicked Next, then Submit, and confirmed that we wanted the RAID Array Migration to run, we saw the following:
That 0% sat there _for a long time_.
Meanwhile, with the VMs shut down we saw:
Based on that 28 MB/second figure, we expected the RAID Array Migration process to take quite a while.
Well, it most certainly did:
Event 46: DA_0 – June 25, 2011 1159Hours – RAID migration has completed.
The process took around 22.5 hours!
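As a back-of-the-envelope sanity check (a rough sketch, assuming the migration had to rewrite roughly the full 8 x 300 GB of raw capacity at the observed ~28 MB/second background rate), the estimate lands in the same ballpark as the ~22.5 hours we actually saw:

```python
# Rough estimate of RAID migration duration.
# Assumptions (ours, not from the VTrak documentation): the migration
# rewrites about the original 8 x 300 GB of raw capacity, sustained at
# the ~28 MB/second background rate observed in the performance graphs.

DRIVES = 8
DRIVE_GB = 300        # raw capacity per drive (decimal GB)
RATE_MB_S = 28        # observed background migration throughput

total_mb = DRIVES * DRIVE_GB * 1000   # decimal GB -> MB
seconds = total_mb / RATE_MB_S
hours = seconds / 3600

print(f"Estimated migration time: {hours:.1f} hours")
# -> Estimated migration time: 23.8 hours
```

That ~23.8-hour estimate is close enough to the observed 22.5 hours to suggest the 28 MB/second rate held fairly steady for the whole run.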
Now, we did fire up the four VMs running on the Hyper-V cluster not long after the above performance graph snip was taken. So, we had SBS, SQL with the LoB, and two Windows 7 desktop VMs running in production mode while the migration was happening.
We ran some performance tests in the LoB application, since we had already been running baseline performance tests for this cluster setup, and saw little if any impact on its performance.
The LoB application is SQL, IIS, and .NET intensive.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book