
Thursday, 31 May 2012

LSI SAS6160 Switch Compatibility List

We are in the process of connecting a number of Intel Server System SR1695GPRX2AC units, along with what will soon be two Intel Server System R2208GZ4GC units, to one Promise VTrak E610sD RAID subsystem.

Out of the box we are dealing with 3Gbit/second SAS connections on the VTrak, so we are not able to use the more advanced features the SAS switch offers.

The LSI SAS6160 Switch compatibility list can be found here:

Specifically, we are looking for a replacement for the Promise VTrak that will give us access to 6Gbit/second SAS and the advanced features offered by the SAS switch.

image

Now, note that the Promise VTrak E610sD requires firmware 3.36.00. Our current unit is at 3.34.00. So, we are on our way to updating the firmware on the Promise before we can draw any conclusions about the setup.

image

As far as a replacement for the Promise goes, the first entry in the above list is actually a NetApp appliance. We will be looking into their products. We have already been in conversations with IBM about their DS3524 dual controller SAS unit, so we shall see where that goes.

For now, we are on the road to bringing a very flexible, high performance, and highly redundant hardware solution online to deliver Hyper-V Failover Clusters as well as a Private Cloud solution.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


Tuesday, 8 May 2012

LSI SAS6160 Switch First Look

We are looking to scale out our two node Hyper-V failover cluster configuration beyond the two redundant SAS connections per controller on the Promise VTrak E610sD RAID Subsystem.

So, we brought in a couple of LSI 6Gbit/second SAS switches.

image

We see the possibility of going beyond the maximum of six nodes in the Intel Modular Server by using Intel R1208GZ Server Systems (previous blog post) with dual Intel RS25GB008 SAS Host Bus Adapters.

It will be interesting to see what kind of throughput, bandwidth, and IOPS we get once we connect all four external SAS connectors on the two RS25GB008 HBAs, for a total of four quad port SAS connections running at 6Gbit/second per port.
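As a rough upper bound, and assuming each external connector is a full x4 wide port, four lanes at 6Gbit/second work out to roughly 2,400MB/second per port after 8b/10b encoding, so the four ports on a node would top out somewhere near 9,600MB/second of raw SAS bandwidth. The RAID subsystems and the drives behind them will be the practical limit long before the links are.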

We have the switch out of the box, plugged in, and powered up. We changed a Windows 7 VM’s IP address to 192.168.1.101 so that we would be able to connect to the switch at its default static IP of 192.168.1.100.
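If the VM has just the one adapter, that change can be made from an elevated command prompt with something like the following; the connection name “Local Area Connection” is an assumption and may differ on a given VM:

  • netsh interface ipv4 set address name="Local Area Connection" source=static address=192.168.1.101 mask=255.255.255.0
    • Puts the VM on the same subnet as the switch’s default address.
  • netsh interface ipv4 set address name="Local Area Connection" source=dhcp
    • Returns the adapter to DHCP once we are done with the switch.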

Note that Java is required to connect to the management console.

image

The default username and password:

  • Username: admin
  • Password: admin
  • Default IP: 192.168.1.100

Once logged in we are greeted with the SAS Domain Manager GUI:

image

From there we went into the Operations tab so we could configure networking for management:

image

Now, the next step in the process is to get the firmware updated if need be.

Out of the box this particular switch shipped with firmware 200.10, which has a publishing date of August 1, 2011.

image

So, in this case it looks as though we need to run through the upgrade process as indicated in the above KB article twice to get to the most recent version. The KB is very specific about the process too!

Note that the dates on the downloads must be goofy, because the April 11th download is Phase 12 with a later version number than the firmware the switch came with.

Once we have run through the firmware updates we will plug the four Intel Server System SR1695GPRX2AC servers, each with dual 3Gbit/second SAS connections, into the two switches for a fully redundant configuration.

We will start with one Promise VTrak E610sD RAID Subsystem for destination storage. We will add a second E610sD once we have worked our way through the first few rounds of configuration and testing.

More to come . . .

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


Tuesday, 24 April 2012

Intel Server System SR1695GPRX Server Board R&R Process

The process of doing a Remove & Replace for warranty purposes on an Intel SR1695GPRX is well documented in the Service Guide . . . well almost.

image

We put together a new SR1695GPRX for a specific purpose and found that one of the memory banks was bad on the board.

We have the replacement board in hand from Intel today, and when we went to R&R the board we ran into a bit of a puzzle not covered in the Intel Service Guide:

image

On the back of the server board that came with the Intel Server System was the silver plate that the 1U heat sink would mount to. After a few trial presses the plate looked to be in there quite solidly.

So, we experimented to see just what was keeping that plate stuck to the board.

It turned out that there was a thin gasket-like material between the board and the plate. With a blunt object we used a bit of leverage at one edge of the board to push down on the heat sink mount peg from the top of the board.

It took a bit of effort but we were rewarded with a snapping sound and sure enough there was a thin layer of some sort of glue holding the two together.

Same pressure to the other plate peg and we were halfway there. We were good to go with a bit of gentle prying using our fingers to get the other two pegs to break free.

Please note that the amount of leverage/pressure to be used may depend on the amount of time the whole setup was in production. This particular one was pretty much right out of the box so we were rewarded after just a bit of effort.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


Friday, 20 April 2012

Windows 8 Server Beta Hyper-V Failover Cluster Is Live

This was one of the easiest setup processes we have run to date with the new Windows 8 Server OS.

image

Using all native NIC teaming, Microsoft’s built-in MPIO for the SAS based DAS storage (Promise VTrak), and some configuration tweaks on the two nodes, we had a 100% showing in the Cluster Validation Wizard.

With the validation having run successfully we were able to stand the cluster up with no issues at all.

image

Awesome job Server and Hyper-V Teams!

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


Tuesday, 8 November 2011

A Virtualized SBS 2011 and RDS Setup Completed With Some Pics

One of the projects we are just in the process of closing up consisted of the following:

  • Intel Server System SR1695GPRX2AC
    • Intel Xeon X3470 CPU, 32GB Kingston 1066MHz ECC, Intel RS2BL040 RAID + BBU, 300GB 15K SAS in RAID 10
    • Intel Remote Management Module 3 (RMM3LITE) for KVM/out-of-band access to host.
  • Cisco SA520-K9
  • Cisco 48 Port Small Business Series Gigabit Switch
  • APC SMT1500RM2U UPS
    • Provides clean power to the networking components.
  • APC SMT2200RM2U UPS + AP9630 Remote Management
    • Provides clean power to the Intel Server System
  • APC NetShelter VX 24U Enclosure (AR3104)

The software for this solution (Open Value Agreement with the 3 year spread payment option) that covers all users:

  • Small Business Server 2011 Standard
  • Small Business Server 2011 Premium Add-On
    • Windows Server 2008 R2 Standard 1+1 for the host OS.
  • Windows Remote Desktop Services CALs
  • Office Standard
  • Windows Desktop OS Software Assurance + MDOP

Here are some shots of the finished server deploy:

image

  • APC AR3104 from the front.

image

  • APC AR3104 from the rear.

The finished product:

image

We are using SBS 2011 native backup and Windows Server Backup on the RDS server to back up the VMs to Microsoft iSCSI Software Target based VHDs located on the hard drive in the drive dock.
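For reference, the moving parts on the RDS server side look something like the following from an elevated command prompt. This is a sketch only; the portal address, target IQN, and drive letters here are placeholders rather than the production values:

  • iscsicli QAddTargetPortal 192.168.10.25
    • Registers the iSCSI Software Target host with the server’s iSCSI initiator.
  • iscsicli ListTargets
    • Lists the target IQNs the portal is presenting.
  • iscsicli QLoginTarget iqn.1991-05.com.microsoft:backup-target
    • Logs in to the backup target so its VHD shows up as a local disk.
  • wbadmin start backup -backupTarget:B: -include:C: -allCritical -quiet
    • Runs a one-off Windows Server Backup of the system volume to the iSCSI presented disk; the production jobs are scheduled rather than run by hand.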

The RDS server hosts an LoB (Line of Business) application being delivered by RD RemoteApp along with Microsoft Office.

This particular client has two satellite offices that will be connecting to the LoB and Office via RemoteApp with a standard DSL Internet connection (3Mbit down and 1Mbit up).

Remote printing via RD RemoteApp is via HP LaserJet Professional P1606dn printers using the most recent HP driver (5.0.1 as of this writing).

We brought in our shop vacuum to clean out the entire area that the server equipment is sitting in. Dust settles over time, so we made sure to start with a clean slate, and we will pop in with the vacuum once every six months or so to keep that area as clean as possible.

A point to consider when it comes to deploying any IT solution into an SMB/SME business is that much of what we do is “virtual”, meaning that we set up some physical boxes with a bunch of software products and then manage them.

What that means is that “Presentation is Critical”!

Make sure that the solution is tidy, with cables tucked neatly away, all surfaces clean (Windex works great), and no papers or paperwork left lying around the workspace.

Another important point when it comes to setting up or visiting a client’s site for any work is to “Leave the scene cleaner than the way we found it”!

That means cleaning up the mess left by the phone line technician or alarm technician who was in that space before us. We may not realize it, but the client notices the state of their space and who makes the messes.

In this case our client, the business owner, is very happy with what they have seen and used of their new IT Solution.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


Friday, 12 August 2011

Hyper-V Failover Cluster 2 Node With 3TB RAW For Under $20K

As we have been going along we have been refining the two node Hyper-V Failover Cluster setup.

We are now at the point where the following base configuration would provide full Hyper-V failover capabilities for a very low cost:

  • Hyper-V Server 2008 R2 SP1 Host (Requires 2)
    • Intel Server System SR1695GPRX2AC
      • Intel Xeon Processor X3470, 32GB ECC RAM, Dual 3Gbit SAS Controllers (SFF-8470 connectors).
      • No local storage.
      • Pair of SFF-8470 to SFF-8088 SAS Cables
    • Promise VTrak E610sD Dual Controller RAID Storage System
      • 10x 300GB 15K Seagate SAS (we use RAID 10 = 1.5TB usable)
      • 2x 300GB 15K Seagate SAS global hot spares.
      • Nodes boot from 75GB masked LUN on VTrak to eliminate the need for local storage.
        • LUN Masking: Only the node will see its dedicated LUN as available.

With the above hardware configuration we are able to start our 2 node failover clusters at around $18K for the hardware setup.

For any client that currently has two or more servers and is conscious of the need for uptime, we are promoting this setup.

At this price, there is no reason why we can’t be deploying Hyper-V failover clusters into most clients that have two good sized servers up for refresh ($6K-$8K/Server) or multiple small ones.

A big plus when talking about failover is the portability of the VMs. We can restore _anywhere_. Because of this portability we can promote our iSCSI backup target setup, which can then eliminate the cost of any third party backup solutions.

Our goal is to go full on Windows OS native with as many tools as possible. And so far, we are being quite successful at making it happen.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


Monday, 27 June 2011

Promise VTrak – RAID Migration Time To Add Two 300GB 15K SAS Drives To Existing Disk Array

The Promise VTrak E610sD unit that we have been using for our IMS and Intel Server System based Hyper-V failover clustering had eight (8) 300GB 15K Seagate SAS drives configured in one Disk Array.

We added two more 300GB 15K Seagate SAS drives to the VTrak unit to test how a live RAID Migration would impact overall performance of the unit.

image

Event 42: DA_0 – June 24, 2011 1335Hours – RAID migration has started.

We had the VMs on the cluster shut down initially to bring all disk activity as close to zero MB/Second as possible.

When we went through the RAID Migration steps we made a point of preserving the VM LUN’s RAID 10 configuration, as the wizard wanted to change the configuration to RAID 1E.

image

Once we clicked Next, then Submit, and confirmed that we wanted the RAID Array Migration to run, we saw the following:

image

That 0% sat there _for a long time_.

Meanwhile, with the VMs shut down we saw:

image

Based on that 28MB/Second number we figured that the RAID Array Migration process was going to take a while.

Well, it most certainly did:

image

Event 46: DA_0 – June 25, 2011 1159Hours – RAID migration has completed.

The process took around 22.5 Hours!

Now, we did fire up the four VMs running on the Hyper-V cluster not long after the above performance graph snip was taken. So, we had SBS, SQL with the LoB, and two Windows 7 desktop VMs running in production mode while the migration was happening.

We did some performance testing in the LoB, as we had already been running some baseline performance tests for the cluster setup, and saw very little if any impact on its performance.

The LoB is SQL, IIS, and .NET intensive.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


Thursday, 16 June 2011

2 Node SR1695GPRX2AC Hyper-V Cluster – Promise VTrak DAS I/O Bandwidth

We have a client’s SBS 2008 and SQL 2005/8 server being restored to our Hyper-V cluster running on our pilot two node Intel Server Systems connected to a Promise VTrak RAID Subsystem.

We already stood a Windows 7 Enterprise x64 desktop VM up on the cluster to test everything prior to running this restore.

The following are a couple of live performance graphs from the VTrak web GUI:

image

image

Once we have our client’s SBS and SQL up and running in the cluster we will install and run Passmark’s BurnInTest Pro on all of the VMs to stress the CPU, memory, and disk I/O subsystem.

We will then be able to get an idea of what kind of sustained I/O bandwidth the system can handle.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


2 SR1695GPRX2AC Node Hyper-V Cluster SAS HBA Configuration Change

We have been going through various processes of gaining a stable Hyper-V failover cluster using two nodes directly connected to a Promise VTrak E Series RAID Subsystem. At the beginning of this process we started out by installing an Adaptec ASC 1045 Host Bus Adapter for the second path to the VTrak in each node.

While Adaptec’s support site indicated that version 5.0.1.0 of the driver was WHQL signed, we ended up having to approve the driver install, and subsequently the Cluster Validation Wizard called out the driver as unsigned.

image

Besides that, we would get a “Disk Error” during POST from the Adaptec whenever we were ready to bring the VTrak online with shared storage after doing a complete wipe and reload.

LSi 3442E-R SAS HBA

We just finished running the Cluster Validation Wizard with the LSi 3442E-R HBAs (LSI00167) installed and were greeted with success on the drivers:

image

When it comes to installing the LSi 3442E-R HBAs there are a few things that need to be done because they have an on-board RAID chip:

  1. Cold boot the server.
  2. CTRL+C to enter unified LSi SAS BIOS.
    1. Boot Setting for both AXXSASIOMOD and LSi 3442E-R: BIOS ONLY
    2. Save and Exit for each component’s BIOS setting.
    3. Save and Exit for the unified SAS BIOS.
  3. Reboot.
  4. Set the Intel SR1695GPRX2AC server board BIOS boot disk order:
    1. Intel AXXROMBSASMR = 0
    2. 4GB OCZ ATV Turbo USB Flash Drive = 1
    3. VTrak LUNs 0-XX = 2+
  5. Save and reboot.
  6. F6 to enter Boot Menu.
  7. Boot to USB and begin RAID driver load and OS install.
    • Partitioning:
      • 55GB partition for the OS.
      • 75GB Swap File
      • Balance to local storage.

Besides the above, there is a tangible difference in the performance of each node with the LSi cards installed.

So, we will be installing the LSi 3442E-R (LSI00167) part into each node by default for our second SAS connection to the Promise VTrak. Note that this configuration requires a cable with an SFF-8470 connector on one end and an SFF-8088 connector on the other. A total of four cables is required.

And, at the end of cluster creation run number 3 we see:

image

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


Wednesday, 15 June 2011

2x SR1695GPRX2AC Hyper-V Node VTrak E310sD enabled Cluster Quote

This is our basic outline for the two node cluster configuration we have been working on:

The Promise VTrak E310sD RAID Subsystem with 12 drive bays:

image

And our Intel Server System SR1695GPRX2AC server node configuration:

image

  1. 2x Intel Server System SR1695GPRX2AC servers.
  2. 1x Promise VTrak E310sD outfitted with 12x 300GB 15K Seagate SAS drives.
    1. Time to stand up the cluster on an existing domain.

Cost-wise, this configuration makes a lot of sense for clients that can be hit with thousands of dollars per hour for downtime during their peak season, or at any time for that matter.

Whenever we have an opportunity to propose a Failover Cluster option where the Intel Modular Server is our primary offering, we will fall back to a configuration like this to present as a second option.

Failover is an insurance policy. So, the better the insurance the higher the cost.

The Intel Modular Server platform provides a full level of redundancy and scalability where the 2 node standalone server configuration does not.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.

Tuesday, 14 June 2011

2 Node SR1695GPRX + VTrak E610sD Hyper-V Cluster Is Live

We are still ironing out a few things as far as the MPIO settings go, but for the most part we are quite happy with the end results of our configuration testing.

A Hardware Problem

It seems that one of the Adaptec cards has taken exception to the current setup. We have not yet concluded whether the card has failed or needs a firmware reset. We need to determine _which_ card is causing the following on the VTrak:

image

47243 New Events

Last Event: 47272, Port 2 Ctrl 1, Info, Jun 14, 2011 08:44:20 – Host interface has logged out.

The corresponding log in event would follow the one above. Note that none of the other three connections are throwing this error. As a result we are pretty sure it is one of the Adaptec cards, since we connected like with like on the controllers.

  • Adaptec SAS:
    • Server 1: Port x on Controller 1
    • Server 2: Port x on Controller 1
  • Intel 1064e SAS:
    • Server 1: Port x on Controller 2
    • Server 2: Port x on Controller 2

We connected the cables in this manner just for this reason. We will be running through the VTrak’s Web GUI as well as the console session connected via Serial cable to see if we can figure out which port is which on Controller 1.

We have a pair of LSI 3442E-R SAS controllers on the way. We will replace both of the Adaptec cards with the LSi cards, flatten everything, and stand up a cluster for the third time.

Live Migration Success

This is a happy sign:

image

We initiated a Live Migration of the Windows Server 2008 R2 Standard Remote Desktop Services VM from Node-99 to Node-90.

image

The Live Migration ran successfully as shown above. The RDS VM was Live Migrated to Node-90. We left things alone for a while then Live Migrated the VM back to Node-99 without issue.

Cluster Events

There are a few Errors and one Critical error in the logs. All of them are from a problem with bringing additional LUNs online, formatting them, adding them as available storage in FCM, and then making them Cluster Shared Volumes.

The problems stemmed from a deadlock condition in the Cluster Resource Control Manager which we believe came about as a result of the problem path via the failed/flaky Adaptec card.

Conclusion

We are now confident enough to start proposing this Highly Available Failover Cluster to our clients.

Cost-wise, this HA Hyper-V cluster will be a very economical insurance policy against any single point of failure.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


Monday, 13 June 2011

2 Node Hyper-V Cluster Update: We have MPIO Based Storage

Well, it looks as though the problems in the first run through were indeed driver related as well as process related (that is, which step goes before which):

image

We enabled the MPIO feature on the Hyper-V nodes after installing the updated drivers for both the Adaptec (which comes up as unsigned despite Adaptec indicating that it is signed) and the Intel SAS module.

  • Driver install command line:
    • pnputil -i -a Adaptec.inf
    • pnputil -i -a IntelSAS.inf
  • MPIO feature enable (TechNet):
    • Dism /online /enable-feature:MultipathIo
      • Note that this command is case sensitive.
    • mpclaim -n -i -a
      • Claims all of the available multi-path disks (the -n switch avoids an automatic reboot).
    • mpclaim -l -m 4
      • Sets the load balance policy to “Least Queue Depth”.
    • mpclaim -s -d
      • Shows the current load balance policy.

New to Windows Server 2008 R2 SP1 Core and thus Hyper-V Server 2008 R2 SP1 is the MPIO Control Panel:

  • MPIOCPL.exe

Once we go through and configure the MPIO based storage there are a few more steps to take to configure and test the networking, and then we will stand the cluster up.
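On Hyper-V Server those networking steps can also be handled from the command prompt. A minimal sketch follows; the adapter name and addresses are placeholders rather than our production values:

  • netsh interface ipv4 set address name="Management" source=static address=192.168.10.91 mask=255.255.255.0 gateway=192.168.10.1
    • Sets a static address, mask, and gateway on the management adapter.
  • netsh interface ipv4 set dnsservers name="Management" source=static address=192.168.10.10 register=primary
    • Points the node at the domain DNS server.
  • ping -n 4 Node-99
    • Quick check that the node can reach its partner by name before running validation.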

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


Hyper-V Server 2008 R2 SP1 Cluster On Two 1U SR1695GPRX And A Promise VTrak E610sD A Go . . . Sort Of

Well, after working through a lot of the various steps required we did manage to stand up a Hyper-V Server 2008 R2 SP1 based cluster on our two node setup connected to the Promise VTrak.

The setup process needs to be refined and possibly the hardware setup may need to be tweaked.

We did manage to do a successful Live Migration of all of the VMs running on the cluster but we had a number of funky behaviours that need to be ironed out first:

  1. Disk performance was very poor.
  2. The Promise VTrak had a lot of initiator log on and log off messages.
  3. Microsoft MPIO may not have set in properly.
  4. SBS 2011 VM kept losing network connectivity.

We suspect that the primary source of the problems above is the drivers and possibly the Adaptec card.

So, we are starting fresh with an up to date driver set (we used Intel ProSet v16.2 in the last run through and now v16.3 with the same error), and we may toss version 16 of the Intel ProSet drivers altogether if we experience the same network connectivity problems.

We suspect that the Adaptec card may not have what it takes to make this configuration run. We have a couple of LSi 3442E-R SAS cards on the way to swap out the Adaptec cards if today’s run through still has MPIO and SAS connectivity issues.

The LSi cards are due to arrive tomorrow from LSi.

Our configuration post for this setup is here: Hyper-V Two Node Cluster Setup On Intel Server Systems and Promise VTrak RAID Subsystem – Part 1.

Every post related to this trial run through will be found under the following tag since this is the base Intel Server System platform that we will be using: Intel SR1695GPRX

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


Friday, 10 June 2011

Connecting The Two 1U Servers’ Dual SAS Connectors To The Promise VTrak E610sD

We now have the two systems put together, all of the firmware updated, and the RAID 10 array configured on the four 300GB 15K Seagate SAS drives.

The next task is to connect the SAS cables between each SAS connection on the 1U servers to the Promise VTrak E610sD RAID Subsystem as follows:

image

To us, the circle and diamond symbols above each port are a little confusing, as it would seem that one symbol would be associated with the SAS Data IN path and the other with the OUT path.

A close up shot of the VTrak SAS controller diagram:

image

Note the difference.

So, now that we have our servers physically connected to the Promise VTrak we will be firing it up, resetting it to factory defaults, and then getting our disk setup configured in preparation for the Hyper-V cluster.

  1. 1.5 GB LUN for Quorum.
  2. 96.01 GB LUN for Hyper-V cluster guest memory and configuration files.
    • Each node has 32 GB of RAM plus extra.
  3. 50.02 GB LUN for Windows 7.
  4. 50.03 GB LUN for Windows 7.
  5. 130.04 GB LUN for SBS 2011 OS partition.
  6. 130.05 GB LUN for SBS 2011 Data partition.
  7. 75.06 GB LUN for RDS on Win2K8 R2 SP1.

Note that we will initially be LUN Mapping only the first two LUNs to both nodes. This will save on the length of time the Cluster Configuration Wizard takes to test the setup.

We will leave the balance of the storage available for later.
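Once the first LUNs are mapped and visible to a node, bringing one online and formatting it from the node’s command prompt looks roughly like the following in diskpart. The disk number is an example only, so confirm it with list disk before touching anything:

  • diskpart
    • list disk
    • select disk 1
    • online disk
    • attributes disk clear readonly
    • create partition primary
    • format fs=ntfs label=Quorum quick
    • assign letter=Q
    • exit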

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


Thursday, 9 June 2011

Intel Server System SR1695GPRX2AC RAID and SAS Integration

Here are some shots of one of the two nodes going into our entry level Hyper-V Cluster:

image

  • Intel Integrated RAID Module AXXROMBSASMR

We took the stock cables that come with the SR1695GPRX setup and plugged them into the RAID controller.

Note the battery backup cable that is plugged into the RAID controller near the “bottom” relative to the picture’s up/down. It is the black braided cable with a white end.

image

The cache battery sits in its own tray that gets locked into a mount in the server chassis shown above.

Below is a shot of the back of the chassis where the Adaptec and the SAS I/O Module are installed:

image

The Intel SAS I/O Module is sitting just under the Adaptec card installed in the PCI-E slot.

Next up in the configuration process will be setting up the Promise VTrak RAID Subsystem storage for the cluster, the SAS IDs, and LUN Mapping.

From there we will be installing the Hyper-V Server 2008 R2 SP1 OS and configuring each node.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


Wednesday, 8 June 2011

Intel Remote Management Module 3 – Error Opening Video Socket

We have hit a problem with the KVM setup in Intel’s RMM3:

image

Socket Error

Error opening video socket

This particular server was set up here in the shop using the RMM3 connection plugged into our local network. We took the server to our client’s site, plugged everything in, and subsequently went to test connectivity via one of our shop systems.

The RMM3 site came up okay but when we went to start a KVM session the above error happened.

A call into Intel’s support line did not yield any positive results either.

The proper method to reset both the RMM3 module and the BMC back to factory defaults can be found at the bottom of the following Intel Support page:

If you need to reset to factory defaults (e.g. invalid user name or password)

A reset can be done with the syscfg utility. Download syscfg for the 5500 series or for the S3420GP series

  • Unzip the file and copy the folder UEFI_SYSCFG_V501_B23\ preferably to a USB key
  • Boot the server and enter the BIOS
  • Go to the Boot Manager menu, select the option to boot to the EFI shell, and press Enter
  • At the EFI shell prompt, type fs0: (fs0 is the device containing the syscfg utility, e.g. the USB key)
  • Browse to the directory with the syscfg utility
  • The path has to be defined by entering: set SYSCFG_PATH fs0:\<syscfg_efi> where syscfg_efi is the folder containing all the files from the syscfg utility
  • The settings can be reset by entering: syscfg -rfs

Given the above, we will be leaving a small amount of space on any bootable USB flash drive permanently plugged into the server and formatting it FAT32 so that utilities like this can be accessed via the EFI shell.

A post on the Intel Communities Web site was the only result in our searching:

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


Tuesday, 7 June 2011

Hyper-V Two Node Cluster Setup On Intel Server Systems and Promise VTrak RAID Subsystem – Part 1.1

Okay, so on the first run through of getting things together we discovered that things are not always as they seem. That is to be expected considering what we are trying to do here.

Our Intel Server System configuration that will be connected to a Promise VTrak E610sD RAID Subsystem has been changed somewhat to the following:

  • 2x Intel Server System SR1695GPRX2AC server systems.
    • Intel Xeon Processor X3470
    • 32GB Kingston ECC RDIMMs (4x KVR1066D3Q8R7S/8Gi)
    • Intel Integrated RAID AXXROMBSASMR
      • Replaced RS2BL040 as the Integrated RAID part does not take up the PCI-E slot.
      • Plugs into custom PCI-E connector near the front of the board.
    • Intel Battery Backup AXXRSBBU3
      • Replaced the AXXRSBBU7 for the RS2BL040 RAID Controller
      • Attaches to the ROMBSASMR for on board cache protection.
    • 4x 300GB Seagate 15K.7 SAS drives.
      • Drives will be configured in RAID 10 or 5 depending on drive size and client needs.
    • Intel Integrated Server RAID Module AXXSASIOMOD
      • Originally planned for two of these in one chassis. One will be installed.
      • The custom PCI-E ports at the back of the server board are not set up to allow for two modules and a dual module configuration is not available.
    • Intel RMM3LITE for remote management
      • Full out-of-band management for each node.
    • On Board NIC Configuration
      • First through Fourth NICs will be paired together and connected to the production network (two pairs).
      • Fifth NIC will pick up an internal IP for node management.
      • Fifth NIC will also provide network connectivity for the RMM3LITE.
    • External SAS Cables
      • 2x SASx4 SFF-8470 to mSASx4 SFF-8088 cable in 2M length
        • Connects to the Intel SAS Module via SFF-8470
      • 2x mSASx4 SFF-8088 to mSASx4 SFF-8088 cable in 2M length
        • Connects to the Adaptec via SFF-8088
    • Adaptec 1045 (ASC-1045) non-RAID SAS HBA
      • Replaces the second Intel SAS I/O Module.
      • Sister card ASC-1405 with four internal SAS connectors is on the Intel THOL for the SR1695GPRX.
      • No guarantees on compatibility for this card though.

Once we get going with our testing we will test the following configurations:

  1. 2x Intel Server System SR1695GPRX2AC with 1 SAS connection (using RS2BL040 + BBU).
  2. 2x Intel Server System SR1695GPRX2AC with 2 SAS connections as per above.

Even though the ASC-1405 is on the approved list there are no guarantees that the identical card with the external connection (ASC-1045) will work in this configuration.

We are confident that the single SAS connection setup should work with the Promise VTrak E610sD configured for the SAS IDs on each Intel SAS I/O Module. This will be the base configuration we will present to our smaller clients that are interested in a High Availability option for their business.

If we are unable to get the dual SAS connection configuration to work we will be looking at some alternatives to the Intel platform to provide both the server and storage.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


Saturday, 4 June 2011

Hyper-V 2x 1U/2U Node Setup – Document The SAS IDs

We had a change in plans, as the SBS 2003 to SBS 2011 migration we were supposed to run this weekend will not start until this coming week.

So, we now have a day to test run putting together a Hyper-V Server 2008 R2 cluster based on the following:

  • Two Intel Server System SR1695GPRX2AC servers.
    • Intel Xeon Processor X3470, 32GB ECC Kingston RAM, RS2BL040 RAID + BBU, 300GB 15K SAS x4 in RAID 5, dual external Intel SAS connectors (2x AXXSASIOMOD).
  • One Promise VTrak E610sD RAID Subsystem.
    • Our demo purchase unit was the E610sD but the configuration will also work with the E310sD unit as they are identical other than the number of drives the unit can hold.
  • Four Adaptec 2231500-R mSASx4 cables.

When assembling the Intel Server System SR1695GPRX2AC servers it is important to note the SAS ID of each SAS module and the position it was installed into. We would then install a label with that SAS ID above the SAS port on the back of the server to simplify routing the Adaptec SAS cables to the Promise VTrak.

image

With that information in hand we will be better able to figure out the LUN Mapping when configuring the Promise VTrak unit storage for the cluster nodes.

Business Opportunity

The primary goal of this huge expenditure of time and product purchases on our part is to verify and test this configuration on behalf of our clients.

If our configuration and testing proves successful then we have three network refreshes over the next few months that we will be recommending this configuration for (or 1U/2U dual Intel Xeon Hex Core E5645/E5649 based nodes).

We see this High Availability configuration as a less expensive alternative to the Intel Modular Server where the IMS just does not make business sense or financial sense.

Cluster Node Configuration

  • NODE 1: IOMOD 1 SAS Address: _____________________________
  • NODE 1: IOMOD 2 SAS Address: _____________________________
  • NODE 2: IOMOD 1 SAS Address: _____________________________
  • NODE 2: IOMOD 2 SAS Address: _____________________________

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


Monday, 9 May 2011

Setting Up an Intel S3420GPRX Entry Level Server Board In An Intel Pedestal Chassis

We are building out a number of pedestal based servers over the next couple of months that will be standalone Hyper-V hosts for SBS 2011 Standard, at least a Remote Desktop Services server, at least one Windows Desktop OS, and in a few cases a dedicated BlackBerry (BESx) server.

The perfect platform for this setup is the Intel Server Board S3420GPRX (Intel Ark Site) as it has four Intel Virtualization Technology accelerated Gigabit NICs that can be teamed together along with a fifth Gigabit NIC that can be bound to an Intel Remote Management Module 3 (Intel Product Site) for external Web based console access.

The catch with this setup though is that the S3420GPRX is designed to be installed in a 1U form factor chassis. So, the retail box comes with _no_ I/O shield for the back of the pedestal chassis.

However, we are able to order them . . . hopefully:

  • Intel S3420GPRX I/O Shield: AGPRXIO

We will see what our distribution channels say, as we have put out inquiries to all of them and we have a request in with our Intel sales representative.

So far, no one has indicated the what/where/when with regards to the parts. But, we remain hopeful.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.


Monday, 29 November 2010

Intel’s New Server System SR1695GPRX 1U 1P 4 Hot Swap Drive

We just received a new Intel Server System SR1695GPRX 1U uni-processor system that has four hot swap SAS/SATA drive bays capable of handling both 3.5” and 2.5” drive sizes out of the box.

image

This new Intel Server platform is an excellent fit for the SMB space in that it offers the same enterprise class 1U server features as a dual processor unit but at a single processor price level.

This configuration has been on our wish list for _a long time_ now.

  • 1U profile provides for lots of room in that 24U enclosure.
  • Single processor leads to a lower cost point.
    • A single processor server board with extended RAM capabilities.
  • 4 hot swap drive bays give us access to RAID 10 configurations.
  • A full height PCI-E slot for a high performance RAID controller.
  • Intel Remote Management Module 3 (LITE) capable.
    • Takes over NIC5 to provide out-of-band console access via Web.

image

As we can see, the server system also has dual redundant power supplies, which are absolutely critical to providing our clients with the needed hardware redundancy. On the right hand side at the bottom of the above shot are two ports where we can plug in a couple of additional I/O modules, such as dual Intel Gigabit server NICs, to bring the total number of NICs available on this box to eight (not counting the RMM3LITE NIC).

  • The SR1695GPRX is dual power supply capable (watch that part number!)
  • Can add another set of dual port Intel Gigabit server NICs.
  • Can add an entry level performance SAS RAID I/O module without using the PCI-E slot.
    • Some RAID I/O modules have the ability to add a cache battery.

Given that we can have up to eight NICs in this system, we now have an excellent platform for a single server Hyper-V host when deploying SBS 2008 plus the second server, or as an inexpensive node in a Hyper-V cluster attached to iSCSI storage for Cluster Shared Volumes.

image

The shot above is just prior to dropping the airflow baffle back on between the fans and the CPU and the memory.

The RAID controller is an Intel RS2BL080 with battery backup situated near the top of the shot (the shiny object). We have an RS2BL040 on order but it will not be here until the end of the week. Since Intel RAID controllers can be interchanged we have this Hyper-V server built up already and will swap in the production controller when it shows up.

From now on this particular hardware configuration will be our principal offering for any situation where we need a 1U server system, whether the client is looking for standalone server systems, a couple of virtualized server operating systems, or even a small two node cluster based on two of these units and one or two NAS devices that act as iSCSI targets.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*Our original iMac was stolen (previous blog post). We now have a new MacBook Pro courtesy of Vlad Mazek, owner of OWN.
