So, is this a good thing or a bad thing?
Yeah, this particular box has been up for 16 months.
In August of 2011, one of our two switches failed. That switch just happened to be the one this particular box was plugged into. Prior to that, the box had been up and running since Server 2008 R2 was installed (just prior to R2's RTM).
Its sole purpose in life is to serve terabytes of data via a teamed gigabit NIC pair.
Note the CPU setup. It has a pair of Intel Xeon 5130 CPUs and 16GB of RAM. Yeah, it’s a bit long in the tooth. ;)
Given the server’s role serving files, we tend to leave it alone. No Web browsing, no desktop access, nor any other interaction is needed unless something goes wrong with it.
Well, today the backplane decided to hiccup, along with a 500GB Seagate ES series drive that died in it (we’ve had a _huge_ failure rate on these drives over the last two or more years).
It is time for this old box to be retired.
Its replacement will be a Hyper-V Failover Cluster built on two or three Intel Server Systems SR1695GPRX2AC 1U servers, each with an Intel Xeon X3470 and 32GB of ECC RAM.
Is it a good thing for one to leave a box up and running for months or even years at a time?
Is the risk worth it?
In some cases we have no choice: where three shifts run 24/7/365, coordinated downtime is about the only way into these boxes. That said, SBS tends to start choking around the 90-day mark (for both SBS 2008 Standard and SBS 2011 Standard), so those reboots with patch cycles tend to happen every quarter anyway.
To mitigate the risk, we need good monitoring in place for edge access and AD authentication attempts (especially failures), a proper edge configuration that blocks both inbound and outbound packets by default, and other strategies like a strict hands-off policy for the box/VM.
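As a rough illustration of the authentication-monitoring point, here is a minimal sketch, in Python, of flagging a burst of failed logons from one source. The event tuples and threshold here are hypothetical; in practice the records would come from the Security event log (failed logons are Event ID 4625 on Server 2008 R2) via whatever monitoring agent is in place:

```python
from collections import Counter

# Hypothetical parsed auth events: (username, source_ip, succeeded).
# A real deployment would pull these from the Security event log.
FAIL_THRESHOLD = 5  # alert when one source reaches this many failures

def failed_logon_alerts(events, threshold=FAIL_THRESHOLD):
    """Return source IPs whose failed-logon count reaches the threshold."""
    failures = Counter(ip for _user, ip, ok in events if not ok)
    return sorted(ip for ip, n in failures.items() if n >= threshold)

if __name__ == "__main__":
    sample = [("admin", "203.0.113.9", False)] * 6 + [("jo", "192.0.2.4", True)]
    print(failed_logon_alerts(sample))  # the brute-force source stands out
```

The idea is simply to surface anomalies on a box nobody logs in to; a quiet server that suddenly shows repeated auth failures is exactly the kind of thing that should page someone.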
In the end, the risks need to be weighed against the benefits of avoiding reboot and/or patch cycles for a lengthy amount of time.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book