Showing posts with label SBS 2003 SP1 Premium. Show all posts

Monday, 18 May 2009

SBS 2003 to SBS 2008 Migration – WSUS, SUS and GPO Deleting Step Caution

The migration we are running this weekend is an SBS 2003 Premium RTM (with SP1 installed) to SBS 2008 migration.

So, in this particular case, we have the manually created GPOs in place for the downloaded install of WSUS v2.

The Migration steps call for the deletion of the following SBS 2003 R2 based GPOs:

  • Small Business Server Update Services Client Computer Policy.
  • Small Business Server Update Services Common Settings Policy.
  • Small Business Server Update Services Server Computer Policy.

In this case, and any other where SUS/WSUS was manually installed on SBS 2003 RTM, we need to be mindful of which GPOs to delete.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

*All Mac on SBS posts will not be written on a Mac until we replace our now missing iMac! (previous blog post)

Windows Live Writer

Friday, 15 May 2009

SBS 2003 to SBS 2008 Migration Begins Tomorrow

This being a long weekend in Canada, it is a good time to take care of an in-depth project without impacting our client’s business productivity.

The project we are running this weekend is an SBS 2003 to SBS 2008 migration.

Our client has purchased a new Intel X3370 Quad Core based server with Intel RAID and hot swap capabilities.

We have already done a practice run through using a ShadowProtect image to Hardware Independent Restore to a lab box here in the shop. Things went fairly smoothly, considering the box is around four years old and has been misbehaving every once in a while for the last year or so.

We will also be upgrading all of their Windows XP Professional workstations to Windows Vista. From there, we will be distributing Office 2007 Pro Plus via Group Policy, and configuring their new SBS 2008 domain along the guidelines we have for our other SBS 2008 clients.

We will be taking a basic hardware router with us for the initial migration process as their existing SBS 2003 will be repurposed as a dedicated ISA 2006 server that we will install at a later date.

The ShadowProtect images we will create when we start the process will remain separate and available if we need to fall back on the original hardware with SBS 2003 reimaged to the box.

Given the results of the practice run through, we are pretty confident that the migration process will be successful.

Happy Victoria Day Long Weekend to our Canadian readers! :)

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book


Tuesday, 14 April 2009

SBS Disaster Recovery with a Second DC Caveats

We went through one of the toughest recovery processes ever last year at this time.

The SBS server had a catastrophic RAID array failure caused by bad sectors that did not show up until a reboot for updates.

The slow corruption of the drive data found its way into the ShadowProtect backup images as well. We were fortunate, though, to have support via the StorageCraft forums on how to mount the images and run some clean-up utilities after the fact.

We ultimately managed to recover the SBS server here in the shop after moving the desktop clients over to their backup DC and mirrored shares, so they were at least functional while we ran the recovery.

The recovery odyssey was chronicled in the following blog post:

Note that it is a rather long read as it covers the full gamut of the recovery process including all of the roadblocks presented by having a second DC on an SBS domain. There are a lot of pearls (Lieutenant Colonel Frank Slade) in that post! ;)

Even with all of the hiccups on the road to recovery, the only problem our client had on the Monday morning after we Swung (SBS Migration) in the new hardware-based SBS was with two users who happened to have changed their passwords during the previous week while the SBS was offline.

The scene:

  • SBS 2003 SP1 Premium on Intel hardware.
  • Win2K3 R2 Standard as backup DC, DHCP, mirrored shares, 2NIC NAT – team unpaired for failure state.
  • 18 users in an Accountant’s office (during peak season too).
  • Approximately 350-400GB of data to preserve.
  • Success Rate: 99.999% (two users’ password calls)

Jeff Middleton’s SwingIT Kit, tied into ShadowProtect’s backup and recovery abilities, was the key to this disaster recovery success.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book


Saturday, 4 October 2008

SBS - Event ID 537 NTLM Logon Errors Solved - Sorta

This was the last post on the subject among many since we began to see the errors: SBS - Event ID 537 NTLM Logon Errors - 0x80090308 and Trend.

Robert Crane has the fix: Login errors after Trend upgrade.

As per the comments also in our above post:
  1. In ADUC: Create a new User: Trend and set password: 0hReally?
  2. Add the user to Internet Users for SBS 2003 Premium to allow access through ISA.
  3. Set the username and password in Trend's Web Reputation proxy settings.

The errors should stop. The big thing is to make sure that the user name and password do not combine to more than 14 characters. We cannot even use domain\Trend, since the domain characters also count toward the limit.
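Since that limit is easy to trip over, here is a quick sanity check for a candidate account — a hedged sketch of the 14-character behaviour we observed; the helper name is our own:

```python
def trend_account_ok(username: str, password: str, limit: int = 14) -> bool:
    """True when the combined user name and password stay within the
    length limit we observed with Trend's Web Reputation proxy auth.
    A 'domain\\user' form counts the domain characters too."""
    return len(username) + len(password) <= limit

# "Trend" (5) + "0hReally?" (9) = 14, so the account above just fits:
print(trend_account_ok("Trend", "0hReally?"))          # True
# Prefixing the domain pushes the combination over the limit:
print(trend_account_ok(r"domain\Trend", "0hReally?"))  # False
```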

We have all seen all manner of code slip by with some pretty funky bugs. This has to be one of the better ones to have slipped past the "quality control" people over at Trend.

For us, this situation, the lack of support, and the fact that one of our clients was virus free until we installed Trend A/V on their systems pretty much put the final nail in the coffin.

We will go back to Symantec for the time period between now and when our clients have SBS 2008 installed. From there we will run with ForeFront and Live OneCare on SBS 2008.

We are also in the process of proposing ExchangeDefender to all of our clients. It costs very little per month, provides great protection for all incoming and outgoing e-mail, and provides a little extra monthly revenue for us.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists

*All Mac on SBS posts are posted on our in-house iMac via the Safari Web browser.

Saturday, 2 August 2008

SBS 2K3 Premium - Configuring an SSL Wildcard Cert

Finding information on getting a third party SSL certificate installed on SBS Premium is a struggle.

In our case, we are looking to get away from the SBS self-issued certificate as much as possible. The number of support-related issues around that setup can be eliminated with a rather inexpensive investment in a third party certificate.

The process for setting up the certificate is rather straightforward. The Official SBS Blog has a post on the initial part: How to Install a Public 3rd Party SSL Certificate on IIS on SBS 2003.

We created a dummy Web site in IIS, issued the certificate request from there, obtained the certificate from DigiCert, imported it into the Intermediate Certification Authorities store, and finally imported the certificate via the dummy site's certificate wizard. All of these steps are clearly outlined in the above blog post.

The blog author indicates that a further post on installing that certificate into ISA is forthcoming, but none appears to have been published.

The Configure Email and Internet Connection Wizard (CEICW) does have the ability to import a third party certificate, but it wants a *.cer file, and that approach did not work in the many times we tried to configure things that way.

So, that left us in a quandary: How do we get that certificate tied into ISA?

Understanding how the CEICW configures IIS and ISA together is an important step toward discovering how to get that certificate working.

With ISA installed on SBS, the configuration used to keep an end to end SSL tunnel between the user and IIS is called an SSL Bridge (MS TechNet Article).

When the browser requests https://rww.mydomain.com/remote and an SSL tunnel is established, ISA actually decrypts the tunnel to inspect the packets. ISA then re-encrypts the packets by establishing a subsequent SSL tunnel into the local IIS server.

When we look at the SBS ISA and IIS SSL setup from the user's perspective, this bridging arrangement reveals where the third party certificate needs to be installed.

It is the Internet facing site that needs that certificate along with OWA, OMA, and direct SharePoint access.
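The bridge's data flow can be sketched in a few lines — a toy model only (the "encryption" here is a stand-in tag, not real TLS) to show why the Internet-facing tunnel is the one that needs the public certificate:

```python
import base64

def tls_encrypt(data: bytes, tunnel: str) -> bytes:
    # Stand-in for TLS encryption: tag the payload with its tunnel name.
    return base64.b64encode(tunnel.encode() + b"|" + data)

def tls_decrypt(blob: bytes, tunnel: str) -> bytes:
    tag, _, data = base64.b64decode(blob).partition(b"|")
    if tag != tunnel.encode():
        raise ValueError("wrong tunnel")
    return data

# Tunnel 1: browser -> ISA, secured by the public third party certificate.
request = tls_encrypt(b"GET /remote", "public-cert-tunnel")

# ISA terminates tunnel 1 and inspects the plaintext packets...
plaintext = tls_decrypt(request, "public-cert-tunnel")

# ...then re-encrypts into tunnel 2: ISA -> IIS, the internal leg of the bridge.
inner = tls_encrypt(plaintext, "internal-cert-tunnel")
print(tls_decrypt(inner, "internal-cert-tunnel"))  # b'GET /remote'
```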

The process is very simple:
  1. On the SBS server open the ISA manager.
  2. Click on the Firewall Policy item.
  3. Double click on any SBS xxx Publishing Rule that uses the SBS Web Listener.
  4. Click the Listener tab.
  5. Click the Properties button beside "SBS Web Listener".
  6. Click the Preferences tab.
  7. Under SSL: Click the Select button.
  8. The new third party certificate should be one of the available ones, click on it.
  9. OK.
  10. Apply & OK.
  11. Double click on the SBS Windows SharePoint Services Web Publishing Rule.
  12. Listener tab.
  13. Properties button.
  14. Preferences tab.
  15. Select button.
  16. Choose the correct certificate as above.
  17. OK.
  18. Apply & OK.
  19. Apply in ISA Manager.
From an external client, connect to the Remote Web Workplace and view the certificate. It should reflect the newly installed third party certificate. Connect directly to the SharePoint Companyweb site: https://rww.mydomain.com:444/ and verify the certificate there.

An important note regarding SSL wildcard certificates: For Outlook 2003/2007 clients using Outlook Anywhere (RPC/HTTPS), the msstd:rww.mydomain.com setting in Outlook needs to be changed to: msstd:*mydomain.com in order to avoid this:


Microsoft Office Outlook

There is a problem with the proxy server's security certificate. The name on the security certificate is invalid or does not match the name of the target site rww.mydomain.com.

Outlook is unable to connect to the proxy server. (Error Code 0)
Now that we have discovered the process order and configuration steps, we are migrating all of our clients over to third party certificates.

Managing our clients' SSL certificate needs is one small service addition we have made to our managed services portfolio.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists

*All Mac on SBS posts are posted on our in-house iMac via the Safari Web browser.

Friday, 1 August 2008

DigiCert Gets Our Vote for Wildcard SSL Certificates

In our search for a wildcard SSL certificate (previous blog post), we looked into a large number of SSL providers.

While all providers will provide us with a *.mydomain.com wildcard SSL certificate, only DigiCert gives us the option to tag the certificate with Subject Alternative Names.

What does that mean? That means that we can setup our certificate to look somewhat like this:


  • *.mydomain.com
    • mail.mydomain.com
    • rww.mydomain.com
    • oma.mydomain.com
    • mydomain.com
In the case of Windows Mobile 5 devices, having the actual URL of the HTTPS site OMA will use to access Exchange listed in the SSL certificate avoids compatibility issues.
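To see why the bare domain appears as its own Subject Alternative Name, here is a rough sketch of certificate name matching — our own toy matcher, not a library API; real validators follow RFC rules this only approximates:

```python
def covers(cert_name: str, host: str) -> bool:
    """Rough sketch of certificate name matching: exact match, or a single
    left-most wildcard label (roughly the rule browsers apply)."""
    if cert_name == host:
        return True
    if cert_name.startswith("*."):
        # '*.mydomain.com' covers exactly one extra label, no more.
        return (host.count(".") == cert_name.count(".")
                and host.endswith(cert_name[1:]))
    return False

cert_names = ["*.mydomain.com", "mail.mydomain.com", "rww.mydomain.com",
              "oma.mydomain.com", "mydomain.com"]

for host in ("rww.mydomain.com", "mydomain.com", "deep.sub.mydomain.com"):
    print(host, "->", any(covers(n, host) for n in cert_names))
```

Note that the bare `mydomain.com` is only covered because it is listed explicitly: the wildcard alone does not match it.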

For those of our clients that have multiple SBS sites, or an SBS site with multiple branch offices, the wildcard SSL certificate will make things like Remote Web Workplace and other SSL secured Internet facing services simpler to access and manage.

Their price includes as many sites and servers as needed. There is no extra charge for additional sites, servers, or reissued certificate requests. The price is the price is the price! ;)

When we placed our order for a wildcard certificate, we heard back from DigiCert by phone within a couple of hours. Some questions needed to be answered to confirm our company's identity before the certificate release would happen.

Finally, their Web management interface is very straightforward to operate when requesting or managing our certificates.

For us SBSers, DigiCert is definitely a company to look at for your wildcard SSL needs.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists


Thursday, 31 July 2008

SBS - Exchange Information Store is Corrupt? Recreating the Store

A while back we dealt with a catastrophic failure where our backup images were also toast. Links to the relevant posts are at the end of this one.

The Exchange databases managed to be recovered, and they actually mounted on the new SBS box that we built via the SwingIT method to replace the failed one.

The databases would mount initially, but later on, they would not mount without a manual start of the Information Store service.

It was clear that we managed to get things back, but not back to 100% on the Exchange side of things.

Given that all of these events occurred during the client's peak season, we left well enough alone until the time was right for us to start working on clearing up the issue.

That time was yesterday. Total time into this project, based on 15 user mailboxes at about 2.5GB, was 12 to 14 hours. Note that a good portion of this time was spent testing things before going ahead with the procedure. A good ShadowProtect backup was created just before beginning the process.

So, we attempt an integrity check on the offline store:

ESEUtil /G - Database is CORRUPTED!

Then, we try to run ESEUtil in defragment mode:

ESEUtil Error -613

Things are not looking too good by this point.

One pair of last ditch efforts:
  • C:\Program Files\Exchsrvr\bin>isinteg -fix -t "c:\program files\exchsrvr\mdbdata\priv1.edb" -test alltests
    • Once this utility finished, no errors were found! Huh?!?
  • eseutil /mh "c:\program files\exchsrvr\mdbdata\priv1.edb"
    • Results with no errors, but current databases are in Clean Shutdown state.
Where to next? We need to get rid of the bad databases while preserving all of the client Exchange data.

Enter Exmerge: The key instruction set that is particularly applicable to SBS: KB313184: How to recover the information store on Exchange 2000 Server or Exchange Server 2003 in a single site. After reading the ExMerge Manual, the other resources above, and following this Knowledgebase article, we were good to go through the process successfully.

We made sure to shut down both the POP3 Connector and the SMTP service before going any further. We did not unplug the DSL modem, as Internet access might have been needed at some point through the process.

Note that when it comes time to begin the import process, the SMTP service needs to be running. At that point we unplugged the DSL modem, so that no new mail would arrive mid-import, and started the service.

We extracted ExMerge and its .ini file to the Exchange bin folder (C:\Program Files\Exchsrvr\bin). Once the temporary permissions mentioned above were set (a log off and on was required for them to take), we were able to successfully run the ExMerge GUI by double clicking the application and initiating the Two Step Export.

We chose to export all mailbox related items except the folder permissions, as no Outlook user was sharing anything from their profile. Note that in our case ExMerge choked on the Dumpster during the export for all 15 mailboxes.

Critical step: Make sure to manually export all Outlook clients to PST before beginning the Import Steps!

Once we had the ExMerge and Outlook PST files in hand, we made a backup copy of the Exchange MDBDATA folder contents. We then proceeded to rename the two .edb files and delete the rest of the files as per the instructions.

A broadcast email sent via our test account to the organization caused the new store to fire up everyone's mailboxes, just as the instructions state. This step is critical to getting ExMerge to recognize the mailboxes we want to import into.

Run ExMerge in import mode and we were eventually greeted with:

ExMerge Import succeeds with some errors

Most of the export and import errors were around the Dumpster, Deleted Items, and some Inbox issues.

The total volume of the stores increased by about 300MB once the process was finished.

A rebuild of the Offline Address Book will be required once the dust settles: KB 905813: You receive an error message when you try to synchronize the offline address list on an Exchange Server 2007 or Exchange Server 2003 server while you are using Outlook 2003: "0x8004010F".

For the Outlook clients themselves, once the process has completed, we need to kill the original Exchange Profile and reestablish it. There are keys associated with the user's Outlook OST and thus the existing OST will not work. Starting Outlook will only generate errors:

Outlook: Exchange is currently in recovery mode.

We made sure Outlook was not running, then removed the default Exchange profile via the Mail icon on both XP Pro and Vista. For those Outlook profiles that had not yet created an Archives.PST file, we created a Temp.PST file to point all new items to, since the Outlook profile will complain if Mailbox - User Name is the only option.

Once an Exchange connection has been established by Outlook, and the post recovery email test message is in the user's inbox, we know that we are in the right place. For larger profiles the OST generation process will take some time, so be prepared to move onto another user profile while waiting.

If a Temp.PST file was created above, use the Data Files button to remove it once Outlook is happy.

One thing to keep in mind: When doing the manual PST creation from the Outlook clients, make sure to put those files somewhere on the network in one place. Once the entire organization's PST files are created, move those PST files, or rename the folder that they are in so that Outlook is not able to find them ... just in case.

The exported PST files will be needed ... that is almost 100% guaranteed.

Further blog posts related to this server crash:

Further Exchange related reading and resources:

This post is put together in what little time we have, so it is not meant to be an exact step-by-step. It does flow according to the process, but the indicated documentation will facilitate the necessary deep dive that is required to get a complete grasp of what needs to happen.

So far so good, our client's users are happy. And, hopefully, we will no longer be hearing about strange Outlook behaviours!

This is one of the biggest recovery situations in the history of our company, and, also hopefully, it is now closed! :D

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists


Wednesday, 16 July 2008

SBS 2003 Premium - KB948110 SQL2K MS08-040 Caveat

One of our SBS 2K3 Premium servers stalled while running the security update for SQL 2000 KB948110: MS08-040: Description of the security update for SQL Server 2000 GDR and MSDE 2000: July 8, 2008.

We made sure to run the server updates first, then the Exchange ones, then we went to the SQL 2005 update for Service Pack 2.

After that, we ran the above update leaving WSUS V3 SP1 for last.

Susan makes a good point about SQL patching in her blog post: The Mess of SQL Patches. Essentially, we can let them lie for the most part.

Since we were here at the client site to talk about their upcoming server refresh, we also brought their backup drives to rotate out too.

Both servers needed updates so we ran them incrementally on both. We have the SBS 2K3 Premium SP1 with Windows Server 2003 SP2 installed and a Windows Server 2003 R2 Standard acting as a data mirror and second DC.

The Server 2003 R2 box took its updates with no issues.

The SBS 2K3 box hung up during the SQL 2000 update at the point of extracting the files. At least, that is the process that came up when we tried to cancel the task and subsequently had to End Task MSIEXEC when things had not changed 10 minutes later.

We ran an update to ShadowProtect to version 3.2 which also requires a reboot.

So, we rebooted the box and ran the WSUS SP1 update successfully.

For those running SBS 2K3 Premium and Standard there are caveats for this particular SQL 2000 update:
  • SharePoint users who upgraded from SQL Server 2000 Desktop Engine (WMSDE) to any other edition of SQL Server 2000 (for example, SQL Server 2000 Standard Edition) may be incorrectly offered a WMSDE update for this security release. This problem can occur if the SQL Server 2000 edition is not patched correctly with SQL Server 2000 Service Pack 4 after the upgrade from WMSDE. The WMSDE update may cause SharePoint to stop working. To resolve this problem if this occurs, follow these steps to restore SharePoint functionality...
  • Microsoft Internet Security and Acceleration (ISA) Server 2004 and ISA Server 2006 could be affected by this update in the following ways:
    • The MSSQL$MSFW service is stopped and then restarted when the associated database instances are updated. This action occurs if Microsoft SQL Server 2000 or Microsoft SQL Server 2000 Desktop Engine (MSDE 2000) is installed on the computer that is running ISA Server. This action also stops the Microsoft Firewall service. Therefore, the SQL Server installer tries to return the Microsoft Firewall service to the same state that it was in before the update was started. Because the update installer cannot control services on a remote server, you must monitor and possibly restart the Microsoft Firewall service and the dependent services if ISA Server is configured for remote SQL Server logging.
  • ISA Server 2006 installs MSDE 2000 together with SQL Server 2000 SP4.
Take note of these issues prior to installing. Make sure to have a good backup ... just in case.

Links:

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists


Saturday, 26 April 2008

SBS Disaster Recovery - Second DC SBS Restore Caveats

The SBS domain recovery we worked on for a client that began last week has presented us with a number of foreseen and unforeseen challenges.

Once we had them online with their backup systems, the goal was to Swing their SBS over to new hardware. In this particular situation, we have a secondary DC installed on the domain.

It was installed for this very reason: To provide Active Directory, DNS, and Internet access along with VPN access to company files if needed.

It was our goal to use the Swing method to introduce a new SBS box utilizing the old SBS box as the starting point for the Swing.

We had also grabbed the most recent ShadowProtect images of the backup DC for any unforeseen needs.

Once we had battled the old SBS server down to some form of stability, the first thing we did was attempt to join a system to the SBS domain. This is the message we received during the attempt:

Computer Name Changes
The following error occurred attempting to join the domain "mysbsdomain.lan":
The directory service was unable to allocate a relative identifier.
This did not necessarily yield any clues at first.

After a bunch of searching around, the closest thing we could find was at Experts-Exchange: The directory service was unable to allocate a relative identifier (keep in mind they are now subscription based). That in turn led to this MS KB article for Server 2000, KB248410: Error Message: The Account-Identifier Allocator Failed to Initialize Properly, and to this one for Server 2000/2003, KB822053: Error message: "Windows cannot create the object because the Directory Service was unable to allocate a relative identifier".

The KB articles gave us some repadmin tool commands to test things out, which led to some clues as to the source of the problem.

At this point, the old SBS box was plugged into a stand-alone Gigabit switch. The NICs had the appropriate IP setups and teaming, and the Internet NIC was plugged into our workbench network in case we needed outside access.

We knew that the old SBS box could not communicate with the backup DC. This is a given since the SBS box was sitting in our shop and not the client's site.

However, not having communication with the backup DC should not be a problem right?

So, we figured that since the Backup DC was nowhere to be found, we would try something simple like adding a user on the SBS box itself via the wizard.

We ran the Add User Wizard.

This is the message we received:
You must be a member of the Small Business Server Administrators or Power Users group to create computer accounts. Contact your administrator.
Oh. So, we tried the Add Computer wizard and received the same message. But, we were logged in as the domain admin.

In the SBS Console however, we were able to open ADUC and make changes to object properties or GPMC and modify policies. So, this at least confirmed that we were into the server with a domain admin account and our domain admin privileges.

A more detailed search into the SBS event logs brought us to this log entry:
Event Type: Error
Event Source: SAM
Event Category: None
Event ID: 16651
Date: 4/19/2008
Time: 1:56:52 PM
User: N/A
Computer: MYFailed-SBS01
Description: The request for a new account-identifier pool failed. The operation will be retried until the request succeeds. The error is " The requested FSMO operation failed. The current FSMO holder could not be contacted."
For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
The error does not make sense since the SBS box holds all FSMO roles. There were consistent NTDS KCC warnings in the logs too:
Event Type: Warning
Event Source: NTDS KCC
Event Category: Knowledge Consistency Checker
Event ID: 1308
Date: 4/19/2008
Time: 2:01:32 PM
User: NT AUTHORITY\ANONYMOUS LOGON
Computer: MySBSServer
Description: The Knowledge Consistency Checker (KCC) has detected that successive attempts to replicate with the following domain controller has consistently failed.
Attempts: 31
Domain controller: CN=NTDS Settings,CN=MyBackupDC,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=MySBSDomain,DC=LAN
Period of time (minutes): 6902
The Connection object for this domain controller will be ignored, and a new temporary connection will be established to ensure that replication continues. Once replication with this domain controller resumes, the temporary connection will be removed.
Additional Data Error value: 8524
The DSA operation is unable to proceed because of a DNS lookup failure. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
Again, these errors are to be expected as the backup DC was not online.
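Stepping back, the wizard failures line up with how RID allocation works: each DC hands out SIDs from a local RID pool and must contact the RID master for a fresh pool when it runs dry. A toy model of that chain — a sketch only, not the real RID manager, and all names here are illustrative:

```python
class RidMaster:
    """Toy stand-in for the RID FSMO role holder."""
    def __init__(self, reachable: bool = True):
        self.reachable = reachable
        self.next_rid = 1000

    def allocate_pool(self, size: int = 500) -> list:
        if not self.reachable:
            raise ConnectionError("The current FSMO holder could not be contacted.")
        pool = list(range(self.next_rid, self.next_rid + size))
        self.next_rid += size
        return pool

class DomainController:
    def __init__(self, master: RidMaster):
        self.master = master
        self.pool = []

    def create_account(self, name: str) -> int:
        if not self.pool:
            # Mirrors SAM event 16651: a fresh pool is requested on demand.
            self.pool = self.master.allocate_pool()
        return self.pool.pop(0)  # the new account's relative identifier

# With the RID master reachable, account creation just works:
dc = DomainController(RidMaster(reachable=True))
print(dc.create_account("testuser"))  # 1000

# With the role holder not considered valid, creation fails as we saw:
broken = DomainController(RidMaster(reachable=False))
try:
    broken.create_account("newuser")
except ConnectionError as e:
    print("Add User Wizard fails:", e)
```

In our case the SBS box itself held the RID master role, but until it replicated once it did not consider that role valid — hence the errors above.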

By this time we were firing up a Xeon 3070 based box to do a Hardware Independent Restore of our client's backup DC as it was looking like we were going to need it.

Finally, about an hour later, there was the final clue to the mess we found ourselves in:
Event Type: Warning
Event Source: NTDS Replication
Event Category: Replication
Event ID: 2092
Date: 4/19/2008
Time: 2:56:32 PM
User: NT AUTHORITY\ANONYMOUS LOGON
Computer: MySBSServer
Description: This server is the owner of the following FSMO role, but does not consider it valid. For the partition which contains the FSMO, this server has not replicated successfully with any of its partners since this server has been restarted. Replication errors are preventing validation of this role.

Operations which require contacting a FSMO operation master will fail until this condition is corrected.

FSMO Role: DC=MySBSDomain,DC=LAN
User Action:

1. Initial synchronization is the first early replications done by a system as it is starting. A failure to initially synchronize may explain why a FSMO role cannot be validated. This process is explained in KB article 305476.

2. This server has one or more replication partners, and replication is failing for all of these partners. Use the command repadmin /showrepl to display the replication errors. Correct the error in question. For example there maybe problems with IP connectivity, DNS name resolution, or security authentication that are preventing successful replication.

3. In the rare event that all replication partners being down is an expected occurance, perhaps because of maintenance or a disaster recovery, you can force the role to be validated. This can be done by using NTDSUTIL.EXE to seize the role to the same server. This may be done using the steps provided in KB articles 255504 and 324801 on http://support.microsoft.com.

The following operations may be impacted:

Schema: You will no longer be able to modify the schema for this forest.
Domain Naming: You will no longer be able to add or remove domains from this forest.
PDC: You will no longer be able to perform primary domain controller operations, such as Group Policy updates and password resets for non-Active Directory accounts.
RID: You will not be able to allocation new security identifiers for new user accounts, computer accounts or security groups.
Infrastructure: Cross-domain name references, such as universal group memberships, will not be updated properly if their target object is moved or renamed.

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
We were still nowhere near having the backup DC restored on our box here in the shop. So, we created a VPN connection to the production backup DC and forced a replication across the VPN.

At first we were expecting the replication to take a while, but it was relatively quick, and we now had a viable SBS DC to work with.

We have now learned valuable lesson number 1 of SBS recovery with a secondary DC: When recovering an SBS DC that has other DCs in the SBS Active Directory forest, we need one of those DCs for the initial replication. Given that our recovered SBS DC was in good shape at that point, replicating with the production backup DC was okay to do.

However, if things were a lot more messed up, then the only option would be to have a recovered version of one of the other DCs attached to the recovered SBS' isolated network and forcing replication with it or using it as the source for the Swing Migration.

As soon as we saw a successful KCC message in the logs, we ran the Add User Wizard and sure enough we could create users and computers again.

Then came the sigh of relief. :)

Okay, so through the Swing steps we go to establish a new SBS instance on the new hardware we have in the shop.

We ran into a few initial hiccups in the Swing process, but they were related to some of the methodology steps themselves.

Once we had the new SBS server finished, we delivered it to our client's site very early this last Tuesday morning. The intention was to bring it online while everyone was not in the office.

We shut down DHCP on the backup server, reconnected the internal network cable to the box's second NIC, teamed them back up, and reset the IP and DNS settings on the team.

The new SBS box and the backup DC were not very happy to see each other at first. Replication failed either way.

Since they were not wanting to replicate, we needed to work with the highest priority which was to get the client machines moved over to the new SBS box. We created a startup script to do the following:
  • ipconfig /release (remove the IP settings given by the backup DC)
  • ipconfig /renew (reestablish IP settings to the new SBS box)
  • net use g: /delete (SBS Company folder)
  • net use h: /delete (Backup Company folder)
  • net use g: \\mysbsserver\company (data now online)
  • net use h: \\backupdc\companybu (now read only)
  • gpupdate /force (forces the client machines to pull GP from the SBS box)
The last step was critical for bringing things back together. We made sure all of the client machines that were online at that point were rebooted via a shutdown -r batch file on the server. We logged on as our test domain user account to verify share, Outlook, and Internet access. The ISA Firewall Client was connected and everything seemed to be working as it should.
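For reference, the reconnection steps above might look something like this as a single startup batch file. This is a sketch only: mysbsserver, backupdc, and the share names are the placeholders used throughout this post, not real names.

```batch
@echo off
rem Sketch of the client reconnection script described above.
rem Server and share names are this post's placeholders - adjust to suit.
ipconfig /release
ipconfig /renew
net use g: /delete
net use h: /delete
net use g: \\mysbsserver\company
net use h: \\backupdc\companybu
gpupdate /force
```

Run from a startup script or pushed as a batch file, it points each client at the new box on its next boot.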

We then made sure that the users that brought their laptops in that morning understood that they were to answer yes to the reboot question that came via the GPUpdate.

Their Office 2003, which is distributed via Group Policy Software Installation, ran again, causing a slowdown on the initial boot.

But, once they were connected, their data and shares were as they expected them, and their Outlook was connected to Exchange and happy. Email was moving as it should.

However, we ran into a problem getting the new SBS DC and the backup DC to replicate.

The SBS box happily picked up the proper settings both in DNS and Active Directory to replicate with the backup DC. But, the backup DC would have nothing to do with the new SBS box.

In a way, this is expected behaviour, given the new SBS box has a totally different underlying identity from the original SBS box. We followed the entries in the following TechNet article: Troubleshooting GUID Discrepancies.

This is what we see from the NTDS properties of an SBS box and the corresponding AD DNS entry:

NTDS DNS Alias and its corresponding entry in _msdcs.mysbsdomain.lan

When we had a look in _msdcs.mysbsdomain.lan on the new SBS box, there were indeed multiple entries for the old SBS box, the new SBS box, and the existing backup DC.

DNS on the backup DC also had multiple entries, so we cleaned out the wrong ones and ran replication again. Still no go. The new SBS server happily tried to connect to the backup DC, but the backup DC would not connect to the new SBS box. We were getting Access Denied messages in the logs whenever we forced replication or it ran on its own.

Another clue was the fact that we could not access any shares or network resources on the new SBS box from the backup DC, though we could the other way around.

Looking into the ServicePrincipalName cleanup suggested in the above article, we made the necessary changes.

ServicePrincipalName Cleanup: Remove the old entries, paste the new GUID in place, and Add

There are two entries in the SPN that needed to be changed.

After the cleanup and a reboot of the backup DC, they still would not replicate. The GUID alias in the NTDS Settings properties under dssite.msc would not change to the new SBS server's GUID; it still showed the old SBS server's GUID.
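A quick way to eyeball what is actually registered, assuming the Windows Server 2003 Support Tools are installed (BACKUPDC is a stand-in name, not the real machine):

```batch
rem List the SPNs registered for the backup DC's computer account so any
rem stale GUID-based replication entries stand out (Support Tools required).
setspn -L BACKUPDC
```

Comparing that output against the GUID shown in the NTDS Settings properties makes the mismatch obvious.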

Given the amount of time we were fighting to get them to replicate by this point, we decided to DCPromo the backup DC to demote it and DCPromo it back in again.

That failed!

From the command line we had to DCPromo /forceremoval on the backup DC to get it to demote. That worked.

But that still left us with the new SBS server and all of the backup DC references in Active Directory. However, we knew that would be the case as the Swing Migration steps prepared us for what was next: Utilizing NTDSUtil to perform a metadata cleanup of the backup DC settings, and a cleanup of any reference to the backup DC in ADSIEdit.msc. We also needed to clean up DNS of any reference to the old backup DC's GUID.

We doubled back over our work to make sure there were absolutely no AD settings left for the backup DC. Once satisfied, we DCPromoed the backup DC back into the AD forest.
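For those following along, the NTDSUtil metadata cleanup session generally runs along these lines on Windows Server 2003. MYSBSSERVER and the domain/site/server numbers below are placeholders; list and double-check each selection against your own environment before the final remove.

```
ntdsutil
  metadata cleanup
    connections
      connect to server MYSBSSERVER
      quit
    select operation target
      list domains
      select domain 0
      list sites
      select site 0
      list servers in site
      select server 1
      quit
    remove selected server
    quit
  quit
```

The ADSIEdit.msc and DNS passes then catch anything the metadata cleanup leaves behind.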

After a reboot, they were both happily replicating!

We have now learned SBS recovery with a secondary DC, valuable lesson number 2: when we go to reintegrate a newly installed SBS server that was part of a disaster recovery process, we may need to demote any and all secondary and tertiary DCs.

In this case, the secondary DC was in the same office; in a branch office scenario with a couple of other offices out there, this could present a real rat's nest to get AD replication back up to speed.

Now, what we experienced could very well be an anomaly: the edits done to the SPNs on the other DCs may in fact take, the issue stops there, and they go on to replicate with no further issue.

For your reference: This was one of the more challenging disaster recoveries we have had to face yet.

We have been very fortunate that none of our clients have totally lost a location, but we came close once when an entire building's roof rained a deluge of rain water into one client's server closet at 03:30 in the morning. That was a scary call. It seemed that building maintenance had not gotten around to cleaning out the roof drains, and the drain just above the closet was the one to give way. :(

Since we have both the old SBS and the backup DC up and running and replicating happily here in the shop, we will be running a couple of test Swing Migrations to see if that second DC causes problems in a non-disaster-recovery SBS domain migration too.

The step after that will be to see how the new Server 2008 Active Directory schema extensions for a Read Only DC at a branch site impact our SBS 2003 to 2003 and 2003 to 2008 migrations.

Thanks for reading! :)

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists

*All Mac on SBS posts are posted on our in-house iMac via the Safari Web browser.

Thursday, 24 April 2008

Post SBS Migration - Outlook RPC/HTTPS - change that certificate

After completing a Swing Migration, one thing to remember after the new SBS server is in place: Users that use Outlook to connect to Exchange on their laptops via RPC/HTTPS will no longer be able to connect.

This is the case for those SBS installations that are using the self-signed certificate. Installations that have imported a trusted third-party certificate should not encounter this problem.

If the user visits the Remote Web Workplace in IE, the SSL lock will show, and nothing will appear out of the ordinary.

IE does not notice that the previous SBS certificate signer is no longer in existence, but Outlook does.

So, we need to remove that certificate:
  1. Open IE
  2. Tools
  3. Internet Options
  4. Content tab
  5. Click the certificates button
  6. Trusted Root Certification Authorities tab
  7. Click on the myrww.mysbsdomain.com certificate
  8. Click the Remove button
  9. Close
  10. Apply and OK
  11. Restart IE and import the new server's certificate.
The user will now be able to connect their Outlook while out of the office.
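As an alternative to the IE dialog walk-through, the same removal can likely be done from the command line with certutil. The certificate name below is this post's placeholder; run the command in the affected user's own context:

```batch
rem Remove the stale self-signed certificate from the current user's
rem Trusted Root store; then re-import the new one from the RWW site.
certutil -user -delstore Root "myrww.mysbsdomain.com"
```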

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists

*All Mac on SBS posts are posted on our in-house iMac via the Safari Web browser.

Wednesday, 23 April 2008

SBS Disaster Recovery - Finished

Wow ...

After one full week of working on the failed SBS from last week, making many attempts to Swing the old SBS onto new hardware, and recovering data, we are finally finished.

The new SBS box was installed very early yesterday morning. We brought it online, shifted the client desktops off of the backup server, and verified their data.

Outlook, their shares, printers, and passwords were as they were with the exception of a couple of users that needed to change their passwords while on the backup DC.

A number of years ago, Jeff Middleton, the author of the SBS Swing Migration, was up here in Edmonton sharing some of his Katrina experiences. It was an amazing and very inspiring time for me personally.

One of the questions he was asked was about introducing a second DC into the SBS network to provide some redundancy for Active Directory. His response as I recall it was something along the lines of, "That introduces a whole new can of worms".

He wasn't kidding. We ran into all sorts of hiccups, hurdles, and roadblocks recovering and subsequently installing the new SBS because of the backup DC.

Since there are a number of things sitting here needing attention, some chronicles on the SBS disaster recovery (DR) will be forthcoming.

This DR was probably one of the most challenging we have faced to date.

But, with the time, tools, and talent that we have, we triumphed! And, most importantly: our client is very pleased with the outcome.

We were able to salvage Active Directory - meaning no desktop profile impact at all - along with all of their data, and bring them back over to the new SBS server with a reboot and a couple of extra startup scripts.

More to come ... and a nod to Jeff Middleton: Thanks! Those Swing skills came in very handy with this recovery. :)

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists

*All Mac on SBS posts are posted on our in-house iMac via the Safari Web browser.

Wednesday, 16 April 2008

Images No Good ... Catastrophic SBS Failure ... Now What?!?

In what turned out to be a catastrophic SBS failure yesterday, we were tasked with installing some fresh hard drives and restoring the OS and data partitions via a ShadowProtect (SP) backup image set.

Well, things did not go even close to plan. We had hoped to be in and out in under 4 hours under optimal conditions.

No matter how many times we tried, which image version we used, or which combination of array sizes we set in the SRCS16's BIOS, we could not get a successful recovery. After one successful restore, we went into the Recovery Console to run CHKDSK against the troubled partitions. After that, the OS choked on a missing sys file. :(

Of the times that we did manage to get the SBS booted up, we found a plethora of Event ID 55s in the logs.

On many of the OS boot attempts we were greeted with:

Checking file system on H:

So, it began to look like the corruption ran pretty far back into our backup image sets.

We all know that hindsight is 20/20! ;)

So, in hindsight, the most expedient method of recovering this server would have been to SwingIt off the original hardware and SwingIt back onto a fresh install of SBS on that same hardware. But given that we did not know we would end up being on-site for 12 hours making recovery attempts, and eventually rolling out the backup DC setup to provide authentication and shares, it was not a viable option until well into the wee hours of the morning.

We now have the go-ahead to Swing onto a new server instead of back onto the existing one. Since they are up and running, we took the old SBS box, which has a somewhat stable recovery on it ... though not stable enough for production ... to use in our Swing Migration.

For now, they are running via the backup DC and data mirror, with the backup DC also providing Internet access via RRAS and a second NIC. It is not ideal, as there are a number of network dependent applications that required some fiddling to get working, but at least they are not twiddling their thumbs and losing money hand over fist.

This is one scenario where having our client's email setup as follows pays off:
  • MX 100 ispmailserver1.myisp.com
  • MX 50 ispmailserver2.myisp.com
  • MX 25 mysbsmail.mysbsdomain.com
The ISP email is pulled down to the SBS box via the POP3 Connector, which is set to 1-hour intervals.

At least for now they still have access to the outside world via Webmail and the server will get all of their incoming mail when it comes back online. Any critical emails can be BCCd back to themselves for later download.
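Why this layout works: sending mail servers try the MX record with the lowest preference value first, so the SBS box at 25 is the primary target, and the ISP's servers at 50 and 100 only collect mail while it is down. In zone-file form (names are this post's placeholders), it would look roughly like:

```
mysbsdomain.com.   IN  MX  25   mysbsmail.mysbsdomain.com.
mysbsdomain.com.   IN  MX  50   ispmailserver2.myisp.com.
mysbsdomain.com.   IN  MX  100  ispmailserver1.myisp.com.
```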

While we have tried to keep the impact on our client's business down to a minimum, there have been a number of hiccups before things started to settle down. So, to provide our client some restitution for the lost time, we will provide some of our billable time to them at no cost.

Since they are our longest running client at close to 10 years now, it only makes sense to have a little give-and-take in the business relationship.

There are a couple of important lessons here:
  • Test those SP backup images by restoring them
  • Test their durability by restoring them to different hardware
  • Having a second DC can provide an Active Directory source for a Swing Migration in the event of a total SBS failure.
In our case, we were guilty of not having enough time to run restore tests against their more recent images. This too is another motivation to give our client a break on the otherwise very expensive I.T. week they are having.

Given the volume of work with this situation, and others, there may be a smattering of blog posts for a while ...

Thanks for reading and supporting us! :)

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists

*All Mac on SBS posts are posted on our in-house iMac via the Safari Web browser.

Tuesday, 15 April 2008

SBS, ShadowProtect, and an Event ID 55 NTFS Error ...

We went to a client site yesterday evening to finish a warranty swap of some flaky memory sticks.

Up to the time that the server was shut down last night, there were no indications of any problems with the two RAID arrays, a 500GB RAID 1 and 1TB RAID 5, on the box. They were partitioned as follows:
  • RAID 1
    • 50GB OS partition
    • 430GB ServerData
    • 10GB SwapISACache
  • RAID 5
    • 1TB NetworkData
The unit is a 3-year-old SR2400 with dual 3.0GHz Xeons and 4GB of RAM. We changed out the hard drives to give the server more storage about two years ago, and did a warranty swap of the Intel SE7520JR2 motherboard (previous blog post) twice about a month ago. The first swap was a struggle to get the replacement SE7520JR2 board to recognize the 4 sticks of Kingston RAM. The second swap was because the first warranty replacement board lost 2GB of RAM on a patch reboot. :(

Once we had the new Kingston RAM sticks installed, we fired up the box and went straight into the BIOS to confirm that all 4 1GB sticks were there, which they were.

Subsequently, when booting into the OS, the initial Windows Server 2003 scroller kept going and going. While not necessarily a bad thing, after three years, we know how long this particular box takes to boot.

The heart started to sink at that point.

Then the kicker:


One of your disks needs to be checked for consistency

Um, this NetworkData partition is a 1TB RAID 5 array! The previous ServerData partition took only a few minutes. For this partition, it looks like we are going to be here for a while ...

I: NetworkData is 95 percent completed.

Well, okay ... maybe not ... but then ...

Inserting an index entry into index $0 of file 25.

We have all had that one-minute-seems-like-an-eternity experience when something stressful like this is going on. The above message scrolled on the screen for what seemed like one, even though it may have lasted only 3 or 4 minutes.

The only thing that kept us from hitting that reset button was the fact that the 4 drives in the RAID 5 array where this was occurring were pinned, meaning their lights were on constantly due to disk activity.

Then a little light in what was seemingly turning into a catastrophic failure:

Correcting error in index $I30 for file 9377.

This went on for over 20 minutes.

Then we faced something one hopes never to face late at night when expecting to pop in and pop out for a quick task ... the proverbial nail in the coffin, a catastrophic failure:

An unspecified error occurred.

It was at this point that it became pretty clear we were in for the duration.

But then ...

Windows is starting up.

Again ... must resist pushing buttons (best Captain Kirk voice) ... keeping those fingers tied up and away from the power and reset buttons on the front of the server. Just in case, we left it alone. And, thankfully, the above message is what we were greeted with.

Soon we saw:

The Active Directory is rebuilding indices. Please wait

This stage took a couple of minutes. The Initializing Network Interfaces stage took another 10-15 minutes.

We were eventually greeted with a Services Failed to Start error and subsequently the logon screen.

*Phew*

It looks as though the OS partition made it through relatively unscathed. The service chokes were for SQL, WSUS, WSS, and a LoB application that had their databases stored on one of the soon-to-be-discovered absent partitions. Exchange had also choked.

One lesson in all of this: A server may stay up and running almost indefinitely when experiencing a sector breakdown on a disk member or members of the array. To some degree the RAID controller will compensate. However, in our experience, as soon as the server is downed, or rebooted, those sector gremlins can jump out and make their presence known as was the case here.

Another lesson from this: we keep the Exchange databases on the OS partition for this very reason. If the Exchange databases were on a different partition and/or array and it failed, we would lose Exchange and email communication. If we have an SBS OS that boots with a relatively happy Exchange ... the databases intact too ... then at least our client will not lose their ability to communicate with the outside world while we work on the data recovery side of things.

Back to this SBS box: once into the OS, we were eventually able to get into the Event Logs, and sure enough, of the four partitions, the three besides the OS partition were toast.

Amazing ... simply amazing.

From the server Event Log:
Event Type: Error
Event Source: Ntfs
Event Category: Disk
Event ID: 55
Date: 4/15/2008
Time: 7:01:32 AM
User: N/A
Computer: MY-SBS01
Description:
The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on the volume .

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
Data:
0000: 0d 00 04 00 02 00 52 00 ......R.
0008: 02 00 00 00 37 00 04 c0 ....7..À
0010: 00 00 00 00 02 01 00 c0 .......À
0018: 00 00 00 00 00 00 00 00 ........
0020: 00 00 00 00 00 00 00 00 ........
0028: 81 01 22 00 .".
The above Event Log messages were numerous.

Just in case, we initiated the ChkDsk utility from within the GUI. It too crashed on the two critically needed partitions.

We made sure that the relevant services with folders on the ServerData partition were shut down, and we fired up ShadowProtect to bring that partition back. We were fortunate that this particular partition recovery cooperated, and we were able to fire up the relevant services and the LoB app that had its database server logs on that partition too.

The 1TB RAID 5 array did not cooperate at all, even after a 10-hour-plus ShadowProtect restore attempt, initiated from within the SBS OS, that we began very early this morning. It failed and caused the SBS server to spontaneously reboot around 10:30AM. This also means no LoB application for now. We are fortunate that it was not critical to the daily functioning of our client's business.

So, where does that leave us?

In this client's case, we have a backup DC that also has a live data mirror on it! So, with SBS at least functional, we were able to email users a simple batch file to disconnect the original Company share and connect them to the backup server's Company share.

On the SBS box, we made sure to restart and verify the services running on the now-restored ServerData partition, and we have left the RAID 5 array partition alone for now.

The extra expense of having that backup DC/data mirror box sitting there has just paid for itself in spades. For this client, we are talking a hit against the firm on the order of $1K/hour of downtime. The share switch took a relatively small amount of time. If SBS went down totally, the backup server is set up to bring DHCP, DNS, and a secondary ISP gateway online in very short order to at least keep the firm functional.

If we had ended up with a nonfunctional SBS OS, we would also have had the option of bringing down one of our Quad Core Xeon 3000 series servers that sits on our bench just for this task: a ShadowProtect Hardware Independent Restore of a client's entire SBS setup. We would bring them back online fully functional on newer, albeit temporary, hardware until such time as a new permanent server could be installed.

Having the ShadowProtect backup setup in place gives us a good number of very flexible options to make sure that there is very little or no impact on our client's daily business operations in the event of a server failure.

Given the age of this particular 2U system, we are now talking to the partners about a replacement SBS 1U to be Swung in by the end of this week.

ShadowProtect

There is definitely one thing that has been made especially clear in the midst of all of this: The last time we experienced a catastrophic failure of this magnitude, we had BackupExec and two 72GB x6 HP Tape Libraries to fall back on. The recovery took a whole long weekend because of the sheer volume of data and struggles with BUE.

This time around, while the stress levels were and still are high, they were nowhere near the levels of the last SBS catastrophic failure.

Even if the partners decide on a set of replacement hard drives as a temporary measure until their high season dies down near the end of the summer, we will have them back online with solid storage tonight. Can't say the same for tape and BUE ... especially with the volumes of data we are talking about.

ShadowProtect gives us options. With the hard drive replacement, StorageCraft's ShadowProtect, and the disaster recovery training one receives doing Swing Migrations (one can connect the dots), we will be able to do the following:
  • Restore a clean version of SBS from the previous evening's ShadowProtect image
  • Recover the Exchange databases from the ShadowProtect incremental we will do tonight on the SBS partition
  • Forklift those databases into the recovered SBS Exchange
  • Restore the ServerData and NetworkData partition data from last night's clean ShadowProtect image
  • Copy the backup server's live data mirror changes that were made by the client's users back to the SBS box.
As it turns out, we just received a call back from our partner contact at the firm. They prefer to go with the hard drive replacement until their business slows down this summer to keep things as close to status quo as possible for now.

We can do that! We have the technology and the skills! ;)

A big thanks to both StorageCraft for a great product and Jeff Middleton of SBSMigration for the awesome skill set we have gained via the SwingIt! Kit. Without this product and those skills, we would be in a very bad situation getting worse by the minute ... and very likely out of a really good client ... or ...

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists

*All Mac on SBS posts are posted on our in-house iMac via the Safari Web browser.

Monday, 28 January 2008

SBS and Intel SE7520JR2 Warranty Replacement Experience

One of the worst warranty replacement experiences for us ever was when we needed to replace a defective Intel SE7505VB2 on a production SBS 2000 server.

Needless to say, things did not go very well. We made sure to have the replacement board's BIOS level and settings identical to the outgoing board.

Once we had things back together and the OS was booting, we hit a BSOD ... and hit a BSOD ... and hit a BSOD.

Nothing we did brought the server back. We even put the defective board back into the system (on board RAID controller was flaky) and tried to get the server back up.

We ended up spending a huge chunk of time in recovery mode to bring back that SBS 2K server.

This time around, there was a little of that "once bitten" fear for this particular project.

We are working on a 2U, SR2400 series chassis with the SE7520JR2 board in it. The board's USB ports are done for. Nothing USB would be recognized in the OS.

Once we swapped the board out, we booted the system up and were greeted with a BIOS beep code of 3. This error indicates a problem with memory.

We reseated the 4 x 1GB Kingston sticks of RAM and tried again. Still, we received the 3 beep code.

We ended up pulling 2GB out of the server to see if that worked and it did.

For whatever reason, as we discovered once we were into it, Intel shipped this board to us with the factory original BIOS installed.

So, we booted to a USB flash drive with the current BIOS on it and flashed away.

After booting back into the new BIOS, we changed the settings as appropriate and rebooted again. We shut the server down as soon as we saw the POST screen.

In went the extra sticks of RAM. After firing up the server and the diagnostic LEDs started dancing we knew we were in.

This time around, we now have a happy SBS 2K3 Premium SP1 server back online. *phew*

And, our USB ports are now working.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists

*All Mac on SBS posts are posted on our in-house iMac via the Safari Web browser.

Wednesday, 15 August 2007

SBS Premium - SBS ISA Rule for Remote Management Needed

For those of us who have SBS Premium internally and manage client SBS servers, the following is an important manually created rule for allowing the 4125 RDP proxy port out:


If one does not create this rule, no RDP connectivity is allowed out of the internal network to any external SBS server's RWW-based RDP session.

For clients, this is no big deal; but for those of us who manage SBS networks, it means not being able to connect to remote SBS servers and XP Pro/Vista Business desktops via RWW-proxied RDP.
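For reference, the rule amounts to a custom protocol definition plus an outbound access rule, along these lines (the naming below is our own, not an ISA default):

```
Protocol definition:
  Name:       RWW RDP Proxy
  Port:       TCP 4125, Outbound
Access rule:
  Action:     Allow
  Protocols:  RWW RDP Proxy
  From:       Internal
  To:         External
  Users:      as appropriate for your techs
```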

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists

*All Mac on SBS posts are posted on our in-house iMac via the Safari Web browser.

Thursday, 22 February 2007

SBS is Happy Today


This particular SBS installation is SBS 2003 SP1 Premium Edition running on an Intel 2U SR2400 with dual Xeon 3.0 GHz. processors. It has two hot swap RAID arrays, one RAID 1 pair for the OS, backup files, and VSS images; and a RAID 5 array of 4 drives for their data. During peak season for them, they can grow at close to 1 GB a month, so we have a rather large RAID 5 array, with archiving of older data to keep things under control.

This SBS box is up to date, and running a large USB 2.0 external hard drive for backups. We come in every couple of weeks to swap out that drive and provide an archiving service for them if need be.

I find that this box is the one that we see the most happy reports from.

It runs very stable, and other than the occasional reboot for updates, it has never been shut down.

It is serving 14 desktops as well as a number of outside employees via RWW and some dedicated RDP boxes.

Configuration:
  • Intel SR2400 2U chassis with SATA backplane option

  • Redundant power and fan option - LX chassis model

  • 2 x 3.0 GHz. Xeon 800 FSB CPUs

  • 4 GB Kingston ECC DDR

  • 2 x 500 GB Seagate SATA in RAID 1

  • 4 x 400 GB Seagate SATA in RAID 5 (1.2 TB total use)

There is still room to grow for this particular client.

Advice: if the system is dual-processor capable ... always try to plug in the second proc at build time. Have fun matching the existing one two or three years down the road otherwise. It will be twice the cost, and at a higher CPU stepping that your existing board may not have a BIOS flash to support!

Been there, done that, got the T-shirt and burnt it! :D

UPDATE: It is one of those days, 1 GB a month is not a whole heck of a lot! :D

Looking over the logs, they can grow at well over that in one month, coming close to 1 GB every couple of days!

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists

Wednesday, 21 February 2007

SBS 2K3 RTM SP1 R2 Premium - Post install must do - Tame SQL Memory Usage

As soon as the ToDo list is done and one has installed the Premium Technologies, one must reconfigure the maximum amount of memory that some of the SQL Server and MSDE instances use.

Otherwise, we end up with this:

A gigabyte in use for one SQL instance, as well as Allocated Memory messages in your Outlook folder for that client!

BTW, I generally install a minimum of 3 GB in all of our SBS servers. Once everything is configured, I change the Allocated Memory Alert threshold to just over the installed physical memory level. Thus, I won't have to do it later after receiving the continuous e-mail messages. :D

To tame the SQL instance's memory usage, do the following:
  1. Right click on your taskbar and bring up Task Manager.
  2. Click on the Processes tab.
  3. Click on the Mem Usage column to sort by it, then click it again to bring the highest values to the top.
  4. If PID is not the second column in your view, click View and select Select Columns.
  5. Put a check mark beside PID.
  6. Click OK.
  7. Note the PID of the offending SQL instance.
  8. Now, click Start --> Run --> cmd --> Enter.
  9. Run: tasklist /svc
  10. Scroll back up until you see the PID in the list.
  11. To the right of the PID will be the appropriate instance to go after.
The following screen shot has the rest:

In this case, the offending instance belonged to the Firewall MSDE.

So, we proceed at the command line as follows:

osql -E -S mysbsserver\msfw
sp_configure 'show advanced options',1
reconfigure with override
go

You will get a message that the option has been changed from 0 to 1.

Then, on to:

sp_configure 'max server memory',128
reconfigure with override
go

You will then get a message indicating the maximum allowable memory for that instance has been changed from 2 GB to 128 MB.

It is pretty kewl to watch the memory drop from over a gigabyte to the 128 MB mark almost instantaneously!

Exit out of the osql command shell by typing:

exit [enter]

Done!
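Once comfortable with the interactive osql session above, the same change can be scripted. Here is a hedged sketch that caps the three instances noted in this post in one go; MYSBSSERVER is a placeholder, and the 128 MB value is the one used above, so adjust both to taste:

```batch
@echo off
rem Cap 'max server memory' at 128 MB for each named SQL/MSDE instance.
rem -Q runs the statements and exits; -E uses Windows authentication.
for %%I in (MSFW SBSMonitoring WSUS) do (
  osql -E -S MYSBSSERVER\%%I -Q "EXEC sp_configure 'show advanced options',1; RECONFIGURE WITH OVERRIDE; EXEC sp_configure 'max server memory',128; RECONFIGURE WITH OVERRIDE"
)
```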

For all of our new installs, I place memory restrictions on the following instances by default:

  • MSFW
  • SBSMonitoring
  • WSUS

If the server has less than 2 GB, I tend to be a lot more restrictive than on a 2 GB to 3 GB box, and a lot less restrictive for installations with more than 3 GB of RAM.

Thanks to Susan Bradley's posts here and here, where I originally discovered how to deal with this issue.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists