Prescriptive guidance for Active Directory has not been substantially updated since the January 21, 2005 TechNet Active Directory Best Practices article. Some of the legacy information no longer applies, but we see many of the basics being ignored, putting organizations at high risk:
- Two Domain Controllers. Always have a second domain controller for Active Directory, DNS, and DHCP failover. If your only domain controller fails, no one can access the network or the Internet.
- Authoritative Backup. Back up the system state, not simply the virtual image, of your domain controllers. Without an authoritative backup, you cannot restore Active Directory; you must create a new domain (even if named the same) and rejoin all workstations, with potentially days to weeks of user profile and application problems. If the main domain controller fails and you have no authoritative backup but do have a second domain controller, you face the emergency tasks of recovering from a USN rollback error:
- Seize roles on another domain controller
- Export any DHCP scopes
- Manually remove the problem domain controller from Active Directory and shut it down
- Build another domain controller
- Import DHCP scopes
- Test proper Active Directory replication and network logon
- Standard networking. These rules are law with few exceptions:
- Only one network connection enabled, and it should be listed first in priority
- Single static IP Address assigned with valid subnet mask and default gateway
- DNS 1 is the IP Address of the machine and DNS 2 is the IP Address of the second domain controller
- IPv6 enabled, with settings set to obtain automatically
- There should be no forwarding addresses in DNS to other servers internal or external
- Firewall On. Domain controllers are the most sensitive machines on the network, containing the master user security database and network configuration, so their firewall should be on. In fact, the firewall should be on for all devices to contain rampant hacking and virus outbreaks. Backup and anti-virus programs generally will not install without the firewall enabled.
- Anti-virus Installed. If a malicious intruder gains access to a domain controller, anti-virus helps prevent installation of a rogue Trojan or rootkit. With proper exclusions for files and folders in use, anti-virus does not slow response or interfere with network access on a domain controller.
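The standard networking rules above can be sketched as a simple validation check. This is an illustrative Python sketch against a hypothetical configuration dictionary; the field names are assumptions for the example, not an actual Windows API.

```python
# Illustrative only: validate a hypothetical DC network config dict
# against the standard networking rules. Field names are assumptions.

def validate_dc_network(config):
    """Return a list of rule violations for a DC network config."""
    problems = []
    nics = config.get("enabled_nics", [])
    if len(nics) != 1:
        problems.append("exactly one network connection should be enabled")
    nic = nics[0] if nics else {}
    if nic.get("ip_assignment") != "static":
        problems.append("IP address should be statically assigned")
    dns = nic.get("dns_servers", [])
    if not dns or dns[0] != nic.get("ip_address"):
        problems.append("DNS 1 should be this machine's own IP address")
    if len(dns) < 2 or dns[1] != config.get("partner_dc_ip"):
        problems.append("DNS 2 should be the second domain controller")
    if config.get("dns_forwarders"):
        problems.append("no DNS forwarding addresses should be configured")
    return problems

dc = {
    "enabled_nics": [{
        "ip_assignment": "static",
        "ip_address": "192.168.1.10",
        "dns_servers": ["192.168.1.10", "192.168.1.11"],
    }],
    "partner_dc_ip": "192.168.1.11",
    "dns_forwarders": [],
}
print(validate_dc_network(dc))  # → [] (no violations)
```

A compliant configuration returns an empty list; each broken rule adds one entry.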
Active Directory and its associated network security roles are the only things that should run on a domain controller. Avoid running applications, hosting websites, or sharing files on a domain controller. Microsoft also does not support installing Exchange (or SQL Server) on a domain controller.
You are likely one of those users too: you want terabytes of data accessible anywhere on any device. The same is true of your most critical applications. If you inadvertently delete something, you want the information restored in just a few minutes.
That magic is called a snapshot. For a Storage Area Network (SAN) that holds your data and takes regularly scheduled snapshots, a previous copy of your data is available and can be rolled back to that point in time. However, there are some important aspects to understand about snapshots, and these are the key takeaways:
- Without the current copy of your data, snapshots are worthless, as they store only the differences from the original information.
- Snapshots for virtual machines allow rollback for any changes, but cannot be used by themselves as a functional virtual image.
- Snapshots are not a backup of server system state, especially for critical roles like domain controllers.
- Snapshots do not clear the transaction logs of Exchange databases.
- For all of the above reasons, online backup should back up system states and databases independently of server incremental snapshots.
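The first takeaway can be made concrete with a toy copy-on-write model. This is purely illustrative, not a real SAN implementation: the snapshot records a block's original contents only when that block is later overwritten, so it is only a set of differences and is useless without the current copy of the volume.

```python
# Toy copy-on-write snapshot model (illustrative only).

def write_block(volume, snapshot_delta, block, data):
    """Overwrite a block, preserving its prior contents in the snapshot."""
    if block not in snapshot_delta:          # first change since snapshot
        snapshot_delta[block] = volume.get(block)
    volume[block] = data

def roll_back(volume, snapshot_delta):
    """Reconstruct the point-in-time copy from current data + delta."""
    restored = dict(volume)                  # requires the CURRENT volume
    restored.update(snapshot_delta)
    return restored

volume = {0: "boot", 1: "users", 2: "mail"}
delta = {}                                   # snapshot taken here
write_block(volume, delta, 1, "users-v2")
write_block(volume, delta, 2, "mail-v2")

print(roll_back(volume, delta))  # → {0: 'boot', 1: 'users', 2: 'mail'}
# The delta alone holds only blocks 1 and 2; block 0 cannot be
# recovered from the snapshot by itself.
```

Rolling back works only because both the current volume and the delta are available, which is exactly why snapshots are not backups.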
This post was inspired by a customer who recently and painfully experienced why snapshots are not backups. The customer was updating an application on their main domain controller (not a good practice either and a topic for another time) which went awry. After restart, the NETLOGON service would not remain started and Active Directory would not replicate with the secondary domain controller.
The customer lamented that they had performed the application update on several servers with no issues, and had not taken a snapshot of the virtual machine. It wouldn’t have mattered. The problem is known as USN rollback, in which the server is no longer recognized as the master security database for the network. With a system state backup, the issue could have been resolved in a few minutes with an authoritative restore.
Instead an emergency project was required to:
- Create a new virtual server
- Export DHCP scopes
- Manually remove the errant server from Active Directory
- Seize the FSMO roles on the secondary domain controller
- Shut down and destroy the errant server
- Promote the new server as a domain controller
- Change the IP address of the new server to that of the previously errant server
- Broadcast to users to shut down workstations for cutover
- Import DHCP scopes
- Verify Active Directory replication
Fortunately, the secondary domain controller processed user logons during this scenario, even though no changes could be made to any Active Directory accounts. If the customer had not had a secondary domain controller, they would have had the ugly prospect of building a new domain controller and rejoining every user and device to the domain with weeks of profile and application hell.
When asked about other backup scenarios like Exchange, the customer said they used Symantec Backup Exec because it was easier to restore than their current online backup. More likely, someone on their staff discovered the Exchange logs weren’t being cleared and they had no way of restoring an individual mailbox from a server snapshot.
If you’re depending on snapshots only, then heed the warnings above and know you’re operating under high risk of downtime and data loss. The simple fact is snapshots are not backups.
Due to the volume of questions, these bullet points are provided to alleviate the confusion about what a “Microsoft account” is:
- “Microsoft account” is the new name for what used to be called a “Windows Live ID.”
- A Microsoft account is NOT a business “Organizational account” for Azure, CRM Online, Intune, or Office 365.
- Your Microsoft account is the combination of an email address and a password that you use to sign in to services like Outlook.com, OneDrive, Windows Phone, or Xbox LIVE.
- If you use an email address and password to sign in to these or other services, you already have a Microsoft account—but you can also sign up for a new one at any time.
- When you set up a Microsoft account, you may use your existing e-mail address or set up a new Hotmail.com or Outlook.com address.
- It’s common to have a Microsoft account for work using your business e-mail address, as well as a personal Microsoft account for consumer purposes using your personal e-mail address or a new Outlook.com address.
- You can also use a Microsoft account to sign in to any PC running Windows 8.
- In addition, Microsoft accounts are used for services like Microsoft licensing and secure portals or encrypted e-mail.
- Over time, all Microsoft consumer services will be switching from the old name to the new one.
- You might continue to see mentions of “Windows Live ID” instead of “Microsoft account” for a while—for example, on xbox.com or windowsphone.com—but the names mean the same thing, and the services will be updated soon.
- It is strongly recommended that you follow the steps to help protect your account with two-step verification and account recovery.
According to industry experts, Apple is 10 years behind in security. Apple.com currently asks “What will your verse be?” on its home page in the wake of a huge security breach. The rise in popularity of Apple products over the last 5-7 years has centered primarily on music, so it should not be a surprise that Apple iCloud has serious security vulnerabilities.
Unlike Microsoft or Google, Apple iWork is still in beta and has achieved none of the high compliance standards required for commercial use or proven years of safe productivity across millions of users in the cloud. To help protect business users who may be using Apple devices, the following steps outline how to disable Apple iCloud:
- Be sure to have photos saved on your PC or another storage device first.
- Go to either the Photo Stream tab in your photo gallery in iOS or the Photo Stream option in iPhoto on Mac OS to view existing photos in Photo Stream. Here you can manually delete existing individual photos or whole albums.
- Then go to the Settings menu on your iPhone or iPad (‘System Preferences’ on Mac OS)
- Open the ‘iCloud’ category
- Switch off Photo Stream (which automatically uploads photos to iCloud)
- Repeat this on all your Apple devices
To be fair, Apple does offer two-factor authentication like other major providers. However, Apple is still relatively new to online services, and Apple iCloud has been reported as unsafe by local news stations.
There are only two goals when configuring server disks: risk avoidance and best performance. Regardless of usage or application, some best practices are universal.
- Separate system and data. Discrete containers for the operating system/applications and data simplify maintenance, security, and disaster recovery.
- Moderate cost. With this strategy, you spend moderate cost for hardware while avoiding expensive reconfiguration or disaster recovery using other approaches.
- Hardware RAID preferred. Implementing redundant disks with hardware is significantly faster than software RAID with easier maintenance and fewer failures.
- RAID 1 system container. Mirrored system drives allow a server to run with no downtime upon the failure of one drive.
- Entire disk for system. You cannot extend a system partition, so it is recommended to format the full disk for the operating system.
- OS and applications only. The system partition should not contain data. Installing applications on special role servers like domain controllers and firewalls should be avoided.
- Buy the best disks you can afford. SSD is the most expensive. SAS is middle of the road for cost, but much better performance than SATA.
- RAID 5 most common. Striping with parity requires a minimum of 3 drives and continues operating with the loss of one drive. This configuration generally provides more useable disk space than other approaches, except RAID 0.
- RAID 50 best speed/space/redundancy combination. Requiring at least 6 disks, this approach uses RAID 0 striping across 2 or more RAID 5 elements. The collection may lose a drive in each RAID 5 element and still function. For a large number of high-capacity drives, a Storage Area Network with RAID 50 is recommended.
- RAID 10 for high I/O. A stripe of RAID 1 mirrors requires at least 4 drives. RAID 10 can function with the loss of a drive in each RAID 1 element, but has approximately half the useable disk space of RAID 50. This configuration provides the highest throughput of any configuration except RAID 0 for e-mail and database applications.
- RAID 6 not recommended. Briefly popular as the assumed successor to RAID 5, this approach uses striping with double parity and tolerates the loss of 2 drives. Unfortunately, RAID 6 offers marginal comparative useable disk space and comes with a significant performance penalty.
- RAID 0 fastest and highest risk. With simple striping across 2 or more disks, you achieve the highest throughput and the most useable disk space. However, the loss of a single drive results in complete data loss.
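The trade-offs above can be compared with some quick arithmetic. This is a simplified sketch assuming identical drives; the RAID 50 figures assume two RAID 5 groups, and the fault tolerance shown for RAID 10 and RAID 50 is the best case (one failed drive per mirror or per RAID 5 element).

```python
# Simplified model of usable capacity and fault tolerance for the
# RAID levels discussed above (n identical drives).

def raid_usable(level, drives, size_tb):
    """Return (usable TB, max drives that may fail) for a RAID level."""
    if level == "0":
        return drives * size_tb, 0                   # striping only
    if level == "1":
        return size_tb, drives - 1                   # full mirror
    if level == "5":
        return (drives - 1) * size_tb, 1             # single parity
    if level == "6":
        return (drives - 2) * size_tb, 2             # double parity
    if level == "10":
        return (drives // 2) * size_tb, drives // 2  # stripe of mirrors
    if level == "50":
        return (drives - 2) * size_tb, 2             # two RAID 5 groups
    raise ValueError(f"unknown RAID level: {level}")

for level in ("5", "50", "10", "6", "0"):
    tb, tolerance = raid_usable(level, 6, 2)
    print(f"RAID {level:>2} on 6 x 2 TB drives: {tb} TB usable, "
          f"survives {tolerance} failed drive(s)")
```

For 6 x 2 TB drives this prints 10 TB for RAID 5, 8 TB for RAID 50 and RAID 6, 6 TB for RAID 10, and 12 TB for RAID 0, matching the space comparisons in the list above.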
This guidance is provided as part of proven managed services standard operating procedures since 1996. For example, virtualization solutions consist of a RAID 1 system container for Hyper-V and a RAID 5 data container for virtual servers.
RAID – Redundant Array of Inexpensive Disks
Hybrid RAID – Nested RAID levels
One of the more common requests we get is for quotes on Microsoft Office. The usual reasons are:
- Our version of Office is no longer supported or doesn’t work with other software
- There is no standardization, causing confusion and productivity loss for employees
- We’re having problems converting or opening newer Microsoft Office documents
Hands down, Office 365 is the best way to purchase Microsoft Office, either separately or in bundled subscriptions like Office 365 E3. This approach is the most cost-effective and provides an always-current version of Microsoft Office for up to 5 devices per user.
Office 365 Enterprise E3 ($240 per user per year)
Office 365 Pro Plus ($144 per user per year)
Let’s take a typical scenario of an organization with 100 users and a mixture of Office 2003, Office 2007, and Office 2010. You basically have 4 purchase options:
- Retail boxes from a retailer like Best Buy ($399.99)
- Open license from a distributor ($508)
- Open license with Software Assurance upgrade protection ($803)
- Office 365 subscription that includes Microsoft Office (starting at $144 per year)
Microsoft Open License Estimated Retail Price List is published publicly.
Retail boxes are difficult to manage with separate keys for each license, and cost nearly 3 times as much as a year of an Office 365 subscription with no upgrade protection. Similarly, an Open License is approximately 4 times the cost of an Office 365 subscription, also with no upgrade protection.
With the pace of technology only increasing, it’s unlikely that you can wait 4 or 5 years to upgrade any more. Open Licenses with Software Assurance make no sense on a 2 or 3 year renewal, at almost 6 times the cost for the same upgrade protection provided in Office 365.
So, the choice boils down to buying Open licenses you own versus renting a monthly Office 365 Subscription:
$50,800 – Office Professional Pro Plus for 100 Users Open License
$14,400 – Office 365 Pro Plus for 100 Users per year
Office 365 is only a fraction of the cost annually for 5 times the value. Be a hero with your management and show them the cheapest way to buy Microsoft Office.
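The arithmetic behind the comparison, using the prices quoted above (list prices at the time of writing; verify current pricing before relying on these figures):

```python
# Cost comparison for 100 users, using the per-license prices
# quoted earlier in this post.
users = 100
retail_box   = 399.99   # per license, one-time, no upgrade protection
open_license = 508.00   # per license, one-time, no upgrade protection
open_with_sa = 803.00   # per license, with Software Assurance
o365_proplus = 144.00   # per user per year, always current

print(f"Retail boxes:        ${users * retail_box:>10,.2f} one-time")
print(f"Open License:        ${users * open_license:>10,.2f} one-time")
print(f"Open License + SA:   ${users * open_with_sa:>10,.2f}")
print(f"Office 365 Pro Plus: ${users * o365_proplus:>10,.2f} per year")
print(f"One Open License buys {open_license / o365_proplus:.1f} "
      f"years of Office 365 Pro Plus")
```

This reproduces the $50,800 versus $14,400 per year figures above; whether the subscription wins over a multi-year horizon depends on how long you would otherwise keep a purchased license.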
As the Internet churns forward at warp speed, it’s ever more important to understand what technology your website uses. We recently updated our website and published our first Bootstrap-based managed services website.
Even though local testing functioned properly, the public site would not display. All of the files were verified as copied to the web server via FTP, so something else was wrong. When we contacted the host, they mistakenly reported that no valid web files were present. The technology was new enough that the host simply wasn’t familiar with modern conventions.
When we asked what version of Windows was hosting our site, it was revealed that our site had been running on Windows Server 2003, now over 10 years old. As soon as we purchased hosting on a newer Windows platform, the site was published and publicly accessible.
After checking a few dozen customer sites, we found they were also running older versions of Windows or Unix/Linux. So here are our recommendations:
- Check your website platform. Go to Builtwith and look up your website. The service will display a full technology profile including web server version, name server providers, email services, SSL certificate, frameworks, analytics, and more.
- Increase search performance. Newer platforms are inherently faster, and more responsive websites are listed ahead of slow or broken sites in search results.
- Provide better protection. Windows Server 2003 is no longer supported or updated by Microsoft, posing a security risk for users. Similarly, Unix and Linux web servers should be patched and kept on the latest versions to guard against vulnerabilities like Heartbleed or the Bash (Shellshock) bug.
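As a minimal supplement to a Builtwith lookup, you can check what a web server discloses about itself from the Server response header using only the Python standard library. The URL in the example is a placeholder; the version check flags IIS 6.0 and older, the versions that shipped with Windows Server 2003 and earlier.

```python
from urllib.request import urlopen

def server_header(url, timeout=10):
    """Fetch a site's Server response header, or None if not disclosed."""
    with urlopen(url, timeout=timeout) as resp:
        return resp.headers.get("Server")

def outdated_iis(header):
    """True for IIS 6.0 or older (Windows Server 2003 and earlier)."""
    if header and header.startswith("Microsoft-IIS/"):
        major = int(header.split("/")[1].split(".")[0])
        return major <= 6
    return False

# Example (requires network access; URL is a placeholder):
# print(outdated_iis(server_header("https://example.com")))
```

Note that many servers hide or spoof this header, so treat it as a quick hint rather than a definitive audit.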
Builtwith may also be used for regular competitive or industry analysis. Correspondingly, you’ll likely want to migrate or re-publish your website to a newer platform every 3-5 years.
Matrixforce has no affiliation and derives no revenues from Builtwith.