Any Exchange administrator will tell you that there are days they want to toss the application out the window. Managing Exchange Server might be pain-free most of the time, but a few aggravating quirks can overshadow those moments. Let’s take a look at what I consider the five biggest annoyances with Exchange Server 2010.
Role-based access control management
Every new feature or technology I work with has to pass my “20-minute test.” If I can’t figure it out in 20 minutes or less, I assume that others won’t either. Exchange 2010’s role-based access control (RBAC) scheme does not pass my test.
RBAC comprises management role scopes, roles, groups and members, all of which are tied together with role assignments. In other words, RBAC requires some heavy thinking before you ever drop users into groups. Additionally, in Exchange 2010 RTM, you must use the command line to manage RBAC, which complicates things even further.
Improvements came in Exchange 2010 SP1 with the addition of some GUI exposure into the Exchange Control Panel (ECP). But the server still lacks a comprehensive way to manage the RBAC schema and membership.
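To see why RBAC fails the 20-minute test, consider what a single delegation involves in the Exchange Management Shell. The sketch below strings together a scope, a custom role and a role group; all of the names (“Dallas Users,” “HelpDeskRecipients,” “Dallas Help Desk”) are hypothetical, and the commands assume a working Exchange 2010 organization.

```powershell
# Sketch only -- hypothetical names, requires an Exchange 2010 org.
# 1. Scope the delegation to a subset of recipients.
New-ManagementScope -Name "Dallas Users" `
    -RecipientRestrictionFilter { City -eq "Dallas" }

# 2. Derive a custom role from a built-in parent, then trim what it can do.
New-ManagementRole -Name "HelpDeskRecipients" -Parent "Mail Recipients"
Remove-ManagementRoleEntry "HelpDeskRecipients\Remove-Mailbox" -Confirm:$false

# 3. Tie role, scope and members together via a role group (the role
#    assignment is created implicitly).
New-RoleGroup -Name "Dallas Help Desk" -Roles "HelpDeskRecipients" `
    -CustomRecipientWriteScope "Dallas Users"
```

Four cmdlets and three interlocking objects before a single help-desk technician gains any rights -- that is the heavy thinking RBAC demands up front.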
DAG’s three-server minimum
With acronyms like SCC, CCR, LCR and SCR, sometimes a cheat sheet is necessary to understand Exchange 2007 high availability (HA). With Exchange 2010 HA and the addition of database availability groups, admins only need to know three letters: D-A-G.
Exchange 2010 lets shops simultaneously fulfill their failover and disaster recovery needs using one or more DAGs. For example, a company could position one database copy close to the primary server so that it takes over when the primary server goes down. It could place another copy offsite for backups, archiving and disaster recovery.
DAGs rely on Windows Failover Clustering to accomplish this. Failover clusters have never been known for their ease of use or administration. While the nasty DAG clustering bits are mostly obscured by the Exchange Management Console (EMC), I still sense Windows Failover Clustering’s limitations hiding just out of sight.
For example, a two-member DAG requires a third server -- known as the “witness server” -- to maintain quorum. The witness can be placed on a hub transport server or on a separate file server. However, to get the best level of high availability, it must sit in a completely separate site from the other two DAG members. That creates additional cost, as well as a failover clustering quirk for which Exchange often gets the blame.
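The witness dependency shows up right at DAG creation time. In this Exchange Management Shell sketch, the server names (DAG1, HUB1, MBX1, MBX2) and witness path are hypothetical, and the commands assume an Exchange 2010 organization with Windows Failover Clustering prerequisites already in place.

```powershell
# Sketch only -- hypothetical names, requires an Exchange 2010 org.
# Create the DAG, naming the witness server and directory up front.
New-DatabaseAvailabilityGroup -Name DAG1 `
    -WitnessServer HUB1 -WitnessDirectory C:\DAG1

# Add the two mailbox servers; clustering is configured behind the scenes.
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX1
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX2
```

The EMC and these cmdlets hide the clustering work, but the witness server is still a third box you must own, patch and site correctly.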
Server virtualization and DAGs don’t mix
Almost any workload can be virtualized. What many fail to realize is that creating a virtual server is a lot more involved than performing a physical-to-virtual conversion. Enjoying all of a virtual server’s benefits means getting the server to fail over when hosts experience problems or need load balancing.
Additionally, Exchange 2010’s DAGs can’t be combined with hypervisor-level high availability. Microsoft does not "support combining Exchange high availability solutions (database availability groups [DAGs]) with hypervisor-based clustering, high availability or migration solutions. DAGs are supported in hardware virtualization environments provided that the virtualization environment doesn't employ clustered root servers." With the number of Exchange servers being virtualized today, this limitation can be a real deal breaker.
Exchange 2010 routing is forced to mimic AD site architecture
Microsoft thinks very highly of Active Directory sites. For years, AD sites have been the subject of many MCSE and MCITP test questions. Group Policy requires you to lean on this structure, and Exchange 2010’s message routing is likewise tied to Active Directory sites.
This link might be acceptable for IT shops that have followed Microsoft's recommendations and aligned their physical sites with AD sites. But shops that either didn’t do this or couldn’t -- for political, business, cost or inheritance reasons -- may struggle to fit Exchange routing’s square peg into their AD sites’ round hole.
In my opinion, requiring that Exchange’s routing follow Active Directory’s site structure is fundamentally unnecessary. In fact, no version prior to Exchange 2007 did this. Instead, previous versions of Exchange used routing groups and routing group connectors for topology generation. Not every company enjoys a well-designed AD site structure, and forcing Exchange’s routing into that design is an annoyance many would love to eliminate.
CAS high availability is complicated
The option to split Exchange into different server roles represents a smart move for horizontal expansion. Separating out hub transport duties from those responsible for client access, for example, enables an IT shop to grow in a measured way. And although Exchange’s role separation works, some aspects were not fully fleshed out prior to release.
Take client access servers, for example. If the entire concept of role separation focuses on horizontal scalability, then why isn’t horizontal expansion a click-and-go process? Instead, creating a client access server (CAS) array requires a complicated integration of Windows Network Load Balancing (NLB), Domain Name System (DNS) modifications and manual mailbox database updates.
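Here is what the Exchange side of that integration looks like in the Management Shell -- and note that it does nothing about the NLB cluster or the DNS record, which must be built separately. The array name, FQDN and AD site below are hypothetical placeholders.

```powershell
# Sketch only -- hypothetical names, requires an Exchange 2010 org.
# Prerequisites handled elsewhere: an NLB virtual IP on the CAS servers
# and a DNS record pointing outlook.contoso.com at that IP.

# Register the array object for the AD site.
New-ClientAccessArray -Name "CASArray1" `
    -Fqdn "outlook.contoso.com" -Site "Default-First-Site-Name"

# Manually point every mailbox database at the array, not a single CAS.
Get-MailboxDatabase | Set-MailboxDatabase `
    -RpcClientAccessServer "outlook.contoso.com"
```

Three separate tools -- NLB Manager, DNS and the shell -- for one scalability feature is a far cry from click-and-go.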
To further complicate matters, because NLB and Windows Failover Clustering can’t coexist on the same server, you can’t conserve hardware by building a CAS array on DAG-enabled mailbox servers. This unfairly punishes IT shops that aren’t big enough to need full role separation, but are big enough to need the same level of high availability.
Exchange Server quirks run deep
Because Microsoft has a history of basing its technologies on other technologies, many Exchange Server quirks can be traced back to the underlying platforms and services it depends on. In fact, if you dig deeper into Exchange’s biggest shortcomings, you’ll find that many of them stem from deficiencies in those lower layers.
Greg Shields, MVP, is a partner and principal technologist with Concentrated Technology. An IT industry analyst, author, speaker and trainer, Greg can be found at concentratedtech.com.
This was first published in April 2011