As new versions of Exchange Server are released, new clustering solutions are introduced and some are discontinued. This brings us to an interesting point in the Exchange Server lifecycle.
Currently, there are three versions of Exchange Server in common use -- Exchange Server 2003, Exchange Server 2007 and Exchange Server 2010 -- and their high-availability offerings differ significantly from one version to the next. Let's start by examining Exchange Server 2003.
Exchange Server 2003 doesn't formalize server roles the way later versions do, but it does distinguish between front-end and back-end servers. While both can benefit from clustering, the technologies used differ depending on the server's role.
Front-end Exchange 2003 servers
In Exchange Server 2003, a front-end server is nothing more than a Web server that hosts Outlook Web Access (OWA). In order to avoid having a single point of failure, and to get better performance, organizations often distribute OWA traffic across multiple front-end servers.
This is actually quite easy to do since front-end servers don't contain any mailbox or public folder data. Because all data resides on the back end, Exchange Server doesn't care which front-end server is servicing your OWA users.
There are two common methods for distributing OWA traffic across multiple front-end servers. One is DNS round robin: each front-end server is registered in DNS under the same host name with its own IP address, and the DNS server rotates the order of the addresses it returns, so successive clients are directed to different servers.
The downside is that DNS round robin is not fault tolerant. The DNS server has no way to monitor the front-end Exchange servers to determine whether they're online, so if a server fails, clients may still be directed to the failed server. A client might retry the request and land on a different front-end server, but it isn't insulated from the failure.
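A minimal Python sketch of why round robin isn't fault tolerant (the server addresses are hypothetical, chosen only for illustration):

```python
from itertools import cycle

# Hypothetical front-end server addresses, registered under one host name.
FRONT_END_SERVERS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

def dns_round_robin(servers):
    """Yield addresses in rotating order, as a round-robin DNS zone would."""
    return cycle(servers)

resolver = dns_round_robin(FRONT_END_SERVERS)
first_four = [next(resolver) for _ in range(4)]
# Each lookup simply returns the next address in rotation, whether or not
# that server is online -- DNS never health-checks the targets, so a failed
# front-end server keeps receiving its share of clients.
```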
A better solution is to use the Network Load Balancing Service (NLBS), which is available in Windows 2000 Advanced Server, Windows 2000 Datacenter Server and Windows Server 2003. NLBS supports up to 32 front-end Exchange servers. If a front-end server fails, existing connections to that server are lost, but subsequent connections are automatically directed to one of the remaining servers.
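The key difference from DNS round robin is that the load balancer knows which hosts are alive. The sketch below (hypothetical addresses, simplified health model -- real NLBS convergence is more involved) shows new connections being directed only to healthy servers:

```python
# Simplified model of health-aware load balancing: only servers that pass
# a health check receive NEW connections. Existing connections to a failed
# node are still lost, but fresh clients are never sent to it.
servers = {"10.0.0.11": True, "10.0.0.12": False, "10.0.0.13": True}  # False = failed

def pick_server(status, client_hash):
    """Map a client to one of the currently healthy servers."""
    healthy = sorted(ip for ip, up in status.items() if up)
    if not healthy:
        raise RuntimeError("no front-end servers available")
    return healthy[client_hash % len(healthy)]

# A client that would previously have mapped to the failed 10.0.0.12
# is now transparently directed to one of the surviving servers.
```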
Back-end Exchange 2003 servers
Back-end Exchange 2003 servers host mailbox and public folder databases. Because these servers contain data, they use a different clustering solution than front-end servers do. Although Exchange Server 2003 lets you create a single-node cluster, a single node offers no redundancy, so such a cluster accomplishes nothing on its own.
Microsoft requires the use of a shared hard disk for back-end server clusters with more than one node. Remember that back-end servers host Exchange server databases. If a cluster node fails, another node needs to be able to pick up where the failed node left off.
This is only possible if all of the cluster nodes have access to the same data. Although each cluster node is connected to the same hard disk, only one node has access to the hard disk at a time. Microsoft recommends that if more than two nodes share a hard disk, then your servers should connect to the disk using Fibre Channel.
Note: Software-based RAID implementations are not supported for shared hard drives.
Quorum disk resources
Exchange Server 2003 clusters depend on a quorum disk resource. The quorum disk resource is responsible for maintaining the cluster's configuration, including the cluster database checkpoint, resource checkpoints and the cluster log. Exchange clusters that use shared storage use a standard quorum -- also known as a single quorum. In this case, the quorum disk resource is hosted on the shared disk.
While this is the only type of quorum that Windows 2000 supports, Windows Server 2003 also supports the use of a majority node set quorum. In a majority node set quorum, the quorum data is stored on each individual cluster node's local hard disk. The Exchange Server databases are stored on shared disks regardless of which quorum type is used.
There are advantages and disadvantages to both quorum types. A standard quorum's advantage is that, because the quorum resource is centrally located on the shared disk, the cluster can keep running with fewer surviving nodes. The trade-off is that you must take measures to keep the disk holding the quorum resource from becoming a single point of failure.
Majority node set clusters require that a majority of the cluster nodes be running for the cluster to remain functional. With N nodes, a majority is defined as (N/2, rounded down) + 1. In Exchange Server 2003, this means that a majority node set cluster needs at least three cluster nodes: a two-node cluster would need both nodes up to reach a majority, so it couldn't survive a single failure.
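The majority arithmetic above can be sketched in a few lines of Python:

```python
def majority(n_nodes: int) -> int:
    """Smallest number of nodes that constitutes a majority of n_nodes."""
    return n_nodes // 2 + 1

# A two-node majority node set cluster needs both nodes up (2 // 2 + 1 = 2),
# so it cannot survive a single failure -- which is why three nodes is the
# practical minimum for this quorum type.
for n in (2, 3, 4, 5):
    print(f"{n} nodes: majority = {majority(n)}, "
          f"tolerable failures = {n - majority(n)}")
```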
ABOUT THE AUTHOR:
Brien M. Posey, MCSE, is a seven-time recipient of Microsoft's Most Valuable Professional (MVP) award for his work with Exchange Server, Windows Server, Internet Information Services (IIS), and File Systems and Storage. Brien has served as CIO for a nationwide chain of hospitals and was once responsible for the Department of Information Management at Fort Knox. As a freelance technical writer, Brien has written for Microsoft, TechTarget, CNET, ZDNet, MSD2D, Relevant Technologies and other technology companies. You can visit Brien's personal website at www.brienposey.com.
This was first published in February 2010