Ever since Exchange 2010, load balancing has been an essential part of highly available Exchange Server infrastructures.
While the new Exchange 2013 architecture allows for other, perhaps easier, ways to support load balancing requirements, there are still various options to choose from. In part one of this two-part series, we'll explore the basics and a few different load-balancing options.
As with most IT topics, it's best to start with the basics. This is no different with load balancing for Exchange 2013. When setting up load balancing for your Exchange infrastructure, the following concepts will always be involved:
- A virtual service (VS) -- also often called a virtual IP -- which is configured on the load balancer. The VS is usually a unique combination of an IP address and a TCP port, like 192.168.10.100 and port 443.
- Each VS contains a set of rules that defines the load-balancing logic it will use. Along with that rule set, each VS also contains two or more "real" servers. Traffic that hits the VS is forwarded to one of these real servers.
- Each VS also contains what is known as a health check. The load balancer executes this health check against the underlying servers in order to determine whether they are available to route traffic to.
For example, if a server is down, it is removed from the pool of servers and will not receive traffic until it is available. Depending on the make and model of your load balancer, a health check may be anything from a simple ping to an advanced script that checks end-to-end functionality of the VS on the real servers.
When you put these concepts together, you end up with a schematic overview that looks like what you'll see in Figure 1.
Figure 1. Overview of an Exchange 2013 load-balancing setup.
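The health-check behavior described above can be sketched in a few lines. This is purely illustrative, assuming each real server can be probed by some check function; the server names and the `probe` callable are hypothetical, and a real appliance implements this logic internally.

```python
# Minimal sketch of a load balancer's health-check pass. Server names
# and the `probe` callable are hypothetical.

def refresh_pool(servers, probe):
    """Return the subset of real servers that pass their health check.

    `probe` takes a server name and returns True when the server is
    available -- it could be anything from a simple ping to a script
    that checks end-to-end functionality, depending on the appliance.
    """
    return [server for server in servers if probe(server)]

# Example: "exch2" is down, so it is removed from the pool and receives
# no traffic until its health check passes again.
status = {"exch1": True, "exch2": False, "exch3": True}
pool = refresh_pool(["exch1", "exch2", "exch3"], lambda s: status[s])
print(pool)  # ['exch1', 'exch3']
```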
Layer 4 vs. Layer 7 load balancing
Load balancing typically operates at two different layers: Layer 4 (L4) and Layer 7 (L7). When talking about load balancing, we're really talking about networking, right? Think hard and I'm sure you'll remember some -- if not all -- of the layers of the famous OSI model. If you don't, worry not. Here's a quick summary.
With L4 load balancing, you're actually operating in the most rudimentary way possible. The load balancing device is only aware of two things: the destination IP address and the TCP port where traffic hits the load balancer. The load balancer doesn't know -- or care -- about the traffic or content passing through.
As a result, the load balancer can -- and will -- only route and load balance traffic based on the unique combination of the destination IP address and TCP port. Assuming a service is only available on a single combination of IP address and TCP port, the load balancer can maintain only a single health check per service, along with a single set of other settings such as persistence -- also known as stickiness -- which controls the affinity between a client and a particular server.
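The L4 view can be sketched as follows: the only routing key is the destination (IP address, TCP port) pair, and the payload is never inspected. The IP address, port and server names below are illustrative, and round-robin stands in for whatever distribution logic the appliance uses.

```python
import itertools

# Sketch of L4 load balancing: one VS = one (IP, port) combination with
# one pool. The traffic content plays no role in the routing decision.
services = {
    ("192.168.10.100", 443): itertools.cycle(["exch1", "exch2"]),
}

def route_l4(dest_ip, dest_port):
    """Pick the next real server for a connection, round-robin."""
    return next(services[(dest_ip, dest_port)])

print(route_l4("192.168.10.100", 443))  # exch1
print(route_l4("192.168.10.100", 443))  # exch2
```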
L7 load balancing, on the other hand, does allow the load balancer to read the data in the packets flowing through it. Doing so lets the load balancer apply routing and load-balancing logic that is not limited to the IP address and port, but can also be based on the traffic content.
As a result, the load balancer can make different routing decisions based on traffic content. So how exactly does this apply to Exchange 2013? Well, a load balancer in Exchange 2013 can distinguish between OWA 2013 traffic and Outlook Anywhere traffic. Therefore, the load balancer sends traffic for OWA to one set of servers and traffic for Outlook Anywhere to a different set.
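That content-based decision can be sketched like this. The URL prefixes reflect the real virtual directories Exchange 2013 uses for OWA and Outlook Anywhere, but the pool names and the fallback behavior are hypothetical; an actual L7 appliance expresses the same idea through its own rule syntax.

```python
# Sketch of L7 routing: because the balancer can read the HTTP request,
# it can send OWA and Outlook Anywhere traffic to different server sets.
# Pool names are illustrative.
POOLS = {
    "/owa": ["owa-srv1", "owa-srv2"],  # Outlook Web App pool
    "/rpc": ["oa-srv1", "oa-srv2"],    # Outlook Anywhere (RPC over HTTP) pool
}

def route_l7(request_path):
    """Choose a server pool based on the content of the request."""
    for prefix, pool in POOLS.items():
        if request_path.lower().startswith(prefix):
            return pool
    return POOLS["/owa"]  # hypothetical default pool

print(route_l7("/owa/auth/logon.aspx"))  # ['owa-srv1', 'owa-srv2']
print(route_l7("/rpc/rpcproxy.dll"))     # ['oa-srv1', 'oa-srv2']
```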
Load balancing and SSL
Exchange secures HTTP traffic with Secure Sockets Layer (SSL) for most workloads. The downside of load balancing SSL traffic at L7 is that the load balancer must decrypt the traffic before it can read it.
Note: This is a requirement if you want your load balancer to make routing decisions based on content.
The decryption process -- and the re-encryption of traffic before it is sent on to Exchange, where that is done -- may end up using a good deal of resources on your load balancer (mostly CPU and memory). The process of decrypting traffic on the load balancer but not re-encrypting it is known as SSL offloading.
Luckily, many devices include a dedicated CPU that's only used for decrypting/re-encrypting SSL traffic. Unfortunately, they come with a higher price tag.
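The two ways of handling SSL can be modeled in a toy sketch. Base64 stands in for real TLS here purely for illustration -- it is not encryption -- and the mode names are informal labels, not vendor terminology.

```python
import base64

# Toy model of SSL handling on a load balancer. base64 is a stand-in
# for real TLS encryption; this is purely illustrative.

def decrypt(blob):
    return base64.b64decode(blob).decode()

def encrypt(text):
    return base64.b64encode(text.encode()).decode()

def forward(blob, mode):
    """Decrypt the client's traffic so it can be inspected, then either
    re-encrypt it before sending it to Exchange, or pass it on in the
    clear ("offload")."""
    plaintext = decrypt(blob)  # required for content-based routing
    if mode == "offload":
        return plaintext       # Exchange receives unencrypted HTTP
    return encrypt(plaintext)  # Exchange receives re-encrypted traffic

client_traffic = encrypt("GET /owa HTTP/1.1")
print(forward(client_traffic, "offload"))  # GET /owa HTTP/1.1
```

Either way, the decrypt step always happens, which is why L7 balancing of SSL traffic is resource-hungry; the re-encrypt step is the part that offloading skips.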
Exchange 2013 load balancing: Physical, virtual or WNLB?
The decision of whether to choose a physical (hardware) load balancer, a virtual appliance, or an alternative solution like Windows Network Load Balancing (WNLB) depends entirely on your own specific requirements and personal biases.
Though it's supported in Exchange 2013, I am not a fan of WNLB and do not recommend it. When it comes to using WNLB, there are several factors -- especially in combination with a virtualized environment -- that I simply don't care for. On top of that, WNLB doesn't scale very well for larger environments and can only check the availability of the underlying real servers through a simple ping.
If you intend on using L4 or L7 load balancing for Exchange, I highly recommend either a hardware load balancer or virtual appliance.
Your choice will come down to your environment and, more importantly, your budget. Virtual appliances can leverage the high-availability features of your virtual infrastructure, whereas a physical appliance normally requires buying a second unit to make it redundant. Keep in mind, though, that the performance of a virtual appliance is tied directly to the hardware and performance of your virtual server farm.
About the author:
Michael is a technology consultant, Microsoft Certified Trainer and Exchange MVP from Belgium, mainly working with Exchange Server, Office 365, Active Directory and a bit of Lync. Michael has been active in the industry for 12 years and developed his love for Exchange back in 2000. He is a frequent blogger, member of the Belgian Unified Communications User Group Pro-Exchange and a regular contributor to The UC Architects podcast.