The days when IT departments had virtually unlimited budgets to build and support enterprise systems are long gone. Now IT departments often struggle to balance cost, quality and functionality, and many functions end up reduced to the bare essentials. But that isn't necessarily a bad thing.
This "bare essentials" approach reduces an IT department's budget footprint, but that doesn't mean it's the most cost-effective over time. These systems might cost more to maintain, or they might lose you money through decreased end-user productivity.
If you want a cost-effective on-premises Exchange deployment, keep the following things in mind.
Efficient systems cost less in the long run.
While I'm sure there are exceptions to the theory that efficient systems cost less to maintain, it holds up well for information and communications technology systems -- and for your IT department budget.
The key to achieving the highest possible levels of efficiency is to keep things as simple as possible and to automate as much as possible. As you deviate from the default system functionality, you introduce more complexity and higher building and maintenance costs.
When talking about default Exchange deployments, we're not referring to the server's ability to adhere to a set of requirements or regulations; we're referring to a default set of components and guidelines that make up your enterprise's messaging solution. A default Exchange deployment closely follows Microsoft's best practices, which are bundled in the Product Line Architecture for Exchange.
Without going into too much detail, you should look at deploying Exchange on JBOD storage -- for Exchange 2010, and even more so for Exchange 2013. Exchange was built with JBOD in mind, and JBOD is the simplest way to achieve the best results.
The default configuration is the simplest option.
Many deployments deviate from the default configuration in many ways, not the least of which is storage. Many companies steer away from JBOD and insist on using RAID or even SAN storage. But when you do a cost analysis, there are only a few cases where a RAID or SAN configuration makes more sense for your IT department budget.
Still, there are two sides to every story. If you deploy on JBOD, follow the design guidelines, which call for at least three database copies. For many companies, three or more database copies is a substantial increase over what they already have, and the additional cost of more servers might wipe out the savings gained from running on JBOD. Scale is important to get the most benefit from a JBOD deployment.
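The tradeoff described above can be sketched with a rough capital-cost comparison. All dollar figures and capacities below are illustrative assumptions, not vendor quotes or Microsoft guidance; plug in your own numbers before drawing conclusions.

```python
# Hypothetical cost comparison: JBOD with three database copies vs.
# RAID-10 with two copies. Every price here is an assumption for
# illustration only.

COST_PER_TB_JBOD = 30   # assumed price of large-capacity midline disk, per TB
COST_PER_TB_RAID = 90   # assumed price of enterprise RAID/SAN storage, per TB
SERVER_COST = 5000      # assumed price of one commodity mailbox server

def deployment_cost(copies, tb_per_copy, cost_per_tb, servers):
    """Rough capital cost: storage for every database copy plus servers."""
    storage = copies * tb_per_copy * cost_per_tb
    return storage + servers * SERVER_COST

usable_tb = 20  # total mailbox data to protect

# JBOD design guideline: at least three copies, one server per copy.
jbod = deployment_cost(copies=3, tb_per_copy=usable_tb,
                       cost_per_tb=COST_PER_TB_JBOD, servers=3)

# RAID alternative: two copies, but RAID-10 doubles the raw disk per copy.
raid = deployment_cost(copies=2, tb_per_copy=usable_tb * 2,
                       cost_per_tb=COST_PER_TB_RAID, servers=2)

print(f"JBOD (3 copies): ${jbod:,}")
print(f"RAID (2 copies): ${raid:,}")
```

With these assumed prices the two designs land close together, which is exactly why the extra servers a JBOD design demands can undo the cheaper disks at small scale.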
Leverage scale to reduce costs.
If an enterprise wants to achieve the highest possible cost reduction in its IT department budget, it has to leverage scale. This is only possible when an organization is big enough, which is a main reason smaller environments have a harder time adhering to a default deployment while still achieving significant cost reductions. The biggest differentiator is not the cost to build the solution, but the lower cost to support it.
The best example of scaling is Office 365. Microsoft can provide mailboxes at such an affordable price because it leverages the extremely large scale at which Office 365 operates and combines it with a simple architecture. Scaling out makes sense even for companies considerably smaller than Office 365.
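The scale effect is easy to see when a largely fixed support cost is amortized over the number of mailboxes served. The figures below are assumptions invented for illustration, not real Exchange or Office 365 economics.

```python
# Illustrative only: per-mailbox cost as a fixed platform cost is
# spread over more mailboxes. Both dollar figures are assumptions.

FIXED_SUPPORT_COST = 120_000    # assumed yearly cost to run and support the platform
VARIABLE_COST_PER_MAILBOX = 2   # assumed yearly per-mailbox cost (storage, licensing share)

def yearly_cost_per_mailbox(mailboxes):
    """Total yearly cost divided by the number of mailboxes served."""
    return FIXED_SUPPORT_COST / mailboxes + VARIABLE_COST_PER_MAILBOX

for n in (500, 5_000, 50_000):
    print(f"{n:>6} mailboxes -> ${yearly_cost_per_mailbox(n):,.2f} per mailbox per year")
```

The per-mailbox figure falls steeply as mailbox count grows, which is the arithmetic behind the claim that support cost, not build cost, is the biggest differentiator.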
Choose a software-defined or a hardware-defined approach.
The question that's always asked is whether software, rather than hardware, should control an application's availability. You used to rely on hardware-based resiliency mechanisms such as RAID, NIC teaming or SAN replication to provide resiliency for the application running on top. But the application itself was unaware of the high-availability capabilities of the platform it ran on, which occasionally caused odd behavior, especially when the infrastructure layer was already misbehaving.
This approach worked well in the past and continues to work well for many applications today. But Exchange is a system that had to evolve because hardware-based resiliency couldn't keep up with its functional requirements. Exchange is now an application fully capable of maintaining its own availability, regardless of the hardware platform it runs on. Combine this software-defined availability with low resource requirements and you can deploy Exchange on relatively affordable hardware.
An obvious advantage of a software-defined approach is that the application is self-aware and, unlike a hardware-based approach, can take into account software component failures. A hardware-based approach is oblivious to applications running on top of it, which is why hardware-based resiliency protects well against hardware issues of all sorts but doesn't protect an application or its data from software failures. It also can't guarantee the service's availability.
A combination of both approaches usually results in a highly available solution, but it comes at the expense of total cost. Microsoft's approach, which is largely inspired by Office 365, is to not really care whether a server has hardware issues. By not caring, I don't mean doing nothing when a server fails; rather, a single or even a double server outage should not impact the Exchange service in any way. This can only be done by totally abstracting the application layer from the infrastructure underneath. And to keep costs as low as possible, the preferable path is to use the lowest-cost enterprise hardware.
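A back-of-the-envelope calculation shows why surviving a double server outage points toward three or more database copies. The per-server downtime figure below is an assumption, and real failures are rarely fully independent, so treat this as a sketch of the reasoning rather than a sizing tool.

```python
# Sketch: if each server hosting a database copy is down with
# independent probability p, the chance that every copy is down at
# once is p ** copies. The 1% figure is an illustrative assumption.

def outage_probability(p_server_down, copies):
    """Probability that all servers holding a copy of a database are
    unavailable simultaneously, assuming independent failures."""
    return p_server_down ** copies

p = 0.01  # assume each server is unavailable 1% of the time
for copies in (1, 2, 3, 4):
    print(f"{copies} copies: service-down probability {outage_probability(p, copies):.0e}")
```

With three copies, any two servers can be down and a copy still remains, which is what "not caring" about a single or double outage means in practice.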
Part two of this series on building an affordable on-premises Exchange deployment covers cost analysis of different deployments, automating and standardizing tasks, and removing backups.
About the author:
Michael Van Horenbeeck is a technology consultant, Microsoft Certified Trainer and Exchange MVP from Belgium, mainly working with Exchange Server, Office 365, Active Directory and a bit of Lync. He has been active in the industry for 12 years and is a frequent blogger, a member of the Belgian Unified Communications User Group Pro-Exchange and a regular contributor to The UC Architects podcast.
This was first published in December 2013