Despite the many discussions about whether you should virtualize Exchange, and the fact that Exchange 2013 was designed with physical hardware in mind, let's be clear: there is nothing wrong with wanting to virtualize Exchange Server 2013. That doesn't mean it's always the right choice, though. Let's look at what might influence the decision to virtualize Exchange and whether it makes sense to do so on a global level.
Exchange 2013 virtualization involves yet another storage discussion
When talking about Exchange 2013 virtualization, you'll inevitably talk about storage. This is one of the most discussed topics when it comes to designing an Exchange messaging environment.
A virtual environment typically uses shared storage to enable features such as live migration (or vMotion, for the VMware fans among us). This feature flexibly moves virtual machines (VMs) from one host to another and offers several benefits; more efficient resource allocation and high availability (HA) are the most important.
Series on Exchange 2013 virtualization
This is part one in a series about virtualizing Exchange 2013. Stay tuned for part two, which covers data deduplication, high availability and the arguments to consider as you decide if Exchange 2013 virtualization is right for your organization.
While Exchange 2013 can live on SAN (at least when the storage meets the minimum requirements), Microsoft designed it to run on local JBOD storage. This article won't discuss which one is better, but using JBOD for Exchange 2013 makes sense from both a cost and feature perspective. Exchange MVP and SearchExchange contributor Steve Goodman wrote a similar article discussing the sense and nonsense of JBOD for Exchange 2013 that's worth the read.
Additionally, many virtual environments use the Network File System (NFS) protocol for their shared storage. While NFS is fully supported for the Windows operating system itself, it cannot be used to host any Exchange-related data. This means you would potentially have to configure your storage to use iSCSI or Fibre Channel just for Exchange. You might find that NFS appears to work fine if you try it, but don't take that as an invitation to start using it. There's more to the story, and Microsoft doesn't support it. The NFS discussion recently reached new heights, which likely prompted Microsoft to reconfirm its support statement and provide more background information at its most recent Microsoft Exchange Conference.
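If your shared storage only exposes NFS to the rest of the virtual environment, one common workaround is to present a dedicated iSCSI LUN directly to the Exchange guest through the in-guest iSCSI initiator. As a rough sketch, using the built-in Windows iSCSI cmdlets (the portal address and target IQN below are placeholders for your own SAN):

```powershell
# Register the SAN's iSCSI portal with the guest's initiator.
# "10.0.0.50" is a placeholder for your storage array's portal address.
New-IscsiTargetPortal -TargetPortalAddress "10.0.0.50"

# Connect to the target hosting the Exchange database LUN.
# The IQN is a placeholder; -IsPersistent reconnects the session after a reboot.
Connect-IscsiTarget -NodeAddress "iqn.2013-01.com.example:exchange-db01" -IsPersistent $true

# Verify the session came up before bringing the disk online in Disk Management.
Get-IscsiSession
```

Whether this is appropriate depends on your storage design; the point is simply that Exchange data needs block storage (iSCSI or Fibre Channel), even if everything else in the environment sits on NFS.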
Resource allocation for Exchange 2013 virtualization
Organizations often decide to invest in virtualization platforms to make better use of resources. By having multiple VMs coexist on a single physical host, you can make use of the resources on that host more efficiently. This approach probably works well with the majority of applications out there, but Exchange isn't particularly keen on sharing its resources.
Although the underlying virtualization platform abstracts the resource sharing away from the VM, you're still sharing in essence. This observation might be biased, but when I ask organizations why they virtualize, more efficient use of shared resources is the main reason their virtualization admins give me.
When I talk to a customer about virtualizing Exchange, I tell them to allocate fixed resources to it. This is done by giving the VM prioritized, guaranteed access to resources through resource reservations; depending on the hypervisor platform, the actual feature name varies. This lets you reserve a fixed amount of resources for the VM so it always has access to them. However, this clashes with one of the main benefits of virtualization: flexibility.
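On Hyper-V, for example, such a reservation might look like the following sketch. The VM name and sizes are illustrative, and on vSphere you would configure the equivalent CPU and memory reservations in the VM's resource settings instead:

```powershell
# Hyper-V: give the Exchange VM a fixed, guaranteed slice of the host.
# "EX01" and the sizes below are placeholders for your own design.

# Disable dynamic memory and pin the full amount the sizing calls for
Set-VMMemory -VMName "EX01" -DynamicMemoryEnabled $false -StartupBytes 96GB

# Assign eight vCPUs and reserve 100% of their capacity for this VM
Set-VMProcessor -VMName "EX01" -Count 8 -Reserve 100
```

With reservations like these in place, the hypervisor guarantees the resources, but it also stops sharing them with anything else, which is exactly the trade-off described above.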
To best explain this, consider the following scenario. You designed an Exchange 2013 environment consisting of four multi-role Exchange servers, each needing a whopping 96 GB of RAM and eight virtual CPUs (vCPUs). These requirements are hypothetical, but they're not uncommon. Sizing Exchange in a virtual environment is no different from sizing it in a physical one, so you must make sure Exchange always has access to these resources, even when other VMs move between hosts. This means you would need to reserve 96 GB of RAM on each physical machine. That's 96 GB of RAM just sitting there without you being able to use it for anything else.
The same goes for the vCPUs. I've seen cases in which an entire physical host was dedicated to a single Exchange VM. Unless your existing virtual environment can absorb these workloads, buying new physical machines to host a single virtual Exchange server isn't cost-effective, especially when you consider that the hypervisor's overhead costs you roughly 10% of the hardware's capacity. If that's the case, why not skip virtualization altogether and just run Exchange on those physical servers instead?
But it's not unthinkable. I've seen physical hosts with several hundred gigabytes of RAM and a massive amount of compute power. Truth be told, these are rather rare compared to mainstream machines, which could also run Exchange 2013 perfectly well as physical servers. Memory and compute power aren't the only important elements, either. Storage also comes into play, and you might have already invested in a SAN with several terabytes waiting to be used.
About the author:
Michael Van Horenbeeck is a technology consultant, Microsoft Certified Trainer and Exchange MVP from Belgium, mainly working with Exchange Server, Office 365, Active Directory and a bit of Lync. He has been active in the industry for 12 years and is a frequent blogger, a member of the Belgian Unified Communications User Group Pro-Exchange and a regular contributor to The UC Architects podcast.