Virtualization is an awesome technology. A few weeks ago I visited a customer and we took a walk through their data centers. While standing in one of them, I thought: imagine if all the servers that currently run as VMs were physical. I'm still impressed by the influence of virtualization. The idea is so simple: you share the resources of the physical hardware, I/O, network bandwidth, CPU cycles and memory, between multiple virtual instances. After nearly 10 years of experience with server virtualization I can tell that memory in particular is one of the weak points. When a customer experiences performance problems, they are mostly caused by a lack of storage I/O or memory.
The reason for this post
Today I'd like to write a bit about the memory management of hypervisors, in this case VMware ESXi and Microsoft Hyper-V. They are the leading hypervisors on the market (source: Magic Quadrant for x86 Server Virtualization Infrastructure). But there is another reason why I took a closer look at the memory management of Hyper-V: Microsoft's support policies and recommendations for Exchange servers in hardware virtualization environments. In the run-up to an Exchange migration project I took a quick look into Microsoft's TechNet, just to verify some questions. And then I stumbled over this statement, valid for Exchange 2013:
Exchange memory requirements and recommendations
Some hypervisors have the ability to oversubscribe or dynamically adjust the amount of memory available to a specific guest machine based on the perceived usage of memory in the guest machine as compared to the needs of other guest machines managed by the same hypervisor. This technology makes sense for workloads in which memory is needed for brief periods of time and then can be surrendered for other uses. However, it doesn’t make sense for workloads that are designed to use memory on an ongoing basis. Exchange, like many server applications with optimizations for performance that involve caching of data in memory, is susceptible to poor system performance and an unacceptable client experience if it doesn’t have full control over the memory allocated to the physical or virtual machine on which it’s running. As a result, using dynamic memory features for Exchange isn’t supported.
There are similar statements for Exchange 2007 and 2010. At first I thought "Okay, looks like the Exchange-on-NFS thing". Check Josh Odgers' blog post if you want to know more about that Exchange-on-NFS thing. If you're running your Exchange on NFS, don't read it: there is reason to believe that you will go out and shoot a Microsoft engineer after reading it. After a couple of seconds I thought "What does 'dynamic memory features' actually mean?" This was the beginning of a journey into the depths of hypervisor memory management.
Memory is the one component in a server that can't simply be time-shared. That's plausible: you can schedule multiple VMs onto a single CPU core using a time-slice mechanism, but you can't share a memory cell while a VM has data stored in it. Now you have a number of options. You can configure a static memory size for each VM. If you have 32 GB of memory in your virtualization host, you can run e.g. two VMs with 8 GB and four VMs with 4 GB of memory. But what if a VM needs more memory? Either you reduce the amount of memory for the other VMs, or you have to add memory to your host. Not very flexible. Now suppose that the VMs take full advantage of their configured memory only very rarely. In this case we can use the unused memory for the running VMs or for new VMs: we can oversubscribe the memory of the physical host. But this only works as long as the amount of actively used memory is less than or equal to the memory size of the host. We only have to react when a VM wants to take full advantage of its configured memory.
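The arithmetic behind this can be sketched in a few lines of Python. This is a toy model, not any hypervisor's API; all names and sizes are made up for illustration:

```python
# Toy illustration of memory oversubscription. All sizes in GB;
# the VM names and numbers are hypothetical.

HOST_MEMORY = 32

# Configured memory per VM: four 16 GB VMs -> 64 GB configured in total.
configured = {"vm1": 16, "vm2": 16, "vm3": 16, "vm4": 16}

# Memory each VM is actively using at some point in time.
active = {"vm1": 6, "vm2": 4, "vm3": 8, "vm4": 5}

overcommit_ratio = sum(configured.values()) / HOST_MEMORY
print(f"Configured: {sum(configured.values())} GB on a {HOST_MEMORY} GB host "
      f"(ratio {overcommit_ratio:.1f}:1)")

# Oversubscription only works while the sum of actively used memory
# stays at or below the physical host memory.
if sum(active.values()) <= HOST_MEMORY:
    print("Active working sets fit -> no reclamation needed")
else:
    print("Memory contention -> hypervisor must reclaim memory")
```

As long as the active working sets (23 GB here) fit into the host, nobody notices the 2:1 overcommitment; the interesting part is what the hypervisor does when they no longer fit.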
How does VMware ESXi manage its memory?
VMware ESXi uses four technologies to manage its memory:
- Transparent Page Sharing (TPS)
- Ballooning
- Memory Compression
- Swapping
Since the introduction of large pages (2 MB memory pages), TPS is only used under memory contention (thanks to Manfred for this hint). With TPS the memory is divided into pages and the hypervisor checks if some of the pages are identical. If this is the case, the hypervisor stores only one copy of the page and sets pointers to the identical ones. If you're running a lot of similar VMs, TPS can reduce the amount of used memory.

Ballooning uses a special driver inside the VM. The hypervisor can use this driver to allocate memory inside of a VM. The OS inside the VM then frees up memory that isn't used, and the hypervisor can reclaim that memory.

Memory compression is used shortly before the hypervisor has to swap to disk. If a memory page can be compressed by at least 50%, it's held in the memory compression cache (10% of the memory is reserved for this). Otherwise it's swapped to disk.

Swapping is the last technique. If there is no more memory left and the other techniques are used up to their maximum, memory pages are swapped out to disk.

Please note that this is a very rough summary. For more information, please check the VMware vSphere Resource Management Guide. With these techniques you can easily create four VMs with 16 GB memory each on a host with 32 GB memory. It's important to understand that the VMs together can only allocate less than 32 GB of memory, because the hypervisor also needs some memory for itself and for virtualization overhead. A VM needs at least the amount of overhead memory to start on VMware ESXi.
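The idea behind TPS can be sketched with a few lines of Python. This is a deliberately naive toy: real TPS hashes page contents and verifies candidate matches bit by bit before sharing, while this sketch simply uses the raw page bytes as dictionary keys. The VM names and page contents are invented for illustration:

```python
# Minimal sketch of Transparent Page Sharing: identical pages are
# stored once, and each (VM, page) entry points at the shared copy.

def share_pages(vm_pages):
    """vm_pages: dict mapping VM name -> list of page contents (bytes)."""
    store = {}       # unique page content -> shared physical slot id
    page_table = {}  # (vm name, page index) -> shared physical slot id
    for vm, pages in vm_pages.items():
        for i, page in enumerate(pages):
            # Reuse the existing slot if this content was seen before,
            # otherwise allocate a new one.
            slot = store.setdefault(page, len(store))
            page_table[(vm, i)] = slot
    return store, page_table

# Two similar VMs: three of four pages are identical (same guest OS).
vms = {
    "vm1": [b"kernel", b"libc", b"zeroed", b"app-a"],
    "vm2": [b"kernel", b"libc", b"zeroed", b"app-b"],
}
store, table = share_pages(vms)
print(f"{sum(len(p) for p in vms.values())} logical pages "
      f"stored as {len(store)} physical pages")
```

With two similar guests, eight logical pages collapse into five physical pages; the more alike the VMs are, the bigger the saving.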
How does Microsoft Hyper-V manage its memory?
Until Windows Server 2008 R2 SP1, Microsoft Hyper-V was unable to do dynamic memory management; only static memory allocation was possible. To stay with the example, it wasn't possible to start four 16 GB VMs on a 32 GB host with Hyper-V. During power-on, Hyper-V reserves the configured memory of the VM, which makes unused memory unavailable for other VMs.

With Windows Server 2008 R2 SP1, Microsoft added dynamic memory management to Hyper-V. Since then you can enable dynamic memory on VM level. After enabling it for a VM you can set a so-called "Startup RAM". This is the amount of memory which is assigned to the VM during startup, because Windows needs more memory during startup than in steady state (source). It should be set to the amount of memory which is needed to run the server and application with the desired performance. You can also configure a "Minimum RAM": this is the floor down to which the hypervisor can reclaim memory using a ballooning technique. And alongside the "Minimum RAM", you can configure a "Maximum RAM": the limit up to which the hypervisor can add memory to the VM.

And now comes the interesting part. Any idea how the hypervisor adds memory to the VM? No? It uses memory hot-add! If the VM needs more memory, it's simply hot-added to the VM. This explains why the OS inside the VM has to support memory hot-add if you want to use dynamic memory. And it also explains why some applications are not supported with Hyper-V dynamic memory.
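The interplay of Startup, Minimum and Maximum RAM can be modeled in a short sketch, assuming the behavior described above: boot with Startup RAM, hot-add on rising demand up to Maximum RAM, balloon down on falling demand but never below Minimum RAM. The class and its values are illustrative, not a Hyper-V API:

```python
# Toy model of Hyper-V dynamic memory. Values in MB; names invented.

class DynamicMemoryVM:
    def __init__(self, startup, minimum, maximum):
        assert minimum <= startup <= maximum
        self.minimum, self.maximum = minimum, maximum
        self.assigned = startup  # the VM boots with its Startup RAM

    def demand_changed(self, demand):
        """Adjust assigned memory toward demand within the configured limits."""
        if demand > self.assigned:
            # More memory needed: hot-add, capped at Maximum RAM.
            self.assigned = min(demand, self.maximum)
        else:
            # Pressure dropped: balloon down, floored at Minimum RAM.
            self.assigned = max(demand, self.minimum)
        return self.assigned

vm = DynamicMemoryVM(startup=2048, minimum=1024, maximum=8192)
print(vm.demand_changed(6144))   # hot-added up to the demand
print(vm.demand_changed(512))    # ballooned down, floored at Minimum RAM
print(vm.demand_changed(16384))  # capped at Maximum RAM
```

Note that the "hot-add" branch is exactly the part a guest OS, and the application running in it, has to cope with at runtime.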
IMHO it's not dynamic memory management itself which leads to Microsoft's support statement, it's the way Hyper-V manages dynamic memory. Exchange checks the configured memory during the start of its services. If the memory size is increased after the services have started, Exchange simply doesn't recognize it. Microsoft SQL Server, on the other hand, can benefit from hot-added memory. Because of this, dynamic memory is supported with Microsoft SQL Server (check question 7 and answer 7 in the linked KB article). VMware ESXi doesn't hot-add memory to a VM, so instead of relying on memory hot-add you configure a suitable memory size upfront. If you do hot-add memory, the same restrictions apply as for Hyper-V. But always remember: memory oversubscription can lead to performance problems if the VMs try to allocate their configured memory! Best practice is not to oversubscribe memory.
Memory management in ESXi and Hyper-V differs strongly. There's no better or worse: they are too different to compare, and they were developed for different use cases.
Patrick Terlisten