
Tune ESXi VM Memory Configuration


Tuning Configuration

  • Minimum memory size is 4MB for virtual machines that use BIOS firmware. Virtual machines that use EFI firmware require at least 96MB of RAM or they cannot power on.
  • The memory size must be a multiple of 4MB (both rules are checked in the sketch below)
  • vNUMA exposes NUMA topology to the guest OS. Hosts must have a matching NUMA architecture and VMs must be running Hardware Version 8
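
Both sizing rules are easy to verify before provisioning. The following is a minimal Python sketch (the function name and firmware labels are my own, not a VMware API) that checks a proposed memory size against the BIOS/EFI minimums and the 4MB-multiple rule:

```python
# Check a proposed VM memory size against the constraints above:
# BIOS VMs need at least 4 MB, EFI VMs at least 96 MB, and the size
# must be a multiple of 4 MB in either case.

MIN_MEMORY_MB = {"bios": 4, "efi": 96}

def validate_memory_size(size_mb: int, firmware: str = "bios") -> None:
    minimum = MIN_MEMORY_MB[firmware.lower()]
    if size_mb < minimum:
        raise ValueError(f"{firmware} VMs need at least {minimum}MB, got {size_mb}MB")
    if size_mb % 4 != 0:
        raise ValueError(f"memory size must be a multiple of 4MB, got {size_mb}MB")

validate_memory_size(1024, "efi")    # passes silently
# validate_memory_size(94, "efi")    # would raise: below the 96MB EFI minimum
```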


  • Size VMs so they align with physical NUMA boundaries. If you have a system with 6 cores per NUMA node, then size your machines with a multiple of 6 vCPUs
  • vNUMA can be enabled on smaller machines by adding numa.vcpu.maxPerVirtualNode=X (where X is the number of vCPUs per vNUMA node)
  • Enable Memory Hot Add to be able to add memory to VMs on the fly (both settings are scripted in the sketch below)
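
Both settings can also be applied with a scripted reconfigure. Here is a minimal pyVmomi sketch, assuming `vm` is a `vim.VirtualMachine` you have already looked up through a connected ServiceInstance and that the VM is powered off:

```python
# Enable memory hot add and cap vCPUs per vNUMA node on an existing VM.
# Assumes `vm` is a vim.VirtualMachine obtained from a live pyVmomi session.
from pyVmomi import vim

spec = vim.vm.ConfigSpec()
spec.memoryHotAddEnabled = True  # must be set while the VM is powered off
spec.extraConfig = [
    # 6 matches the 6-cores-per-NUMA-node example above; adjust to your host
    vim.option.OptionValue(key="numa.vcpu.maxPerVirtualNode", value="6")
]
vm.ReconfigVM_Task(spec)  # returns a task; wait on it in real scripts
```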


  • Use operating systems that support large memory pages, as ESXi will by default provide them to guest operating systems that request them
  • Store a VM’s swap file in a different, faster location than its working directory
  • Configure a host cache on an SSD (if one is installed) to be used for the swap-to-host-cache feature. Host cache is new in vSphere 5. If you have a datastore that lives on an SSD, you can designate space on that datastore as host cache. Host cache acts as a cache for all virtual machines on that particular host, as write-back storage for virtual machine swap files. This means that pages that need to be swapped to disk will swap to host cache first, and then be written back to the particular swap file for that virtual machine
  • Keep virtual machine swap files on low latency, high bandwidth storage systems
  • Do not store swap files on thin-provisioned LUNs. This can cause swap file growth to fail.


  • You can use Limits, Reservations and Shares to control resources per VM (see the sketch below)
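
All three controls map onto the VM’s `memoryAllocation` in the vSphere API. A minimal pyVmomi sketch, again assuming `vm` is an already-retrieved `vim.VirtualMachine` (the MB values are purely illustrative):

```python
# Set per-VM memory reservation, limit and shares.
from pyVmomi import vim

alloc = vim.ResourceAllocationInfo()
alloc.reservation = 1024  # MB guaranteed even when the host is overcommitted
alloc.limit = 4096        # MB upper bound; -1 means unlimited
alloc.shares = vim.SharesInfo(level="custom", shares=2000)

vm.ReconfigVM_Task(vim.vm.ConfigSpec(memoryAllocation=alloc))
```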


Tune ESXi Host Memory Configuration


Tuning Options

  • In addition to the usual 4KB memory pages, ESX also makes 2MB memory pages available (commonly referred to as “large pages”). By default ESX assigns these 2MB machine memory pages to guest operating systems that request them, giving the guest operating system the full advantage of using large pages. The use of large pages results in reduced memory management overhead and can therefore increase hypervisor performance.
  • Hardware-assisted MMU is supported for both AMD and Intel processors beginning with ESX 4.0 (AMD processor support started with ESX 3.5 Update 1). On processors that support it, ESX 4.0 by default uses hardware-assisted MMU virtualization for virtual machines running certain guest operating systems and uses shadow page tables for others
  • Carefully select the amount of memory you allocate to your virtual machines. You should allocate enough memory to hold the working set of applications you will run in the virtual machine, thus minimizing swapping, but avoid over-allocating memory (see the right-sizing sketch after this list)
  • Understand Limits, Reservations, Shares and Working Set Size
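
One way to sanity-check your allocations is to compare each VM’s configured memory with the active memory ESXi is reporting. A rough pyVmomi sketch, assuming `content` is the `RetrieveContent()` result of a live connection (the 4x threshold is an arbitrary example, not a VMware recommendation):

```python
# Flag VMs whose configured memory is far above their active working set.
from pyVmomi import vim

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    configured = vm.config.hardware.memoryMB          # allocated memory (MB)
    active = vm.summary.quickStats.guestMemoryUsage   # active guest memory (MB)
    if active and configured > 4 * active:
        print(f"{vm.name}: {configured}MB configured, ~{active}MB active")
view.DestroyView()
```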


  • If swapping cannot be avoided, placing the virtual machine’s swap file on a high speed/high bandwidth storage system will result in the smallest performance impact. The swap file location can be set with the sched.swap.dir option in the vSphere Client (select Edit virtual machine settings, choose the Options tab, select Advanced, and click Configuration Parameters), or programmatically as sketched below
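
A minimal pyVmomi sketch for setting the same option through the API, assuming `vm` is an existing `vim.VirtualMachine` and the datastore path is a placeholder for your fast storage:

```python
# Redirect the VM's swap file via the sched.swap.dir advanced option.
from pyVmomi import vim

spec = vim.vm.ConfigSpec(extraConfig=[
    vim.option.OptionValue(key="sched.swap.dir",
                           value="/vmfs/volumes/ssd-datastore/swap")  # placeholder path
])
vm.ReconfigVM_Task(spec)
```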


  • A new feature that VMware introduced with vSphere 5 is the ability to swap to host cache using a solid state disk. In the event that overcommitment leads to swapping, the swapping can occur on an SSD, a much quicker alternative to traditional disks. Highlight a host > Configuration > Software > Host Cache Configuration


  • Use the Mem.ShareScanTime and Mem.ShareScanGHz advanced settings to control the rate at which the system scans memory to identify opportunities for sharing memory. You can also disable sharing for individual virtual machines by setting the sched.mem.pshare.enable option to FALSE (this option defaults to TRUE). Both are shown in the sketch below
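
Both knobs are scriptable: the Mem.* settings live in the host’s advanced options, and the per-VM override is an extraConfig entry. A hedged pyVmomi sketch, assuming `host` is a `vim.HostSystem` and `vm` a `vim.VirtualMachine`; the scan time is an example value, and ESXi may insist the value be typed as a long rather than a plain int:

```python
# Slow the page-sharing scan to once per 60 minutes host-wide, and opt
# one VM out of page sharing entirely. Values are examples only.
from pyVmomi import vim

host.configManager.advancedOption.UpdateOptions(changedValue=[
    vim.option.OptionValue(key="Mem.ShareScanTime", value=60)  # may need a long
])

spec = vim.vm.ConfigSpec(extraConfig=[
    vim.option.OptionValue(key="sched.mem.pshare.enable", value="FALSE")
])
vm.ReconfigVM_Task(spec)
```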


  • Don’t disable the other memory overcommitment techniques: ballooning, page sharing and memory compression

Memory Over-Allocation for VMs – What Happens?

ESX employs a share-based allocation algorithm to achieve efficient memory utilization for all virtual machines and to guarantee memory to those virtual machines which need it most

ESX provides three configurable parameters to control the host memory allocation for a virtual machine

  • Shares
  • Reservation
  • Limit

Limit is the upper bound of the amount of host physical memory allocated for a virtual machine. By default, limit is set to unlimited, which means a virtual machine’s maximum allocated host physical memory is its specified virtual machine memory size

Reservation is a guaranteed lower bound on the amount of host physical memory the host reserves for a virtual machine even when host memory is overcommitted.

Memory Shares entitle a virtual machine to a fraction of available host physical memory, based on a proportional-share allocation policy. For example, a virtual machine with twice as many shares as another is generally entitled to consume twice as much memory, subject to its limit and reservation constraints.

Periodically, ESX computes a memory allocation target for each virtual machine based on its share-based entitlement, its estimated working set size, and its limit and reservation. Here, a virtual machine’s working set size is defined as the amount of guest physical memory that is actively being used. When host memory is undercommitted, a virtual machine’s memory allocation target is the virtual machine’s consumed host physical memory size with headroom
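
The proportional-share mechanics are easy to see in a toy model. The plain-Python sketch below (my own simplification: it ignores working set estimation and the idle memory tax) hands out host memory in proportion to shares, clamps each VM to its reservation and limit, and redistributes whatever the clamps free up:

```python
# Toy model of share-based memory allocation with reservation/limit clamps.
def allocation_targets(host_mb, vms):
    targets, pool, active = {}, host_mb, list(vms)
    while active:
        total = sum(v["shares"] for v in active)
        fair = {v["name"]: pool * v["shares"] / total for v in active}
        clamped = [v for v in active
                   if not v["reservation"] <= fair[v["name"]] <= v["limit"]]
        if not clamped:          # everyone fits: hand out the fair shares
            targets.update(fair)
            break
        for v in clamped:        # pin clamped VMs, redistribute the rest
            amount = min(max(fair[v["name"]], v["reservation"]), v["limit"])
            targets[v["name"]] = amount
            pool -= amount
            active.remove(v)
    return targets

vms = [
    {"name": "a", "shares": 2000, "reservation": 0,    "limit": 8192},
    {"name": "b", "shares": 1000, "reservation": 0,    "limit": 8192},
    {"name": "c", "shares": 1000, "reservation": 4096, "limit": 8192},
]
# a gets roughly twice b's allocation; c is lifted to its 4096MB reservation.
print(allocation_targets(12288, vms))
```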

VMware Memory Explained

Great pic showing Memory calculations from VMware

Virtual Machine Overhead

VM’s host memory usage = VM’s guest memory size + VM’s overhead memory

Each VM running on a vSphere host consumes some memory overhead in addition to the current usage of its configured memory. This extra memory is needed by ESX for internal data structures like the virtual machine frame buffer and the mapping table for memory translation (mapping guest physical memory to the actual machine memory)
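
In code the identity above is a simple addition; the sketch below exists only to show the shape of the calculation, and the overhead figure is a made-up placeholder (real values depend on vCPU count, configured memory and hardware version, per the overhead table referenced at the end):

```python
# Host memory usage = guest memory size + overhead memory.
def host_memory_usage_mb(guest_memory_mb: int, overhead_mb: float) -> float:
    return guest_memory_mb + overhead_mb

print(host_memory_usage_mb(4096, 242.0))  # 242MB overhead is a placeholder value
```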

  • Virtual machine frame buffer

A framebuffer is a video output device that drives a video display from a memory buffer containing a complete frame of data.

  • Mapping table for memory translation (mapping guest physical memory to the actual machine memory)

The VMM is responsible for mapping guest physical memory to the actual machine memory, and it uses shadow page tables to accelerate the mappings. As depicted by the red line in the diagram, the VMM uses TLB (translation lookaside buffer) hardware to map the virtual memory directly to the machine memory to avoid the two levels of translation on every access. When the guest OS changes the virtual memory to physical memory mapping, the VMM updates the shadow page tables to enable a direct lookup.

Static overhead

This is the minimum amount of memory needed to start/boot the VM. DRS and the VMkernel use this metric for admission control and vMotion calculations. The destination host must be able to back the virtual machine reservation and the static overhead, otherwise the vMotion will fail.

Dynamic overhead

When the VM is powered on, the virtual machine monitor (VMM) can request additional memory space. The VMM will request the space, but the VMkernel is not required to supply it. If the VMM does not obtain the extra memory space, the virtual machine will continue to function, but this can lead to performance degradation. The VMkernel treats the virtual machine overhead reservation the same as a VM-level memory reservation and will not reclaim this memory

Memory Overhead Table