Tune ESXi VM Storage Configuration

Tuning Configuration

  • Use the correct virtual hardware for the VM O/S
  • Use paravirtual SCSI (PVSCSI) adapters for I/O-intensive applications
  • Use LSI Logic SAS for newer O/S’s
  • Size the Guest O/S queue depth appropriately
  • Make sure Guest O/S partitions are aligned (a quick alignment check sketch follows this list)
  • Know which disk provisioning policy suits the workload: Thick Provision Lazy Zeroed (default), Thick Provision Eager Zeroed or Thin Provision
  • Store the swap file on a fast or SSD datastore
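
As a quick illustration of the alignment bullet above, here is a minimal sketch in plain Python. The 1 MiB boundary is an assumption for the example, so substitute whatever boundary your storage array vendor recommends; partition start offsets can be read from tools such as fdisk or diskpart.

```python
# Minimal alignment check: a partition is treated as aligned when its starting
# byte offset is an exact multiple of the chosen boundary (1 MiB assumed here;
# substitute your array vendor's recommended value).

SECTOR_BYTES = 512              # sector size reported by fdisk/diskpart
BOUNDARY_BYTES = 1024 * 1024    # assumed alignment boundary of 1 MiB

def is_aligned(start_sector: int,
               sector_bytes: int = SECTOR_BYTES,
               boundary_bytes: int = BOUNDARY_BYTES) -> bool:
    """Return True when the partition's byte offset sits on the boundary."""
    return (start_sector * sector_bytes) % boundary_bytes == 0

# 63 is the old misaligned default start sector, 2048 the aligned default.
for start in (63, 2048):
    print(f"start sector {start}: {'aligned' if is_aligned(start) else 'MISALIGNED'}")
```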

  • When deploying a virtual machine, an administrator has a choice between three virtual disk modes: dependent, independent persistent and independent nonpersistent. For optimal performance, independent persistent is the best choice. The virtual disk mode can be modified when the VM is powered off.

  • Choose whether to use VMFS or RDM disks. RDM disks are generally only needed by clustering software such as MSCS.

  • Use disk shares to configure more fine-grained resource control

  • In some cases, large I/O requests issued by applications can be split by the guest storage driver. Changing the VM’s registry settings to issue larger block sizes can eliminate this splitting and so enhance performance. See http://kb.vmware.com/kb/9645697

Tune ESXi VM Network Configuration

Tuning Configuration

  • Use the VMXNET3 adapter; if it is not supported by the guest O/S, use the VMXNET/VMXNET2 adapter instead (an adapter-selection sketch follows this list)

  • Use a network adapter that supports TCP checksum offload, TSO, jumbo frames, multiqueue support (also known as Receive Side Scaling in Windows), IPv6 offloads, and MSI/MSI-X interrupt delivery
  • Use the fastest Ethernet you can; 10GbE is preferable
  • Ensure the speed and duplex settings on the network adapters are correct. For 10/100 NICs, set the speed and duplex manually and make sure the duplex is set to full duplex
  • For Gigabit Ethernet or faster NICs, set the speed and duplex to auto-negotiate
  • DirectPath I/O (DPIO) provides a means of bypassing the VMkernel, giving a VM direct access to hardware devices by leveraging Intel VT-d and AMD-Vi (IOMMU) hardware support. Specific to networking, DPIO allows a VM to connect directly to the host’s physical network adapter without the overhead associated with emulation or paravirtualization. The bandwidth increases associated with DPIO are nominal, but the savings on CPU cycles can be substantial for busy workloads. There are quite a few restrictions when utilizing DPIO. For example, unless you are using Cisco UCS hardware, DPIO is not compatible with hot-add, FT, HA, DRS or snapshots.
  • Use NIC teaming where possible, using either VMware’s proprietary network teaming or EtherChannel
  • Virtual Machine Communications Interface (VMCI) is a virtual device that provides enhanced communication between a virtual machine and the host on which it resides, and between VMs running on the same host. VMCI provides a high-speed alternative to standard TCP/IP sockets, and the VMCI SDK enables engineers to develop applications which take advantage of this infrastructure. With VMCI, application traffic between VMs on the same host bypasses the network layer, reducing communication overhead; it’s not uncommon for inter-VM traffic to exceed 10 GB/s
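
Tying together the adapter-selection advice above, here is an illustrative Python sketch. The guest capability set is an assumption for the example (not an exhaustive map), and ethernet0.virtualDev is used here as the usual VMX key for the adapter type; verify both against your own environment.

```python
# Illustrative helper: prefer the VMXNET3 paravirtual adapter when the guest
# has a vmxnet3 driver available (normally via VMware Tools), otherwise fall
# back to a widely emulated adapter type. The guest list is an assumption.

VMXNET3_CAPABLE = {"windows2008", "windows2012", "rhel6", "rhel7", "sles11"}

def vnic_vmx_line(guest_os_id: str, tools_installed: bool, index: int = 0) -> str:
    """Return a VMX line selecting the virtual NIC type for one adapter."""
    device = "vmxnet3" if tools_installed and guest_os_id in VMXNET3_CAPABLE else "e1000"
    return f'ethernet{index}.virtualDev = "{device}"'

print(vnic_vmx_line("rhel7", tools_installed=True))    # vmxnet3
print(vnic_vmx_line("winNT", tools_installed=False))   # e1000 fallback
```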

Tune ESXi VM CPU Configuration

Tuning Configuration

  • Configure multicore virtual CPUs with care. There are limitations and considerations on this subject, such as the ESXi host configuration, your VMware license and Guest OS license restrictions; only once these are understood can you decide on the number of virtual sockets and the number of cores per socket (a layout sketch follows these bullets)
  • CPU affinity is a technique that doesn’t necessarily imply load balancing; it can be used to restrict a virtual machine to a particular set of processors. Affinity may not persist after a vMotion and it can disrupt ESXi’s ability to apply and meet shares and reservations
  • Duncan Epping raises some good points in this link http://www.yellow-bricks.com/2009/04/28/cpu-affinity/
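
Here is a minimal sketch of the socket/core layout decision, in plain Python. It assumes numvcpus and cpuid.coresPerSocket are the VMX keys in play (verify against your ESXi version) and simply keeps cores-per-socket at or below the physical cores in one NUMA node while dividing the vCPU count evenly.

```python
# Sketch: pick a sockets x cores-per-socket layout for a VM so the topology the
# guest sees maps cleanly onto the host's NUMA nodes. VMX key names assumed.

def vcpu_layout(total_vcpus: int, cores_per_numa_node: int):
    """Return (sockets, cores_per_socket) plus matching VMX lines."""
    cores_per_socket = min(total_vcpus, cores_per_numa_node)
    while total_vcpus % cores_per_socket:        # need an even split
        cores_per_socket -= 1
    sockets = total_vcpus // cores_per_socket
    vmx = [f'numvcpus = "{total_vcpus}"',
           f'cpuid.coresPerSocket = "{cores_per_socket}"']
    return sockets, cores_per_socket, vmx

print(vcpu_layout(8, 6))   # -> 2 sockets x 4 cores on a 6-core-per-node host
```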

  • You can use Hot Add to add vCPUs on the fly (the option must be enabled while the VM is powered off, and the guest O/S must support it)

  • Check Hyperthreading is enabled

  • Generally keep CPU/MMU Virtualisation on Automatic

  • You can adjust Limits, Reservations and Shares to control CPU resources (a hedged pyVmomi sketch follows)
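
Since Limits, Reservations and Shares are usually set through the vSphere client, the following is only a hedged pyVmomi (Python) sketch of doing the same thing programmatically. It assumes the pyvmomi package and a reachable vCenter; the hostname, credentials, VM name and the reservation/limit/share values are all placeholders.

```python
# Hedged pyVmomi sketch: set CPU reservation, limit and custom shares on a VM.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                  # lab use only
si = SmartConnect(host="vcenter.example.local",          # placeholder vCenter
                  user="administrator@vsphere.local",    # placeholder user
                  pwd="changeme", sslContext=ctx)        # placeholder password
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-vm-01")  # placeholder VM name

spec = vim.vm.ConfigSpec()
spec.cpuAllocation = vim.ResourceAllocationInfo(
    reservation=1000,                                   # MHz guaranteed to the VM
    limit=2000,                                         # MHz ceiling (-1 = unlimited)
    shares=vim.SharesInfo(level="custom", shares=2000))
vm.ReconfigVM_Task(spec=spec)                           # returns a task to monitor
Disconnect(si)
```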

Tune ESXi VM Memory Configuration

Tuning Configuration

  • Minimum memory size is 4MB for virtual machines that use BIOS firmware. Virtual machines that use EFI firmware require at least 96MB of RAM or they cannot power on.
  • The memory size must be a multiple of 4MB
  • vNUMA exposes NUMA technology to the Guest O/S. Hosts must have matching NUMA architecture and VMs must be running Hardware Version 8

  • Size VMs so they align with physical NUMA boundaries. If you have a system with 6 cores per NUMA node, then size your machines with a multiple of 6 vCPUs (a sizing sketch follows these bullets)
  • vNUMA can be enabled on smaller machines by adding numa.vcpu.maxPerVirtualNode=X to the VMX file (where X is the number of vCPUs per vNUMA node)
  • Enable Memory Hot Add to be able to add memory to the VMs on the fly
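
A small sketch of the sizing rule above, in plain Python. The numa.vcpu.maxPerVirtualNode setting is the one named in the bullet; the rest is simple arithmetic.

```python
# Sketch: flag VMs whose vCPU count does not divide evenly into NUMA nodes.

def vnuma_advice(vcpus: int, cores_per_node: int) -> str:
    """Return a short sizing recommendation for a VM on this host."""
    if vcpus <= cores_per_node or vcpus % cores_per_node == 0:
        return f"{vcpus} vCPUs maps cleanly onto {cores_per_node}-core NUMA nodes"
    rounded = (vcpus // cores_per_node) * cores_per_node
    return (f"{vcpus} vCPUs straddles {cores_per_node}-core nodes unevenly; "
            f"consider {rounded} or {rounded + cores_per_node} vCPUs, or set "
            f'numa.vcpu.maxPerVirtualNode = "{cores_per_node}" on smaller VMs')

print(vnuma_advice(6, 6))
print(vnuma_advice(8, 6))
```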

  • Use operating systems that support large memory pages, as ESXi will by default provide them to guests which request them
  • Store a VM’s swap file in a faster location than its working directory
  • Configure a special host cache on an SSD (if one is installed) to be used for the swap-to-host-cache feature. Host cache is new in vSphere 5. If you have a datastore that lives on an SSD, you can designate space on that datastore as host cache. Host cache acts as a write-back cache for the swap files of all virtual machines on that particular host: pages that need to be swapped to disk are swapped to host cache first and then written back to the particular swap file for that virtual machine
  • Keep virtual machine swap files on low-latency, high-bandwidth storage (a sizing sketch follows this list)
  • Do not store swap files on thin provisioned LUNs. This can cause swap file growth to fail.
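
As a rough illustration of the swap placement advice above: on ESXi the .vswp file consumes the configured memory minus the memory reservation, so reserving memory shrinks the swap footprint and a dedicated swap datastore can be sized accordingly. The VM figures below are made up for the example.

```python
# Sketch: size a dedicated swap datastore from per-VM memory configuration.
# .vswp size = configured memory - memory reservation.

vms = [                       # (name, configured MB, reserved MB) - examples only
    ("web01", 8192, 0),
    ("db01", 32768, 16384),
    ("app01", 16384, 4096),
]

total_mb = sum(mem - res for _, mem, res in vms)
for name, mem, res in vms:
    print(f"{name}: .vswp ~ {mem - res} MB")
print(f"Swap datastore needs at least ~{total_mb / 1024:.1f} GB plus headroom")
```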

  • You can use Limits, Reservations and Shares to control Resources per VM

Tune ESXi host Storage Configuration

Tuning Configurations

  • Always use the storage vendor’s recommendations, whether it be EMC, NetApp or HP etc
  • Document all configurations
  • In a well-planned virtual infrastructure implementation, a descriptive naming convention aids in identification and mapping through the multiple layers of virtualization from storage to the virtual machines. A simple and efficient naming convention also facilitates configuration of replication and disaster recovery processes.
  • Make sure your SAN fabric is redundant (Multi Path I/O)
  • Separate networks for storage array management and storage I/O. This concept applies to all storage protocols but is very pertinent to Ethernet-based deployments (NFS, iSCSI, FCoE). The separation can be physical (subnets) or logical (VLANs), but must exist.
  • If leveraging an IP-based storage protocol I/O (NFS or iSCSI), you might require more than a single IP address for the storage target. The determination is based on the capabilities of your networking hardware.
  • With IP-based storage protocols (NFS and iSCSI) you channel multiple Ethernet ports together. NetApp refers to this function as a VIF. It is recommended that you create LACP VIFs over multimode VIFs whenever possible.
  • Use CAT 6 cabling rather than CAT 5
  • Enable flow control (should be set to receive on switches and transmit on iSCSI targets)
  • Enable Spanning Tree Protocol with either RSTP or portfast enabled. Spanning Tree Protocol (STP) is a network protocol that ensures a loop-free topology for any bridged LAN
  • Configure jumbo frames end to end, 9000 rather than 1500 MTU (a validation sketch follows this list)
  • Ensure Ethernet switches have the proper amount of port buffers and other internals to support iSCSI and NFS traffic optimally
  • Use link aggregation for NFS
  • Maximum of 2 TCP sessions per datastore for NFS (1 control session and 1 data session)
  • Ensure that each HBA is zoned correctly to both SPs if using FC
  • Create RAID LUNs according to the Applications vendors recommendation
  • Use Tiered storage to separate High Performance VMs from Lower performing VMs
  • Choose virtual disk formats as required: Thick Provision Eager Zeroed, Thick Provision Lazy Zeroed or Thin Provision
  • Choose RDMs or VMFS-formatted datastores depending on supportability and the application vendor’s and virtualisation vendor’s recommendations
  • Utilise VAAI (vStorage APIs for Array Integration) Supported by vSphere 5
  • No more than 15 VMs per Datastore
  • Extents are not generally recommended
  • Use De-duplication if you have the option. This will manage storage and maintain one copy of a file on the system
  • Choose the fastest storage ethernet or FC adaptor (Dependent on cost/budget etc)
  • Enable Storage I/O Control
  • VMware highly recommend that customers implement “single-initiator, multiple storage target” zones. This design offers an ideal balance of simplicity and availability with FC and FCoE deployments.
  • Whenever possible, it is recommended that you configure storage networks as a single network that does not route. This model helps ensure performance and provides a layer of data security.
  • Each VM creates a swap or pagefile that is typically 1.5 to 2 times the size of the amount of memory configured for each VM. Because this data is transient in nature, we can save a fair amount of storage and/or bandwidth capacity by removing this data from the datastore, which contains the production data. In order to accomplish this design, the VM’s swap or pagefile must be relocated to a second virtual disk stored in a separate datastore
  • It is the recommendation of NetApp, VMware, other storage vendors and VMware partners that the partitions of VMs and the partitions of VMFS datastores be aligned to the blocks of the underlying storage array. You can find more information on VMFS and guest O/S file system alignment in the documentation from the various vendors
  • Failure to align the file systems results in a significant increase in storage array I/O in order to meet the I/O requirements of the hosted VMs
  • Try using sDRS
  • Turn on Storage I/O Control (SIOC) to split up disk shares globally across all hosts accessing that datastore
  • Make sure your multipathing policy is correct: Active/Active arrays generally use Fixed, Active/Passive arrays use Most Recently Used, and then you have ALUA-capable arrays
  • Change HBA queue depths to 64 rather than the default 32 if required, and set the parameter Disk.SchedNumReqOutstanding to 64 to match (a host advanced setting, adjustable through vCenter)
  • VMFS and RDM are both good for Random Reads/Writes
  • VMFS and RDM are also good for sequential Reads/Writes of small I/O block sizes
  • VMFS best for sequential Reads/Writes at larger I/O block sizes
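
For the jumbo frames item in the list above, a quick end-to-end check from a Linux admin host is to ping the storage target with the do-not-fragment bit set: 8972 bytes of ICMP payload plus 28 bytes of headers gives a 9000-byte packet. (From the ESXi shell, vmkping -d -s 8972 does the same job.) The target address below is a placeholder; the sketch assumes Python 3.7+ and a Linux ping.

```python
# Sketch: verify the path MTU supports 9000-byte frames without fragmentation.
import subprocess

def jumbo_ok(target: str, payload: int = 8972) -> bool:
    """Return True if a 9000-byte, unfragmented ping reaches the target."""
    result = subprocess.run(
        ["ping", "-M", "do", "-s", str(payload), "-c", "3", target],
        capture_output=True, text=True)
    return result.returncode == 0

print("jumbo frames OK" if jumbo_ok("192.168.50.10") else "path MTU below 9000")
```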

Tune ESXi host CPU Configuration

Tuning Configurations

  • Deploy single-threaded applications on uniprocessor virtual machines, instead of on SMP virtual machines, for the best performance and resource use.
  • VMware advise against using CPU affinity as it generally constrains the scheduler and can cause an improperly balanced load
  • Use DRS where you can as this will balance the load for you
  • Don’t configure your VMs with more vCPUs than their workloads require. Configuring a VM with more vCPUs than it needs will cause additional, unnecessary CPU utilization due to the increased overhead relating to multiple vCPUs
  • Enable Hyperthreading in the BIOS. Check this in the BIOS and in vCenter, click on the host, select Configuration, select Properties and check that Hyperthreading is enabled
  • When dealing with NUMA systems, ensure that node interleaving is disabled in the BIOS. If node interleaving is set to enabled it essentially disables NUMA capability on that host
  • When possible, configure the number of vCPUs to be equal to or less than the number of physical cores on a single NUMA node. When the vCPU count fits within one node, the VM will get all of its memory from that NUMA node, resulting in lower memory access latency
  • Sometimes, for certain machines on undercommitted systems, it may be beneficial to schedule all of a VM’s vCPUs on the same socket, which gives that VM full access to a shared last-level cache rather than being spread across multiple processors. Set sched.cpu.vsmpConsolidate="true" in the VMX configuration file
  • Pay attention to the Manufacturers recommendations for resources, especially application multithreading support
  • Use processors which support Hardware-Assisted CPU Virtualization (Intel VT-x and AMD AMD-V)
  • Use processors which support Hardware-Assisted MMU Virtualization (Intel EPT and AMD RVI)
  • When configuring virtual machines, the total CPU resources needed by the virtual machines running on the system should not exceed the CPU capacity of the host. If the host CPU capacity is overloaded, the performance of individual virtual machines may degrade (a quick overcommit check follows this list)
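
A back-of-the-envelope check for the overcommit point above. The VM list and the 3:1 warning threshold are assumptions for the example, not a VMware rule; the right ratio depends entirely on the workloads.

```python
# Sketch: rough vCPU-to-physical-core overcommit ratio for one host.

physical_cores = 16                     # cores on the host (excluding HT threads)
vm_vcpus = {"web01": 4, "web02": 4, "db01": 8, "app01": 8, "app02": 4}

total_vcpus = sum(vm_vcpus.values())
ratio = total_vcpus / physical_cores
print(f"{total_vcpus} vCPUs on {physical_cores} cores -> {ratio:.2f}:1")
if ratio > 3:
    print("Warning: heavy overcommit; expect CPU ready time, check esxtop %RDY")
```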

Tune ESXi host networking configuration

Tuning Configurations

  • Use Network I/O Control to apply Limits, Shares and QoS priority tags to traffic
  • Team NICs across PCI cards and switches for complete redundancy
  • Using vDS switches gives you more features than the Standard Switch and minimises configuration time
  • Utilise NIC Teaming where possible to provide failover and extra bandwidth
  • Use Jumbo Frames where you can – MTU 9000 rather than 1500. Must be set the same end to end
  • Keep physical NIC firmware updated
  • Use VMXNET3 virtual network adapters where possible (the guest O/S must support them). VMXNET3 shares a ring buffer between the VM and the VMkernel and uses zero-copy, which saves CPU cycles, and it takes advantage of transmit packet coalescing to reduce address-space switching
  • DirectPath I/O may provide you a bump in network performance, but you really need to look at the use case. You can lose a lot of core functionality when using this feature, such as vMotion and FT (some special exceptions when running on UCS for vMotion) so you really need to look at the cost:benefit ratio and determine if it’s worth the tradeoffs
  • Enable Discovery Protocols CDP and LLDP for extra information on your networks
  • Make sure your NIC teaming policies on your Virtual switches match the correct policies on the physical switches
  • Make sure your physical switches support cross stack etherchannel if you are planning on using this in a fully redundant networking solution
  • Use static or ephemeral port bindings due to the deprecation of Dynamic Binding
  • Choose 10Gb Ethernet over 1Gb. This gives you NetQueue, a feature which uses multiple transmit and receive queues to allow I/O processing across multiple CPUs
  • Choose physical NICs with TCP checksum offload, which reduces the load on the physical CPU by allowing the NIC to perform checksum operations on network packets
  • Choose physical adapters with TCP segmentation offload, as this can reduce the CPU overhead involved with sending large amounts of TCP traffic (an ethtool check sketch follows this list)
  • To speed up packet handling, network adapters can be configured for direct memory access to high memory. This bypasses the CPU and allows the NIC direct access to memory
  • You can use DirectPath which allows a VM to directly access the physical NIC instead of using an emulated or paravirtual device however it is not compatible with certain features such as vMotion, Hot Add/Hot Remove, HA, DRS and Snapshots
  • SplitRx mode, used with VMXNET3 adapters, is an ESXi feature that uses multiple physical CPUs to process network packets received in a single network queue. It is configured individually on each NIC and is good for multicast-heavy environments such as stock exchanges and multimedia companies
  • Use VMCI if you have 2 VMs on the same host which require a high-speed communication channel which bypasses the guest or VMKernel networking stack
  • In a native environment, CPU utilization plays a significant role in network throughput. To process higher levels of throughput, more CPU resources are needed. The effect of CPU resource availability on the network throughput of virtualized applications is even more significant. Because insufficient CPU resources will limit maximum throughput, it is important to monitor the CPU utilization of high-throughput workloads.
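
To confirm the checksum and segmentation offload points above from inside a Linux guest (or on any Linux box with ethtool installed), you can parse the output of ethtool -k. The interface name below is a placeholder, and the sketch assumes Python 3.7+.

```python
# Sketch: report whether checksum and TCP segmentation offload are active.
import subprocess

WANTED = ("tx-checksumming", "rx-checksumming", "tcp-segmentation-offload")

def offload_status(iface: str = "eth0") -> dict:
    """Return the on/off state of selected offload features for one NIC."""
    out = subprocess.run(["ethtool", "-k", iface],
                         capture_output=True, text=True).stdout
    status = {}
    for line in out.splitlines():
        key, _, value = line.partition(":")
        if key.strip() in WANTED:
            status[key.strip()] = value.split()[0] if value.split() else "unknown"
    return status

print(offload_status("eth0"))   # e.g. {'tx-checksumming': 'on', ...}
```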

Identify appropriate BIOS and firmware setting requirements for optimal host performance

Appropriate BIOS and firmware settings

  • Make sure you have the most up to date firmware for your Servers including all 3rd party cards
  • Enable Hyperthreading. Note that you cannot enable Hyperthreading on a system with greater than 32 physical cores, because of the logical limit of 64 CPUs
  • Make sure the BIOS is set to enable all populated processor sockets and to enable all cores in each socket.
  • Enable “Turbo Boost” in the BIOS if your processors support it
  • Some NUMA-capable systems provide an option in the BIOS to disable NUMA by enabling node interleaving. In most cases you will get the best performance by disabling node interleaving (in other words, leaving NUMA enabled).
  • Hardware-Assisted CPU Virtualization (Intel VT-x and AMD AMD-V): the first generation of hardware virtualization assistance, VT-x from Intel and AMD-V from AMD, became available in 2006. These technologies automatically trap sensitive calls, eliminating the overhead required to do so in software. This allows the use of a hardware virtualization (HV) virtual machine monitor (VMM) as opposed to a binary translation (BT) VMM (the sketch after this list shows a quick way to confirm the flags are exposed).
  • Hardware-Assisted MMU Virtualization (Intel EPT and AMD RVI) Some recent processors also include a new feature that addresses the overheads due to memory management unit (MMU) virtualization by providing hardware support to virtualize the MMU. ESX 4.0 supports this feature in both AMD processors, where it is called rapid virtualization indexing (RVI) or nested page tables (NPT), and in Intel processors, where it is called extended page tables (EPT).
  • Cache prefetching mechanisms (sometimes called DPL Prefetch, Hardware Prefetcher, L2 Streaming Prefetch, or Adjacent Cache Line Prefetch) usually help performance, especially when memory access patterns are regular. When running applications that access memory randomly, however, disabling these mechanisms might result in improved performance.
  • ESX 4.0 supports Enhanced Intel SpeedStep® and Enhanced AMD PowerNow!™ CPU power management technologies that can save power when a host is not fully utilized. However, because these and other power-saving technologies can reduce performance in some situations, you should consider disabling them when performance considerations outweigh power considerations.
  • Disable C1E halt state in the BIOS.
  • Disable any other power-saving mode in the BIOS.
  • Disable any unneeded devices from the BIOS, such as serial and USB ports.
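
A quick way to confirm the BIOS is actually exposing the hardware-assist features mentioned above is to boot any Linux image on the host and look for the relevant CPU flags (vmx = Intel VT-x, svm = AMD-V, ept/npt = hardware MMU assist). A minimal sketch, assuming a Linux environment with /proc/cpuinfo available:

```python
# Sketch: report which hardware virtualization flags the CPU advertises.

def hw_virt_flags(path: str = "/proc/cpuinfo") -> set:
    """Return the subset of virtualization-related flags present in cpuinfo."""
    flags = set()
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return flags & {"vmx", "svm", "ept", "npt"}

print(hw_virt_flags() or "no hardware virtualization flags visible")
```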

Identify appropriate driver revisions required for optimal ESXi host performance

Check out the VMware HCL and (or) VMware KB 2030818 for recommended drivers and firmware for different vSphere versions.