Archive for Certification

What’s new in vSphere 5.5

I needed to go through these after realising my certifications were due to expire. I was able to take the VCP5-DCV Delta exam (VCP550D) with Pearson VUE, which is a little cheaper for those of you with existing qualifications. See the VMware site and log into your My VMware account to check the status of your existing qualifications.

A brief summary with key points on all features

  • Hot-Pluggable PCIe SSD Devices

PCIe stands for Peripheral Component Interconnect Express. These high-performance solid-state drives can be used for local storage in ESXi. Hot-add has always been possible with SAS and SATA drives and is now extended to support SSDs. This functionality is of great benefit to administrators, reducing downtime on a host in the event of a disk failure, or simply when adding an SSD drive. PCIe is a serial, point-to-point technology: each lane can transmit in both directions at the same time. Standard PCI is a parallel, shared-bus technology, so all devices contend for the same bus. Because bandwidth is not shared the same way in PCIe as in PCI, there is less bus congestion.

Reliable Memory Technology

This is a CPU hardware feature that ESXi can use to place the VMkernel in a region of memory that is reported as being more reliable. ESXi runs directly in memory, so protecting that memory and reducing the risk of memory errors increases the resiliency of the host. Hostd, the Initial Thread and the Watchdog are also protected. The vmware-hostd management service is the main communication channel between ESX/ESXi hosts and the VMkernel. The vmware-watchdog process watches over hostd and restarts it if it detects that hostd is no longer running.

Enhancements to CPU C-States

In vSphere 5.1 and earlier, the balanced policy for host power management leveraged only the performance state (P-state), which kept the processor running at a lower frequency and voltage. In vSphere 5.5, the deep processor power state (C-state) is also used, providing additional power savings. Reduced power consumption can also bring increased performance, because turbo mode frequencies on Intel chipsets can be reached more quickly while other CPU cores in the physical package are in deep C-states.

Virtual Machine Compatibility with VMware ESXi 5.5

  • LSI SAS support for Oracle Solaris 11 OS
  • Enablement for new CPU architectures
  • New advanced host controller interface (AHCI) – This new virtual SATA controller supports both virtual disks and CD-ROM devices and can connect up to 30 devices per controller, with a total of four controllers
  • Hardware-accelerated 3D graphics – Virtual shared graphics acceleration (vSGA) support inside a virtual machine. Existing support was limited to NVIDIA-based GPUs; with vSphere 5.5, vSGA support has been expanded to include both NVIDIA- and AMD-based GPUs

  • There are three supported rendering modes for a virtual machine configured with vSGA: automatic, hardware and software. The mode is selected by editing the settings of the VM

Graphics Acceleration for Linux Guests

VMware is the first to develop a new guest driver that accelerates the entire Linux graphics stack for modern Linux distributions. This means that any modern GNU/Linux distribution can package the VMware guest driver and provide out-of-the-box support for accelerated graphics without any additional tools or package installation

vCenter Single Sign-On

The following vCenter Single Sign-On enhancements have been made.

  • Simplified deployment – A single installation model for customers of all sizes is now offered.
  • Enhanced Microsoft Active Directory integration – The addition of native Active Directory support enables cross-domain authentication with one- and two-way trusts common in multidomain environments.
  • Architecture built from the ground up – This removes the requirement for a database and now delivers a multimaster authentication solution with built-in replication and support for multiple tenants.

vSphere Web Client

  • Full client support for Mac OS X is now available in the vSphere Web Client.
  • Administrators now can drag and drop objects from the center panel onto the vSphere inventory, enabling them to quickly perform bulk actions
  • Administrators can now select properties on a list of displayed objects and selected filters to meet specific search criteria
  • Recent Items – Similar to what you find on Windows desktops, this feature allows you to go back to recently accessed objects

vCenter Server Appliance

The previous embedded database had certain limitations which held back its adoption. The vCenter Server Appliance addresses this with a re-engineered, embedded vPostgres database that can now support as many as 100 vSphere hosts or 3,000 virtual machines (with appropriate sizing)

vSphere App HA

In earlier versions, vSphere HA offered virtual machine monitoring, which checks for the presence of “heartbeats” from VMware Tools as well as I/O activity from the virtual machine. In vSphere 5.5, VMware has introduced vSphere App HA. This new feature works in conjunction with vSphere HA host monitoring and virtual machine monitoring to further improve application uptime. vSphere App HA can be configured to restart an application service when an issue is detected, and can protect several commonly used, off-the-shelf applications. vSphere HA can also reset the virtual machine if the application fails to restart.

vSphere App HA uses VMware vFabric Hyperic to monitor applications. VMware vFabric Hyperic is an agent-based monitoring system that automatically collects metrics on the performance and availability of hardware resources, operating systems, middleware and applications in physical, virtualized and cloud environments. It requires the provisioning of two virtual appliances:

  • vSphere App HA virtual appliance stores and manages vSphere App HA policies.
  • vFabric Hyperic monitors applications and enforces vSphere App HA policies
  • Hyperic agents then need to be installed in the virtual machines containing applications that will be protected by vSphere App HA
  • Includes policies to manage timings and resetting options

vSphere HA Compatibility with DRS Anti-Affinity Rules

vSphere HA will now obey DRS anti-affinity rules when restarting virtual machines. If you have anti-affinity rules defined in DRS that keep selected virtual machines on separate hosts, vSphere HA will now keep to those rules when restarting virtual machines following a host failure

vSphere Data Protection

  • Direct-to-host emergency restore: vSphere Data Protection can be used to restore a virtual machine directly to a vSphere host without the need for vCenter Server and vSphere Web Client. This is especially helpful when using vSphere Data Protection to protect vCenter Server.
  • Backup and restore of individual virtual machine hard disks (.vmdk files): Individual .vmdk files can be selected for backup and restore operations.
  • Replication to EMC Avamar: vSphere Data Protection replicates backup data to EMC Avamar to provide offsite backup data storage for disaster recovery.
  • Flexible storage placement: When deploying vSphere Data Protection, separate datastores can be selected for the OS partition and backup data partition of the virtual appliance.
  • Mounting of existing backup data storage to new appliance: An existing vSphere Data Protection backup data partition can be mounted to a new vSphere Data Protection virtual appliance during deployment.
  • Scheduling granularity: Backup and replication jobs can be scheduled at specific times; for example, Backup Job 1 at 8:45 p.m., Backup Job 2 at 11:30 p.m. and Replication Job 1 at 2:15 a.m.

vSphere Big Data Extensions (BDE)

BDE is a new addition in vSphere 5.5 for VMware vSphere Enterprise Edition
and VMware vSphere Enterprise Plus Edition. BDE is a tool that enables administrators to deploy and manage Hadoop clusters on vSphere. BDE is
based on technology from Project Serengeti, the VMware open-source virtual Hadoop management tool.

  • Creates, deletes, starts, stops and resizes clusters
  • Controls resource usage of Hadoop clusters
  • Specifies physical server topology information
  • Manages the Hadoop distributions available to BDE users
  • Automatically scales clusters based on available resources and in response to other workloads on the vSphere cluster
  • Hadoop clusters can be protected easily using vSphere HA and VMware vSphere Fault Tolerance

Support for 62TB VMDK

The previous limit was 2TB minus 512 bytes; the new limit is 62TB. The maximum size of a virtual Raw Device Mapping (RDM) also increases, from 2TB minus 512 bytes to 62TB. Virtual machine snapshots also support this new size for delta disks that are created when a snapshot is taken of the virtual machine.

Microsoft Cluster Service (MSCS)

  • Microsoft Windows 2012
  • Round-robin path policy for shared storage – Changes were made to the SCSI locking mechanism used by MSCS when a failover of services occurs. With the new path policy it is irrelevant which path is used to place the SCSI reservation; any path can free the reservation.
  • iSCSI protocol for shared storage
  • Fibre Channel over Ethernet (FCoE) protocol for shared storage

16Gb End-to-End FC Support

In vSphere 5.5, VMware introduces 16Gb end-to-end FC support. Both the HBAs and array controllers can run at 16Gb as long as the FC switch between the initiator and target supports it.

PDL AutoRemove

Permanent device loss (PDL) is a situation which occurs when a disk device either fails or is removed from the vSphere host in an uncontrolled way. The host detects, based on SCSI sense codes, that the device has been permanently removed and will not return. When a device enters this PDL state, the vSphere host can stop directing any further, unnecessary I/O to it, which alleviates other conditions that might arise on the host as a result of that I/O. The PDL AutoRemove feature automatically removes a device from a host when it enters a PDL state. Because vSphere hosts have a limit of 255 disk devices per host, a device in a PDL state can no longer accept I/O but still occupies one of the available disk device slots; therefore, it is better to remove the device from the host.
PDL AutoRemove occurs only if there are no open handles left on the device. The auto-remove takes place when the last handle on the device closes. If the device recovers, or if it is re-added after having been inadvertently removed, it will be treated as a new device.

vSphere Replication

At the primary site, migrations now move the persistent state files (.psf), which contain pointers to the changed blocks, along with the VMDKs in the virtual machine’s home directory, removing the need for a full synchronization. This means that replicated virtual machines can now be moved between datastores, by vSphere Storage vMotion or vSphere Storage DRS, without incurring a replication penalty. Because the .psf files are retained, the virtual machine can be moved to the new datastore or directory while keeping its current replication data, and the same applies to the “fast suspend/resume” operation of moving an individual VMDK.

A new feature introduced in vSphere 5.5 enables retention of historical points in time. The old redo logs are not discarded; instead, they are retained and cleaned up on a schedule according to the multiple-points-in-time (MPIT) retention policy.

VAAI UNMAP Improvements

vSphere 5.5 introduces a new and simpler VAAI UNMAP/Reclaim command:

  • esxcli storage vmfs unmap
  • The ability to specify the reclaim size in blocks rather than as a percentage value; dead space can now be reclaimed in increments rather than all at once
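A hedged example of the new command as it might be run from the ESXi shell; the datastore label and reclaim-unit size below are placeholders, not values from the original post:

```shell
# Reclaim dead space on a VMFS5 datastore in 200-block increments
esxcli storage vmfs unmap -l Datastore1 -n 200

# -l/--volume-label (or -u/--volume-uuid) identifies the datastore
# -n/--reclaim-unit sets the number of VMFS blocks reclaimed per iteration
```

Running the command repeatedly with a modest reclaim unit keeps the temporary balloon file small, which is the point of the incremental approach described above.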

VMFS Heap Improvements

In vSphere 5.5, VMware introduces a much-improved heap eviction process, so there is no need for the larger heap size, which consumes memory. vSphere 5.5, with a maximum of 256MB of heap, enables vSphere hosts to access the entire address space of a 64TB VMFS.

vSphere Flash Read Cache

vSphere Flash Read Cache enables the pooling of multiple Flash-based devices into a single consumable vSphere construct called vSphere Flash Resource, which is consumed and managed in the same way as CPU and memory are done today in vSphere.
The vSphere Flash Read Cache infrastructure is responsible for integrating the vSphere hosts’ locally attached Flash-based devices into the vSphere storage stack. This integration delivers a Flash management platform that enables the pooling of Flash-based devices into a vSphere Flash Resource.

Link Aggregation Protocol Enhancements

  • Comprehensive load-balancing algorithm support – 22 new hashing algorithm options are available. For example, source and destination IP address and VLAN field can be used as the input for the hashing algorithm.
  • Support for multiple link aggregation groups (LAGs) – 64 LAGs per host and 64 LAGs per VMware vSphere VDS.
  • Because LACP configuration is applied per host, this can be very time consuming for large deployments. In this release, new workflows to configure LACP across a large number of hosts are made available through templates.

Traffic Filtering enhancements

The vSphere Distributed Switch now supports packet classification and filtering based on MAC source and destination address qualifiers, traffic type qualifiers (e.g. vMotion, Management, FT) and IP qualifiers (e.g. protocol, source IP, destination IP and port number).

Quality of Service Tagging

Two types of Quality of Service (QoS) marking/tagging are common in networking:

  • 802.1p Class of Service (CoS), applied to Ethernet/layer 2 frames
  • Differentiated Services Code Point (DSCP), applied to IP packets. In vSphere 5.5, DSCP marking support enables users to insert tags in the IP header. IP header–level tagging helps in layer 3 environments, where physical routers function better with an IP header tag than with an Ethernet header tag.

SR-IOV Enhancements

Single-root I/O virtualization (SR-IOV) is a standard that enables one PCI Express (PCIe) adapter to be presented as multiple, separate logical devices to virtual machines.

  • A new capability is introduced that enables users to communicate the port group properties defined on the vSphere standard switch (VSS) or VDS to the virtual functions. The new control path through VSS and VDS communicates the port group–specific properties to the virtual functions. For example, if promiscuous mode is enabled in a port group, that configuration is then passed to virtual functions, and the virtual machines connected to the port group will receive traffic from other virtual machines.

Enhanced Host Level Performance

  • An enhanced host-level packet capture tool is introduced. The packet capture tool is equivalent to the command-line tcpdump tool available on the Linux platform.
  • This tool is part of the vSphere platform and can be accessed through the vSphere host command prompt
  • Can capture dropped packets
  • Can trace the path of a packet with time stamp details
  • Can capture traffic on VSS and VDS
  • Captures packets at the following levels
    – Uplink
    – Virtual switch port
    – vNIC
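The tool being described here is pktcap-uw. As a hedged sketch of typical invocations (the device names and output path are placeholders):

```shell
# Capture traffic received on an uplink and write it to a .pcap file
pktcap-uw --uplink vmnic0 -o /tmp/vmnic0.pcap

# Capture traffic on a VMkernel interface
pktcap-uw --vmk vmk0

# Capture only dropped packets
pktcap-uw --capture Drop
```

The resulting .pcap files can be copied off the host and opened in Wireshark for analysis.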

40Gb NIC Support

vSphere 5.5 provides support for 40Gb NICs.  In 5.5 the functionality is limited to the Mellanox ConnectX-3 VPI adapters configured in Ethernet mode.

Maximums

  • 320 physical CPUs
  • 4TB of memory
  • 16 NUMA nodes
  • 4096 vCPUs per ESXi host

VCAP-DCA 5 Exam Experience

Here is my overview of how my VCAP-DCA 5 exam went. Hopefully the pointers bulleted below will be helpful.

Quick Overview

  • 26 Live labs consisting of several tasks to complete
  • 3.5 Hours to complete the exam
  • A short pre-exam survey on your VMware skills
  • You have some VMware Documentation to assist you

Things to remember

  • You need to be quick. I found 3.5 hours is very tight for time, giving you roughly 8 minutes per question
  • Try to memorise the admin password; it will save you flicking between screens
  • Use the icons to connect to servers; don’t RDP into vCenter and then use the client
  • If you can’t answer something, don’t waste time; move on to the next question
  • If you have set something running which is taking some time, don’t hang around; move on to the next task or question if you can
  • I found the lab sometimes wasn’t the fastest when changing between screens, but I understand there isn’t a lot anyone can do about this; it probably comes down to how fast the test centre’s connection is
  • You can only go backwards and forwards, so keep a note of the questions you want to return to on the pad you are given by the test centre
  • You have documentation available, but you don’t have much time to go raking through it unless you know exactly where something is
  • It is almost vital that you build your own lab to test out all the Blueprint points
  • Read as much documentation as you can
  • Read other people’s blogs and exam experiences
  • The Trainsignal videos are really useful as preparation for this exam
  • Read the questions and make sure you haven’t missed any of the tasks which they are asking you to do or not do!
  • The VMware Optimize and Scale class is also very useful but expensive
  • I found this to be one of the best exams I have taken. It was a real world exam and far more useful than simply answering multiple choice
  • Don’t panic, just imagine you are at your desk at work
  • Understand basic PowerShell
  • If you have any issues, report them quickly so the test center can contact VMware and let them know. I had one issue with my test and the lady was very quick in assisting me and letting me get back to the exam as quickly as possible

Microsoft Qualification Pathways

This may prove helpful to those of you who are undertaking qualifications with Microsoft or upgrading qualifications.

Pathways

  • Client
  • Server
  • Database
  • Developer

Using VMware PowerCLI to manage VMware vSphere Update Manager Tasks

Requirements

  • PowerCLI 4.1 or higher
  • Update Manager PowerCLI Plugin
  • .NET 2.0 SP1
  • Windows PowerShell 2.0/3.0

Procedure

Install Update Manager PowerCLI

  1. Download the Update Manager PowerCLI plugin (you will need to log in)
  2. https://my.vmware.com/group/vmware/get-download?downloadGroup=VUM51PCLI
  3. Navigate to the directory containing the Update Manager PowerCLI installation files.
  4. Run VMware-UpdateManager-Pscli-5.0.0-432001. Note that the version may be different for your installation.
  5. If prompted with a User Access Control warning, click Yes.
  6. On the Welcome screen, click Next.
  7. Accept the License Agreement, click Next.
  8. Click Install.
  9. Click Finish once the installation completes.
  10. Open the vSphere PowerCLI console from the Windows Start menu or by clicking the vSphere PowerCLI shortcut icon.
  11. Type Connect-VIServer to connect to your vCenter Server
  12. Ignore the yellow certificate warnings or you can type the command to ignore them
  13. Type Get-Command -PSSnapin VMware.VumAutomation to list all the cmdlets associated with this snap-in

To create Patch Baselines

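The screenshot that originally illustrated this step is gone, so here is a hedged sketch using the Update Manager cmdlets; the baseline name and search phrase are invented for illustration:

```powershell
# Create a fixed (static) patch baseline from a set of patches.
# The name and search phrase below are examples only.
$patches  = Get-Patch -SearchPhrase "ESXi 5.1"
$baseline = New-PatchBaseline -Static -Name "Example Host Patches" -IncludePatch $patches
```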

Attaching and Detaching Baselines

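A hedged sketch of the attach/detach cmdlets; the baseline name and host name are placeholders:

```powershell
# Attach a baseline to a host, then detach it again
$baseline = Get-Baseline -Name "Example Host Patches"
Attach-Baseline -Baseline $baseline -Entity (Get-VMHost "esx01.lab.local")
Detach-Baseline -Baseline $baseline -Entity (Get-VMHost "esx01.lab.local")
```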

Scanning a Virtual Machine

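A hedged sketch of scanning a VM for patch compliance; the VM name is a placeholder:

```powershell
# Scan a virtual machine against its attached baselines
Scan-Inventory -Entity (Get-VM "TestVM") -UpdateType VmPatch
```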

To verify whether a virtual machine has at least one baseline with Unknown compliance status attached to it and start a scan

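The check described above can be sketched as follows (the VM name is a placeholder):

```powershell
# If any attached baseline reports Unknown compliance, trigger a scan
$vm = Get-VM "TestVM"
if (Get-Compliance -Entity $vm | Where-Object { $_.Status -eq "Unknown" }) {
    Scan-Inventory -Entity $vm -UpdateType VmPatch
}
```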

Staging Patches

Staging can be performed only for hosts, clusters, and datacenters.

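A hedged sketch of staging patches to a host ahead of remediation; the names are placeholders:

```powershell
# Copy the patches in a baseline onto the host without installing them
$baseline = Get-Baseline -Name "Example Host Patches"
Stage-Patch -Entity (Get-VMHost "esx01.lab.local") -Baseline $baseline
```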

Remediating Inventory Objects

You can remediate virtual machines, virtual appliances, clusters, and hosts.

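A hedged sketch of remediating a host against a baseline; the names are placeholders, and note that the host will typically enter maintenance mode and may reboot:

```powershell
# Remediate a host against a baseline without an interactive prompt
$baseline = Get-Baseline -Name "Example Host Patches"
Remediate-Inventory -Entity (Get-VMHost "esx01.lab.local") -Baseline $baseline -Confirm:$false
```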

Downloading Patches and Scanning Objects

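A hedged sketch of downloading new patch data and rescanning; the host name is a placeholder:

```powershell
# Download new patch data into the Update Manager repository,
# then rescan a host against the updated repository
Sync-Patch
Scan-Inventory -Entity (Get-VMHost "esx01.lab.local") -UpdateType HostPatch
```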

VMware Link

http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-update-manager-powercli-50-inst-admg.pdf

Understand appropriate use cases for CPU affinity

What is CPU Affinity?

By specifying a CPU affinity setting for each virtual machine, you can restrict the assignment of virtual machines to a subset of the available processors in multiprocessor systems. By using this feature, you can assign each virtual machine to processors in the specified affinity set.

CPU affinity specifies virtual machine-to-processor placement constraints and is different from the relationship created by a VM-VM or VM-Host affinity rule, which specifies virtual machine-to-virtual machine host placement constraints.
In this context, the term CPU refers to a logical processor on a hyperthreaded system and refers to a core on a non-hyperthreaded system.

The CPU affinity setting for a virtual machine applies to all of the virtual CPUs associated with the virtual machine and to all other threads (also known as worlds) associated with the virtual machine. Such virtual machine threads perform processing required for emulating mouse, keyboard, screen, CD-ROM, and miscellaneous legacy devices.

By setting a CPU affinity on the virtual machine you are limiting the available CPUs on which the virtual machine can run. It does not dedicate that CPU to that virtual machine and therefore does not restrict the CPU scheduler from using that CPU for other virtual machines

Problems with CPU Affinity

In some cases, such as display-intensive workloads, significant communication might occur between the virtual CPUs and these other virtual machine threads. Performance might degrade if the virtual machine’s affinity setting prevents these additional threads from being scheduled concurrently with the virtual machine’s virtual CPUs. Examples of this include a uniprocessor virtual machine with affinity to a single CPU or a two-way SMP virtual machine with affinity to only two CPUs.

Consider your resource management needs before you enable CPU affinity on hosts using hyperthreading. For example, if you bind a high priority virtual machine to CPU 0 and another high priority virtual machine to CPU 1, the two virtual machines have to share the same physical core. In this case, it can be impossible to meet the resource demands of these virtual machines. Ensure that any custom affinity settings make sense for a hyperthreaded system

For the best performance

When you use manual affinity settings, VMware recommends that you include at
least one additional physical CPU in the affinity setting to allow at least one of the virtual machine’s threads to be scheduled at the same time as its virtual CPUs. Examples of this include

  • A uniprocessor virtual machine with affinity to at least two CPUs
  • A two-way SMP virtual machine with affinity to at least three CPUs

Assign a Virtual Machine to a Specific Processor
Using CPU affinity, you can assign a virtual machine to a specific processor. This allows you to restrict the assignment of virtual machines to a specific available processor in multiprocessor systems.

Procedure

  • In the vSphere Client inventory panel, select a virtual machine and select Edit Settings.
  • Select the Resources tab and select Advanced CPU
  • Click the Run on processor(s) button
  • Select the processors where you want the virtual machine to run and click OK
  • If you cannot see this option, it is because the host is in a DRS cluster; the CPU affinity “Run on processor” feature is not available because DRS manages resources!
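For reference, the same constraint can also be set through the vSphere API from PowerCLI. This is a hedged sketch: the VM name and CPU numbers are placeholders, and the same DRS restriction applies:

```powershell
# Pin a VM's scheduling to logical CPUs 2 and 3 via a ReconfigVM call
$vm   = Get-VM "TestVM"
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.CpuAffinity = New-Object VMware.Vim.VirtualMachineAffinityInfo
$spec.CpuAffinity.AffinitySet = 2,3
$vm.ExtensionData.ReconfigVM($spec)
```

Clearing the AffinitySet removes the constraint again, which is usually the safer long-term state for the reasons discussed above.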

Use cases for CPU Affinity

  • Cisco’s Unity

Cisco Unity messaging is a real-time application, which makes it more difficult to virtualize than traditional data-centric applications, such as database and email servers. (For example, to support 144 concurrent voice sessions, Cisco Unity messaging must place 7,200 packets on the wire at a precise 20 ms interval.) Delivering this level of performance in a reliable, predictable, and serviceable manner requires some concessions, primarily surrounding CPU Affinity

Read this article on CPU Affinity

http://frankdenneman.nl/2011/01/11/beating-a-dead-horse-using-cpu-affinity/

Configure an Auto Deploy Reference Host

Introduction

In an environment where no state is stored on the host, a reference host helps you set up multiple hosts with the same configuration. You configure the reference host with the logging, coredump, and other settings that you want, save the host profile, and write a rule that applies the host profile to other hosts as needed.

You can configure the storage, networking, and security settings on the reference host and set up services such as syslog and NTP. The exact setup of your reference host depends on your environment, but you might consider the following customization.

Auto Deploy Reference Host Setup

Configuring an Auto Deploy Reference Host

  • vSphere Client

The vSphere Client supports setup of networking, storage, security, and most other aspects of an ESXi host. You can completely set up your environment and export the host profile for use by Auto Deploy.

  • vSphere Command Line Interface

You can use vCLI commands for setup of many aspects of your host. vCLI is especially suitable for configuring some of the services in the vSphere environment. Commands include vicfg-ntp (set up an NTP server), esxcli system syslog (set up a syslog server), and vicfg-route (set up the default route).
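For example, the syslog part of a reference host configuration can be sketched with esxcli; the collector address below is a placeholder:

```shell
# Point the host's syslog at a remote collector and apply the change
esxcli system syslog config set --loghost='tcp://10.1.1.5:514'
esxcli system syslog reload
```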

  • Host Profile Interface

You can either set up a host with vSphere Client or vCLI and save the host profile for that host, or you can configure the host profiles directly with the Host Profiles interface in the vSphere Client

Provision/Reprovision ESXi Hosts using AutoDeploy

Provisioning and Reprovisioning

Provisioning a host that has never been provisioned with Auto Deploy (first boot) differs from subsequent boot processes. You must prepare the host, define the image using the Image Builder PowerCLI, and fulfill all other prerequisites before you can provision the host.

vSphere Auto Deploy supports multiple reprovisioning options. You can perform a simple reboot or reprovision with a different image or a different host profile.

Provisioning for the first time

Subsequent boot of an AutoDeployed ESXi Host

Reprovisioning

The following reprovisioning operations are available.

  • Simple reboot.
  • Reboot of hosts for which the user answered questions during the boot operation.
  • Reprovision with a different image profile.
  • Reprovision with a different host profile.

Test and Repair Rule Compliance

  • When you add a rule to the Auto Deploy rule set or make changes to one or more rules, unprovisioned hosts that you boot are automatically provisioned according to the new rules. For all other hosts, Auto Deploy applies the new rules only when you test their rule compliance and perform remediation.
    This task assumes that your infrastructure includes one or more ESXi hosts provisioned with Auto Deploy, and that the host on which you installed VMware PowerCLI can access those ESXi hosts.

Prerequisites

  • Install VMware PowerCLI and all prerequisite software.
  • If you encounter problems running PowerCLI cmdlets, consider changing the execution policy.

Procedure: changing the host profile used in the rule

  • Check which Auto Deploy rules are currently available. The system returns the rules and the associated items and patterns
  • Get-DeployRule
  • Make a change to one of the available rules, for example, you might change the image profile and the name of the rule. You cannot edit a rule already added to a rule set. Instead, you copy the rule and replace the item you want to change.
  • Copy-DeployRule -DeployRule testruleimageprofile -ReplaceItem DACVESX002_Host_Profile
  • Verify that the host that you want to test rule set compliance for is accessible.
    Get-VMHost -Name 10.1.1.100
  • Test the rule set compliance for that host and bind the return value to a variable for later use.
  • $tr = Test-DeployRuleSetCompliance 10.1.1.100
  • Examine the differences between what is in the rule set and what the host is currently using by typing $tr.itemlist. The system returns a table of current and expected items.
  • Remediate the host to use the revised rule set the next time you boot the host.
  • Repair-DeployRuleSetCompliance $tr

What to do next

If the rule you changed specified the inventory location, the change takes effect immediately. For all other changes, boot your host to have Auto Deploy apply the new rule and to achieve compliance between the rule set and the host.

Please see Pages 81-85 of the vSphere Installation and Setup Guide

Configure Bulk Licensing

You can use the vSphere Client or ESXi Shell to specify individual license keys, or you can set up bulk licensing by using PowerCLI cmdlets. Bulk licensing works for all ESXi hosts, but is especially useful for hosts provisioned with Auto Deploy.

Assigning license keys through the vSphere Client or assigning licensing by using PowerCLI cmdlets functions differently as shown in the table below

[Table: licensing behaviour via the vSphere Client vs. PowerCLI cmdlets]

Procedure

Demo

  • Connect to vCenter and create the following two variables
  • $licenseDataManager = Get-LicenseDataManager
  • $hostContainer = Get-Datacenter -Name DataCenterName

  • Note this is as far as I can go as I don’t have any license keys 🙂
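For completeness, the rest of the sequence as described in the vSphere Installation and Setup Guide looks roughly like this; the datacenter name and license key are placeholders:

```powershell
# Associate license data with a container so that hosts added to it
# are licensed automatically. The key below is a placeholder.
$licenseDataManager = Get-LicenseDataManager
$hostContainer      = Get-Datacenter -Name DataCenterName

$licenseData     = New-Object VMware.VimAutomation.License.Types.LicenseData
$licenseKeyEntry = New-Object VMware.VimAutomation.License.Types.LicenseKeyEntry
$licenseKeyEntry.TypeId     = "vmware-vsphere"
$licenseKeyEntry.LicenseKey = "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"
$licenseData.LicenseKeys   += $licenseKeyEntry

$licenseDataManager.UpdateAssociatedLicenseData($hostContainer.Uid, $licenseData)
$licenseDataManager.QueryAssociatedLicenseData($hostContainer.Uid)
```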

Utilise AutoDeploy cmdlets to deploy ESXi Hosts

Introduction

When you start a physical host set up for Auto Deploy, Auto Deploy uses a PXE boot infrastructure in conjunction with vSphere host profiles to provision and customize that host. No state is stored on the host itself, instead, the Auto Deploy server manages state information for each host.

  • The ESXi host’s state and configuration is run in memory
  • When the host is shutdown the state information is cleared from memory
  • Based on PXE Boot environments
  • Works with Image Builder, vCenter Server and Host Profiles
  • Eliminates the need for a boot device
  • Common image across all hosts

With Auto Deploy, the state information that was previously stored on a boot device is now managed by vCenter Server

Autodeploy Architecture

What does what?

 Rules engine

You specify the behavior of the Auto Deploy server by using a set of rules written in Power CLI. The Auto Deploy rule engine checks the rule set for matching host patterns to decide which items (image profile, host profile, or vCenter Server location) to provision each host with.

PowerCLI cmdlets are used to set, evaluate and update image profile and host profile rules

The Rules engine maps software images and host profiles to hosts based on the attributes of the host. For example

  • Rules can be based on IP or MAC Address
  • The -AllHosts option can be used for every host

What’s in the Rules engine?

What else is required?

Boot Process

AutoDeploy First Boot Process

AutoDeploy cmdlets

Procedure

  • Log into PowerCLI and follow the steps below
  • Note: be careful with syntax and case sensitivity

Demo

  • Log into PowerCLI
  • Type add-esxsoftwaredepot E:\Depot\VMware-ESXi-5.1.0-799733-depot.zip
  • Type get-esximageprofile
  • Type new-deployrule -name testruleimageprofile -item VMware-ESXi-5.1.0-799733-standard -allhosts

The above commands add a software depot, list the ESXi image profiles, then create a deployment rule named “testruleimageprofile” that uses the “VMware-ESXi-5.1.0-799733-standard” image profile (or type in a custom profile you have created) and applies the rule to all hosts, i.e. any ESXi host that boots from it.

  • Or
  • Log into PowerCLI
  • Type add-esxsoftwaredepot E:\Depot\VMware-ESXi-5.1.0-799733-depot.zip
  • Type get-esximageprofile
  • Type new-deployrule -name testruleimageprofile -item “ESXi-5.1.0-799733-standard”,”Cluster”,”DACVESX001 Host Profile” -pattern “ipv4=10.1.1.100-10.1.1.105”

The above commands add a software depot, list the ESXi image profiles, then create a deployment rule named “testruleimageprofile” that uses the “ESXi-5.1.0-799733-standard” image profile, the cluster named “Cluster” and the host profile “DACVESX001 Host Profile”, with a pattern applying the rule to the IP range 10.1.1.100-10.1.1.105.

  • Press Enter and the new rule is displayed

  • Add the second cluster rule

  • Once the deployment rules have been created successfully, add them to the rule set by using the Add-DeployRule cmdlet. The following example adds the two deployment rules created previously
  • By default, deploy rules are added to the active rule set. To add rules to the working rule set instead, include the -NoActivate flag when using the Add-DeployRule cmdlet.

  • Use Get-DeployRuleSet to verify the rules were created

  • When the deployment rules have been added to the working rule set successfully, vSphere Auto Deploy will commence copying VIBs to the Auto Deploy server as required. In our case the VIBs associated with Brocade will be copied
  • Type Exit to Quit PowerCLI

Install the Auto Deploy Server

What is Auto Deploy?

vSphere Auto Deploy can provision hundreds of physical hosts with ESXi software. You can specify the image to deploy and the hosts to provision with the image. Optionally, you can specify host profiles to apply to the hosts, and a vCenter Server folder or cluster for each host.

When a physical host set up for Auto Deploy is turned on, Auto Deploy uses a PXE boot infrastructure in conjunction with vSphere host profiles to provision and customize that host. No state is stored on the host itself. Instead, the Auto Deploy server manages state information for each host

Auto Deploy stores the information for the ESXi hosts to be provisioned in different locations. Information about the location of image profiles and host profiles is initially specified in the rules that map machines to image profiles and host profiles. When a host boots for the first time, the vCenter Server system creates a corresponding host object and stores the information in the database.

AutoDeploy Requirements

  • DHCP
  • DHCP option 66: FQDN or IP address of the TFTP server
  • DHCP option 67: undionly.kpxe.vmw-hardwired – the name of the gPXE boot file that the TFTP server directs the host to
  • Router configuration – an IP helper address so that DHCP broadcasts are forwarded if the hosts are on a different subnet from the DHCP server
  • PXE
  • TFTP
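As an illustrative sketch (not from the original post), the two DHCP options map onto next-server and filename in an ISC dhcpd configuration; all addresses here are placeholders:

```
subnet 10.1.1.0 netmask 255.255.255.0 {
  range 10.1.1.100 10.1.1.200;
  next-server 10.1.1.5;                     # option 66: TFTP server address
  filename "undionly.kpxe.vmw-hardwired";   # option 67: gPXE boot file name
}
```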

Installation Instructions

  • Attach the vCenter ISO
  • Select AutoDeploy

  • Click Next

  • Click Next to the End User Patent Agreement

  • Click I accept to the Licensing agreement

  • Check Auto Deploy repository directory and repository maximum size

  • Put in vCenter Information

  • Trust the SSL Certificate

  • Check the ports

  • Check how your server is identified on the network

  • Finish