
VMware Hosts “Out of Sync” message on vDS


The Problem

A host’s VDS Status says Out of Sync in the Networking View

If network connectivity is interrupted between the vCenter Server and one or more hosts, a synchronization interval may be missed, resulting in this alert being displayed. This type of interruption can occur during vCenter Server service restarts, vCenter Server reboots, ESX/ESXi host reboots, or network maintenance.


The Solution

If vCenter Server or an ESX/ESXi host has been recently restarted, this message is benign and can be safely ignored. Within several minutes, the host’s vNetwork Distributed Switch information should synchronize with vCenter Server, and the warning clears.
To manually synchronize the host vDS information from the vSphere Client:
  1. In the Inventory section, click Home > Networking.
  2. Select the vDS displaying the alert and then click the Hosts tab.
  3. Right-click the host displaying the Out of sync warning and then click Rectify vNetwork Distributed Switch Host.

To manually synchronize the host vDS information from the vSphere web client (vSphere 5.5):

  1. Click the affected host in the Host inventory tab.
  2. Click the Manage tab.
  3. Click Networking.
  4. Click Virtual Switches.
  5. Click the out-of-sync Virtual Distributed Switch in the list of virtual switches.
  6. A new button with an icon of a server and a red icon of a switch appears. Click this button to synchronize the referenced distributed virtual switch. The synchronization task appears in the Running Tasks window, where you can monitor its progress.


Given a set of network requirements, identify the appropriate distributed switch technology to use


Switch Options

Essentially, the choice comes down to cost, manageability, familiarity, and the business requirements for the features each option provides. I have attached a link below to a very handy comparison of the vSphere switches against the Cisco Nexus 1000V, as there are too many features to cover in a blog post!

  • Standard Switch

The VMware vSphere Standard Switch (VSS) is the base-level virtual networking alternative. It extends the familiar appearance, configuration, and capabilities of the standard virtual switch (vSwitch) in VMware vSphere 5.

Available in all vSphere license editions

  • Distributed Switch

The VMware vSphere Distributed Switch (VDS) extends the feature set of the VMware Standard Switch, while simplifying network provisioning, monitoring, and management through an abstracted, single distributed switch representation of multiple VMware ESX and VMware ESXi™ servers in a VMware data center. VMware vSphere 5 includes significant advances in virtual switching, providing enhanced monitoring, troubleshooting and Network I/O Control (NIOC) features. The vSphere Distributed Switch adds flexibility to the I/O resource allocation process by introducing user-defined network resource pools. These features help network administrators manage and troubleshoot their virtual infrastructure using familiar tools, and provide advanced capabilities to manage traffic granularly.

Enterprise Plus License

  • Cisco Nexus 1000v

Cisco Nexus 1000V Series Switches are the result of a Cisco and VMware collaboration building on the VMware vNetwork third-party vSwitch API of VMware VDS and the industry-leading switching technology of the Cisco Nexus Family of switches. Featuring the Cisco® NX-OS Software data center operating system, the Cisco Nexus 1000V Series extends the virtual networking feature set to a level consistent with physical Cisco switches and brings advanced data center networking, security, and operating capabilities to the VMware vSphere environment. It provides end-to-end physical and virtual network provisioning, monitoring, and administration with virtual machine-level granularity using common and existing network tools and interfaces. The Cisco Nexus 1000V Series transparently integrates with VMware vCenter™ Server and VMware vCloud™ Director to provide a consistent virtual machine provisioning workflow while offering features well suited for data center-class applications, VMware View, and other mission-critical virtual machine deployments.

The Cisco Nexus 1000V is generally used in large enterprises where the management of firewalls, core switches and access switches is in the control of the network administrators. While management of the VMware vSphere Distributed Switch falls within the domain of the vSphere administrators, a Cisco Nexus 1000V makes it possible to completely separate the management of the virtual switches and hand it over to the network administrators, all without granting the network administrators access to the rest of the vSphere platform.

Cisco Licensed

  • IBM 5000V

The IBM System Networking Distributed Virtual Switch 5000V is an advanced, feature-rich distributed virtual switch for VMware environments with policy-based virtual machine (VM) connectivity. The IBM Distributed Virtual Switch (DVS) 5000V enables network administrators familiar with IBM System Networking switches to manage it just like an IBM physical switch, using advanced networking, troubleshooting and management features, so the virtual switch is no longer hidden and difficult to manage.

Support for Edge Virtual Bridging (EVB) based on the IEEE 802.1Qbg standard enables scalable, flexible management of networking configuration and policy requirements per VM and eliminates many of the networking challenges introduced with server virtualization. The IBM DVS 5000V works with VMware vSphere 5.0 and beyond and interoperates with any 802.1Qbg-compliant physical switch to enable switching of local VM traffic in the hypervisor or in the upstream physical switch.

IBM Licensed

Cisco document comparing the vSphere Standard Switch, Distributed Switch and Nexus 1000V (the IBM 5000V is not included)


IBM 5000V Overview Document


Describe the relationship between vDS and vSS


vSphere Standard Switch Architecture

You can create abstracted network devices called vSphere standard switches. A standard switch can:

  1. Route traffic internally between virtual machines and link to external networks.
  2. Combine the bandwidth of multiple network adapters and balance communications traffic among them.
  3. Handle physical NIC failover.
  4. Provide a default number of logical ports, which for a standard switch is 120. You can connect one network adapter of a virtual machine to each port. Each uplink adapter associated with a standard switch uses one port.
  5. Have one or more port groups assigned to it. Each logical port on the standard switch is a member of a single port group.

When two or more virtual machines are connected to the same standard switch, network traffic between them is routed locally. If an uplink adapter is attached to the standard switch, each virtual machine can access the external network that the adapter is connected to.

vSphere standard switch settings control switch-wide defaults for ports, which can be overridden by port group settings for each standard switch. You can edit standard switch properties, such as the uplink configuration and the number of available ports.
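The standard-switch building blocks described above can also be created from the ESXi command line. The following is a minimal sketch for an ESXi 5.x host; "vSwitch1", "vmnic1" and "VM Network 2" are example names, so substitute your own.

```shell
# Create a standard vSwitch, attach a physical uplink, and add a port group
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name="VM Network 2" --vswitch-name=vSwitch1

# Verify the switch, its uplinks and port groups
esxcli network vswitch standard list
```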

Standard Switch


vSphere Distributed Switch Architecture

A vSphere distributed switch functions as a single switch across all associated hosts. This enables you to set network configurations that span all member hosts, and allows virtual machines to maintain a consistent network configuration as they migrate across multiple hosts.

Like a vSphere standard switch, each vSphere distributed switch is a network hub that virtual machines can use.

  • Enterprise Plus Licensed feature only
  • VMware vCenter owns the configuration of the distributed switch
  • Distributed switches can support up to 350 hosts
  • You configure a Distributed switch on vCenter rather than individually on each host
  • Provides support for Private VLANs
  • Enable networking statistics and policies to migrate with VMs during vMotion
  • A distributed switch can forward traffic internally between virtual machines or link to an external network by connecting to physical Ethernet adapters, also known as uplink adapters.
  • Each distributed switch can also have one or more distributed port groups assigned to it.
  • Distributed port groups group multiple ports under a common configuration and provide a stable anchor point for virtual machines connecting to labeled networks.
  • Each distributed port group is identified by a network label, which is unique to the current datacenter. A VLAN ID, which restricts port group traffic to a logical Ethernet segment within the physical network, is optional.
  • Network resource pools allow you to manage network traffic by type of network traffic.
  • In addition to vSphere distributed switches, vSphere 5 also provides support for third-party virtual switches.
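Because vCenter owns the vDS configuration, a host's view of it is read-only, but you can inspect it locally. A quick check from the ESXi 5.x shell:

```shell
# List the distributed switches this host participates in, including
# dvUplink mappings and in-use dvPorts (read-only; the vDS configuration
# itself is owned and pushed by vCenter Server)
esxcli network vswitch dvs vmware list
```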


TCP/IP Stack at the VMkernel Level

The VMware VMkernel TCP/IP networking stack provides networking support in multiple ways for each of the services it handles.

The VMkernel TCP/IP stack handles the following services and features, for both standard and distributed virtual switches:

  • iSCSI as a virtual machine datastore
  • iSCSI for the direct mounting of .ISO files, which are presented as CD-ROMs to virtual machines.
  • NFS as a virtual machine datastore.
  • NFS for the direct mounting of .ISO files, which are presented as CD-ROMs to virtual machines.
  • Migration with vMotion.
  • Fault Tolerance logging.
  • Port-binding for vMotion interfaces.
  • Provides networking information to dependent hardware iSCSI adapters.
  • If you have two or more physical NICs for iSCSI, you can create multiple paths for the software iSCSI by configuring iSCSI Multipathing.
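The iSCSI multipathing point above comes down to binding multiple VMkernel ports to the software iSCSI adapter. A hedged sketch for ESXi 5.x, where "vmk1"/"vmk2" are example VMkernel ports (each backed by a different physical NIC) and "vmhba33" is an example software iSCSI adapter name:

```shell
# Bind two VMkernel ports to the software iSCSI adapter for multipathing
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Confirm the port bindings
esxcli iscsi networkportal list --adapter=vmhba33
```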

Data Plane and Management Plane

vSphere network switches can be broken into two logical sections. These are the data plane and the management plane.

  • The data plane implements the actual packet switching, filtering, tagging, etc.
  • The management plane is the control structure used to allow the operator to configure the data plane functionality.
  • With the vSphere Standard Switch (VSS), the data plane and management plane are each present on each standard switch. In this design, the administrator configures and maintains each VSS on an individual basis.

Virtual Standard Switch Control and Data Plane


With the release of vSphere 4.0, VMware introduced the vSphere Distributed Switch. VDS eases the management burden of per host virtual switch configuration by treating the network as an aggregated resource. Individual host-level virtual switches are abstracted into a single large VDS that spans multiple hosts at the Datacenter level. In this design, the data plane remains local to each VDS, but the management plane is centralized with vCenter Server acting as the control point for all configured VDS instances.

Virtual Distributed Switch Control and Data Plane


Configure vSS and vDS Settings Using Command Line Tools


Valid Commands

Note: With the release of 5.0 and 5.1, the majority of the legacy esxcfg-*/vicfg-* commands have been migrated over to esxcli. At some point, hopefully in the not-too-distant future, esxcli will reach full parity and the esxcfg-*/vicfg-* commands will be completely deprecated and removed, including the esxupdate/vihostupdate utilities.

  • esxcfg-nics
  • vicfg-nics
  • esxcfg-route
  • vicfg-route
  • esxcfg-vmknic
  • vicfg-vmknic
  • esxcfg-vswitch
  • vicfg-vswitch
  • esxcli network nic
  • esxcli network interface
  • esxcli network vswitch
  • esxcli network ip
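A few read-only examples from the esxcli network namespaces listed above (ESXi 5.x; these only display information, so they are safe to run):

```shell
esxcli network nic list                  # physical NICs (vmnics) and link state
esxcli network ip interface list         # VMkernel interfaces (vmk ports)
esxcli network ip interface ipv4 get     # their IPv4 configuration
esxcli network vswitch standard list     # standard vSwitches on the host
esxcli network vswitch dvs vmware list   # distributed switches the host is a member of
```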

ESXCLI Network Namespaces




ESXCLI Network Namespace Examples


vCLI Poster of example commands


Migrate a vSS network to a Hybrid or vDS Solution


Hybrid vSS/vDS/Nexus Virtual Switch Environments

Each ESX host can concurrently operate a mixture of virtual switches as follows:

  • One or more vNetwork Standard Switches
  • One or more vNetwork Distributed Switches
  • A maximum of one Cisco Nexus 1000V (VEM or Virtual Ethernet Module).

Note that physical NICs (vmnics) cannot be shared between virtual switches (i.e. each vmnic can only be assigned to one switch at any one time).
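Because a vmnic can belong to only one switch at a time, moving one between switches is an explicit unlink-then-link operation. A sketch using the legacy esxcfg-vswitch command on an ESXi host ("vmnic2", "vSwitch1" and "dvSwitch" are example names, and the dvPort ID placeholder must be a free uplink port on your vDS):

```shell
# Unlink vmnic2 from the standard switch it currently belongs to
esxcfg-vswitch -U vmnic2 vSwitch1

# Link it to a dvUplink port on the distributed switch
esxcfg-vswitch -P vmnic2 -V <dvport-id> dvSwitch
```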

Examples of Distributed switch configurations

Single vDS

Migrating the entire vSS environment to a single vDS represents the simplest deployment and administration model, as shown in the picture below. All VM networking plus VMkernel and service console ports are migrated to the vDS. The NIC teaming policies configured on the DV port groups can isolate and direct traffic down the appropriate dvUplinks (which map to individual vmnics on each host).


Hybrid vDS and vSS

The picture below shows an example environment where the VM networking is migrated to a vDS, but the Service Console and VMkernel ports remain on a vSS. This scenario might be preferred for some environments where the NIC teaming policies for the VMs are isolated
from those of the VMkernel and Service Console ports. For example, in the picture, the vmnics and VM networks on vSS-1 could be migrated to vDS-0 while vSS-0 could remain intact and in place.
In this scenario, VMs can still take advantage of Network VMotion as they are located on dv Port Groups on the vDS.


Multiple vDS

Hosts can be added to multiple vDSs as shown below (two are shown, but more could be added, with or without vmnic-to-dvUplink assignments). This configuration might be used to:

  • Retain traffic separation when attached to access ports on physical switches (i.e. no VLAN tagging and switchports are assigned to a single VLAN).
  • Retain switch separation but use advanced vDS features for all ports and traffic types.


Planning the Migration to vDS

Migration from a vNetwork Standard Switch only environment to one featuring one or more vNetwork Distributed Switches can be accomplished in either of two ways:

  • Using only the vDS User Interface (vDS UI) — Hosts are migrated one by one by following the New vNetwork Distributed Switch process under the Home > Inventory > Network view of the Datacenter from the vSphere Client.
  • Using a combination of the vDS UI and Host Profiles— The first host is migrated to vDS and the remaining hosts are migrated to vDS using a Host Profile of the first host.

High Level Overview

The steps involved in a vDS UI migration of an existing environment using Standard Switches to a vDS are as follows:

  • Create vDS (without any associated hosts)
  • Create Distributed Virtual Port Groups on vDS to match existing or required environment
  • Add host to vDS and migrate vmnics to dvUplinks and Virtual Ports to DV Port Groups
  • Repeat the previous step for the remaining hosts


Create a vSphere Distributed Switch

If you have decided that you need to perform a vSS to vDS migration, a vDS needs to be created first.

  1. From the vSphere Client, connect to vCenter Server.
  2. Navigate to Home > Inventory > Networking (Ctrl+Shift+N)
  3. Highlight the datacenter in which the vDS will be created.
  4. With the Summary tab selected, under Commands, click New vSphere Distributed Switch
  5. On the Switch Version screen, select the appropriate vDS version, e.g. 5.0.0, click Next.
  6. On the General Properties screen, enter a name and select the number of uplink ports, click Next.
  7. On the Add Hosts and Physical Adapters screen, select Add later, click Next.
  8. On the Completion screen, ensure that Automatically create a default port group is selected, click Finish.
  9. Verify that the vDS and associated port group were created successfully.

Create DV Port Groups

You now need to create vDS port groups. Port groups should be created for each of the traffic types in your environment such as VM traffic, iSCSI, FT, Management and vMotion traffic, as required.

  1. From the vSphere Client, connect to vCenter Server.
  2. Navigate to Home > Inventory > Networking (Ctrl+Shift+N).
  3. Highlight the vDS created in the previous section.
  4. Under Commands, click New Port Group.
  5. On the Properties screen, enter an appropriate Name (e.g. IPStorage), Number of Ports, and VLAN type and ID (if required), click Next. Note: If the port group is associated with a VLAN, it is recommended to include the VLAN ID in the port group name.
  6. On the completion screen, verify the port group settings, click Finish.
  7. Repeat steps for all required port groups.

Add ESXi Host(s) to vSphere Distributed Switch

After successfully creating a vDS and configuring the required port groups, we now need to add an ESXi host to the vDS.

  1. From the vSphere Client, connect to vCenter Server.
  2. Navigate to Home > Inventory > Networking (Ctrl+Shift+N).
  3. Highlight the vDS created previously.
  4. Under Commands, click Add Host.
  5. On the Select Hosts and Physical Adapters screen, select the appropriate host(s) and any physical adapters (uplinks) which are not currently in use on your vSS, click Next. Note: Depending on the number of physical NICs in your host, it's a good idea to leave at least one connected to the vSS until the migration is complete. This is particularly relevant if your vCenter Server is a VM.
  6. On the Network Connectivity screen, migrate virtual NICs as required, selecting the associated destination port group on the vDS, click Next.
  7. On the Virtual Machine Networking screen, click Migrate virtual machine networking. Select the VMs to be migrated and the appropriate destination port group(s), click Next.
  8. On the Completion screen, verify your settings, click Finish.
  9. Ensure that the task completes successfully.

Migrate Existing Virtual Adapters (VMkernel ports)

  1. From the vSphere Client, connect to vCenter Server.
  2. Navigate to Home > Inventory > Hosts and Clusters (Ctrl+Shift+H).
  3. Select the appropriate ESXi host, click Configuration > Networking (Hardware) > vSphere Distributed Switch.
  4. Click Manage Virtual Adapters.
  5. On the Manage Virtual Adapters screen, click Add.
  6. On the Creation Type screen, select Migrate existing virtual adapters, click Next.
  7. On the Network Connectivity screen, select the appropriate virtual adapter(s) and destination port group(s), Click Next.
  8. On the Ready to Complete screen, verify the dvSwitch settings, click Finish.

Create New Virtual Adapters (VMkernel ports)

Perform the following steps to create new virtual adapters for any new port groups which were created previously.

  1. From the vSphere Client, connect to vCenter Server.
  2. Navigate to Home > Inventory > Hosts and Clusters (Ctrl+Shift+H).
  3. Select the appropriate ESXi host, click Configuration > Networking (Hardware) > vSphere Distributed Switch.
  4. Click Manage Virtual Adapters.
  5. On the Manage Virtual Adapters screen, click Add.
  6. On the Creation Type screen, select New virtual adapter, click Next.
  7. On the Virtual Adapter Type screen, ensure that VMkernel is selected, click Next.
  8. On the Connection Settings screen, ensure that Select port group is selected. Click the dropdown and select the appropriate port group, e.g. vMotion. Click Use this virtual adapter for vMotion, click Next.
  9. On the VMkernel – IP Connection Settings screen, ensure that Use the following IP settings is selected. Input IP settings appropriate for your environment, click Next.
  10. On the Completion screen, verify your settings, click Finish.
  11. Repeat for remaining virtual adapters, as required.
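The steps above can also be sketched from the ESXi 5.x command line. "vmk2", "dvSwitch", the dvPort ID and the IP settings below are example values: pick a free port ID from your vMotion DV port group and addressing appropriate for your environment.

```shell
# Create a new VMkernel interface attached to a dvPort on the vDS
esxcli network ip interface add --interface-name=vmk2 --dvs-name=dvSwitch --dvport-id=100

# Assign a static IPv4 configuration to the new interface
esxcli network ip interface ipv4 set --interface-name=vmk2 \
    --ipv4=192.168.10.21 --netmask=255.255.255.0 --type=static

# Tag the interface for vMotion (done with vim-cmd on ESXi 5.x)
vim-cmd hostsvc/vmotion/vnic_set vmk2
```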

Migrate Remaining VMs

Follow the steps below to migrate any VMs which remain on your vSS.

  1. From the vSphere Client, connect to vCenter Server.
  2. Navigate to Home > Inventory > Hosts and Clusters (Ctrl+Shift+H).
  3. Right-click the appropriate VM, click Edit Settings.
  4. With the Hardware tab selected, highlight the network adapter. Under Network Connection, click the dropdown associated with Network label. Select the appropriate port group, e.g. VMTraffic (dvSwitch). Click OK.
  5. Ensure the task completes successfully.
  6. Repeat for any remaining VMs.

Migrate Remaining Uplinks

It's always a good idea to leave a physical adapter or two connected to the vSS, especially when your vCenter Server is a VM, as migrating the management network can sometimes cause issues. Assuming all your VMs have been migrated at this point, perform the following steps to migrate any remaining physical adapters (uplinks) to the newly created vDS.

  1. From the vSphere Client, connect to vCenter Server.
  2. Navigate to Home > Inventory > Hosts and Clusters (Ctrl+Shift+H).
  3. Select the appropriate ESXi host, click Configuration > Networking (Hardware) > vSphere Distributed Switch.
  4. Click Manage Physical Adapters.
  5. Click Click to Add NIC within the DVUplinks port group.
  6. Select the appropriate physical adapter, click OK.
  7. Click Yes on the remove and reconnect screen.
  8. Click OK.
  9. Ensure that the task completes successfully.
  10. Repeat for any remaining physical adapters