Archive for vSAN

VMware Ruby Virtual Console for vSAN 6.6

The Ruby vSphere Console (RVC) is an interactive command-line interface for managing VMware vSphere and vCenter Server.

The Ruby vSphere Console comes bundled with both the vCenter Server Appliance (VCSA) and the Windows version of vCenter Server. RVC is quickly becoming one of the primary tools for managing and troubleshooting Virtual SAN environments.

How to begin

  • To begin using the Ruby vSphere Console to manage your vSphere infrastructure, deploy the vCenter Server Appliance and configure network connectivity for the appliance.
  • Afterwards, SSH to the vCenter Server Appliance using PuTTY (or your preferred SSH client) and log in as a privileged user. No additional configuration is required to begin.
  • Commands such as ‘cd’ and ‘ls’ work as expected, and if you want to return to the previous directory, type ‘cd ..’ and press Enter

How to Login

RVC credentials are directly related to the default domain setting in SSO (Single Sign-On). Verify the default SSO Identity Source is set to the desired entity.

There are a few different ways to log on, either locally or with domain credentials. For example:

  • rvc administrator@vsphere.local@localhost
  • rvc root@localhost
  • rvc administrator@techlab.local@localhost

Where to go from here

You are now at the root of the virtual filesystem.

  • To access and navigate through the system, type ‘cd 0‘ to access the root (/) directory or ‘cd 1‘ to access the ‘localhost/’ directory. You can type the ‘ls’ command to list the contents of a directory. I am going to type ‘cd 1‘ to access my localhost directory, so let’s see what we have.

  • Type ls to see what directory structure we have now. You should now see your datacenter or datacenters.

  • Change directory by typing cd 0 to the relevant datacenter and you will now see the following folder structure.

  • Type ls to see the structure of this folder

  • Type cd 1 to change to the Computers folder where we will see the cluster and then type ls

  • We can now check the state of the vSAN cluster. Note that entering the cluster by name, for example ‘vsan.check_state vsan-cluster’, will not work; use the numeric index instead, so type vsan.check_state 0

  • Next look at the vSAN Object Status Report. Type vsan.obj_status_report 0

  • We can also run the command vsan.obj_status_report 0 -t which displays a table with more information about vSAN objects

  • Next look at a detailed view of the cluster. Type vsan.cluster_info 0

  • Next we’ll have a look at disk stats. Type vsan.disks_stats 0

  • Next have a look at simulating a failure of a host in your vSAN cluster. Type vsan.whatif_host_failures 0

  • You can also type vsan.whatif_host_failures -s 0

  • You can also view VM performance by typing vsan.vm_perf_stats “vm”. This command samples disk performance over a period of 20 seconds and provides read/write IOPS, throughput and latency. A complete example session is sketched below.
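Putting the navigation and health-check commands together, a minimal example session looks like the following. The numeric indexes (1 for localhost, 0 for the datacenter, 1 for the computers folder) follow the walkthrough above but will vary with your inventory, so run ls at each level to confirm them first.

rvc administrator@vsphere.local@localhost
cd 1
cd 0
cd 1
ls
vsan.check_state 0
vsan.obj_status_report 0 -t
vsan.disks_stats 0
vsan.whatif_host_failures 0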

Using vSAN Observer

vSAN Observer KB article: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2064240

To generate a performance statistics bundle over a one hour period at 30 second intervals for a vSAN cluster named vSAN and save the generated statistics bundle to the /tmp folder, run this command:

  • Log into RVC
  • Navigate down to the computers folder
  • Type the following: vsan.observer ~/computers/clustername(fill this in)/ --run-webserver --force --generate-html-bundle /tmp --interval 30 --max-runtime 1 (a worked example follows this list)
  • While this is running, you can browse to http://vCentername:8010, which will provide multiple graphs and information you can view
  • Press Ctrl+C if you want to stop this prior to the test ending.
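As a worked example, assuming the cluster is named vSAN as in the scenario above (substitute your own cluster name), the full command run from the computers folder would look something like:

vsan.observer ~/computers/vSAN/ --run-webserver --force --generate-html-bundle /tmp --interval 30 --max-runtime 1

This collects statistics every 30 seconds for a maximum of one hour, writes the HTML bundle to /tmp and serves the live view on port 8010 while the run is in progress.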

Inaccessible objects or orphaned objects

If you get an issue like I did with an orphaned object, browse through the vSAN datastore in the Web Client, find the GUID of the object and run the following command on the hosts. Take care that you have the correct GUID!

  • /usr/lib/vmware/osfs/bin/objtool delete -u 5825a359-2645-eb1e-b109-002564f9b0c2 -f -v 10
  • Give it a minute and you will see it vanish from your vSAN datastore

Useful Commands

Within the RVC Console, type in vsan. then press tab twice to get the whole list of vsan commands you can use.

On the hosts the following commands can be useful

  • /etc/init.d/vsanmgmtd status
  • /etc/init.d/vsanmgmtd restart
  • services.sh restart
  • cd /vmfs/volumes then ls to browse the datastore volumes
  • vsish -e set /vmkmodules/vsan/dom/ownerabdicate “naa id”


Using HCI Bench v1.6.3 to performance test vSAN 6.6

vSAN Load Testing Tool: HCI Bench

VMware has a vSAN stress and load testing tool called HCIBench, which is provided as a VMware Fling. HCIBench can be run against vSphere 5.5 and later and is a more flexible alternative to the Proactive Tests built into vSAN. I am running it against vSphere 6.5/vSAN 6.6. HCIBench provides more flexibility in defining a target performance profile as input, and the test results can be viewed in a web browser and saved to disk.

HCIBench will help simplify the stress testing task: it asks you to specify your desired testing parameters (size of working set, IO profile, number of VMs and VMDKs, etc.) and then spawns multiple instances of Vdbench on multiple servers. If you don’t want to configure anything manually, there is an option called EASY RUN which will set everything for you. After the test run is done, it conveniently gathers all the results in one place for easy review and resets itself for the next test run.

HCIBench is not only a benchmark tool designed for vSAN; it can also be used to evaluate the performance of all kinds of hyper-converged infrastructure storage in a vSphere environment.

Where can I find HCIBench?

There is a dedicated Fling page which provides access to HCIBench and its associated documentation. You will also need to download a zip file containing the Vdbench binaries from Oracle, which can be uploaded through the configuration page after the appliance is installed. You will need to register an account with Oracle to download this file, but this doesn’t take long.

HCIBench Download: labs.vmware.com/flings/hcibench

HCIBench User Guide: https://download3.vmware.com/software/vmw-tools/hcibench/HCIBench_User_Guide.pdf

Requirements

  • Web Browser: IE8+, Firefox or Chrome
  • vSphere 5.5 and later environments for both HCIBench and its client VMs deployment

HCIBench Tool Architecture

The tool is specifically designed for running performance tests using Vdbench against a vSAN datastore.
It is delivered in the form of Open Virtualization Appliance (OVA) that includes the following components:

The test Controller VM is installed with:

  • Ruby vSphere Console (RVC)
  • vSAN Observer
  • Automation bundle
  • Configuration files
  • Linux test VM template

The Controller VM has all the needed components installed. The core component is RVC (https://github.com/vmware/rvc) with some extended features enabled. RVC is the engine of this performance test tool, responsible for deploying Vdbench Guest VMs, conducting Vdbench runs, collecting results, and monitoring vSAN by using vSAN Observer.

Pre-requisites

Before deploying this performance test tool packaged as OVA, make sure the environment meets the following requirements:

The vSAN Cluster is created and configured properly

  • The network for Vdbench Guest VMs is ready, and needs to have DHCP service enabled; if the network doesn’t have DHCP service, “Private Network” must be mapped to the same network when HCIBench is deployed.
  • The vSphere environment where the tool is deployed can access the vSAN Cluster environment to be tested
  • The tool can be deployed into any vSphere environment. However, we do not recommend deploying it onto the vSAN Cluster being tested, to avoid unnecessary resource consumption by the tool.

What am I benchmarking?

This is my home lab which runs vSAN 6.6 on 3 x Dell Poweredge T710 servers each with

  • 2 x 6 core X5650 2.66Ghz processors
  • 128GB RAM
  • 6 x Dell Enterprise 2TB SATA 7.2k hot plug drives
  • 1 x Samsung 256GB SSD Enterprise 6.0Gbps
  • Perc 6i RAID BBWC battery-backed cache
  • iDRAC 6 Enterprise Remote Card
  • NetXtreme II 5709c Gigabit Ethernet NIC

Installation Instructions

  • Download the HCIBench OVA from https://labs.vmware.com/flings/hcibench and deploy it to your vSphere 5.5 or later environment.
  • Because the vApp option is used for deployment, HCIBench doesn’t support deployment on a standalone ESXi host; the ESXi host needs to be managed by a vCenter Server.
  • When configuring the network, if you don’t have DHCP service on the VLAN that the Vdbench client VMs will be deployed on, the “Private Network” needs to be mapped to the same VLAN so that HCIBench can provide the DHCP service.
  • Log into vCenter and go to File > Deploy OVF File

  • Name the machine and select a deployment location

  • Select where to run the deployed template. I’m going to run it on one of my hosts’ local datastores, as it is recommended to run it somewhere other than the vSAN datastore being tested.

  • Review the details

  • Accept the License Agreement

  • Select a storage location to store the files for the deployed template

  • Select a destination network for each source network
  • Map the “Public Network” to the network through which HCIBench will be accessed; if the network prepared for the Vdbench Guest VMs doesn’t have DHCP service, map the “Private Network” to the same network, otherwise just ignore the “Private Network”.

  • Enter the network details. I have chosen static and filled in the detail as per below. I have a Windows DHCP Server on my network which will issue IP Addresses to the worker VMs.
  • Note: I added the IP Address of the HCIBench appliance into my DNS Server

  • Click Next and check all the details

  • The OVF should deploy. If you get a failure with the message “The OVF failed to deploy. The OVF descriptor is not available”, re-download the OVA and try again; it should then work.

  • Next power on the Controller VM and go to your web browser and navigate to your VM using http://<Your_HCIBench_IP>:8080. In my case http://192.168.1.116:8080. Your IP is the IP address you gave it during the OVF deployment or the DHCP address it picked up if you chose this option. If it asks you for a root password, it is normally what you set in the Deploy OVF wizard.
  • Log in with the root account details you set and you’ll get the Configuration UI

  • Go down the whole list and fill in each field. The screen-print shows half the configuration
  • Fill in the vCenter IP or FQDN
  • Fill in the vCenter Username as username@domain format
  • Fill in the vCenter Password
  • Fill in your Datacenter Name
  • Fill in your Cluster Name
  • Fill in the network name. If you don’t fill anything in here, it will assume the “VM Network”. Note: this is my default network so I left it blank.
  • You’ll see a checkbox for enabling DHCP service on the network. DHCP is required for all the Vdbench worker VMs that HCIBench will deploy, so if you don’t have DHCP on this network you will need to check this box so that HCIBench assigns addresses for you. As before, I have a Windows DHCP server on my network, so I won’t check this.

  • Next enter the name of the datastore you want HCIBench to test; for example, I am going to put in vsanDatastore, which is the name of my vSAN datastore.
  • Select Clear Read/Write Cache Before Each Testing which will make sure that test results are not skewed by any data lurking in the cache. It is designed to flush the cache tier prior to testing.
  • Next you have the option to deploy the worker VMs directly to the hosts or whether HCIBench should leverage vCenter

If this parameter is unchecked, ignore the Hosts field below; the Host Username/Password fields can also be ignored if Clear Read/Write Cache Before Each Testing is unchecked. In this mode, a Vdbench Guest VM is deployed by vCenter and then cloned to all hosts in the vSAN Cluster in a round-robin fashion. The naming convention of Vdbench Guest VMs deployed in this mode is
“vdbench-vc-<DATASTORE_NAME>-<#>”.
If this parameter is checked, all the other parameters except EASY RUN must be specified properly.
The Hosts parameter specifies the IP addresses or FQDNs of the hosts in the vSAN Cluster on which to deploy Vdbench Guest VMs, and all of these hosts should have the same username and password specified in Host Username and Host Password. In this mode, Vdbench Guest VMs are deployed directly onto the specified hosts concurrently. To reduce network traffic, five hosts run the deployment at a time before moving on to the next five, and each host deploys in increments of five VMs at a time.

The naming convention of test VMs deployed in this mode is “vdbench-<HOSTNAME/IP>-<DATASTORE_NAME>-batch<VM#>-<VM#>”.

In general, it is recommended to check Deploy on Hosts when deploying a large number of test VMs. However, if a distributed switch port group is used as the client VM network, Deploy on Hosts must be unchecked.
EASY RUN is specifically designed for vSAN users; by checking this, HCIBench handles all the configuration below by interrogating the vSAN configuration. EASY RUN decides how many client VMs should be deployed, the number and size of VMDKs for each VM, the way virtual disks are prepared before testing, and so on. The configuration options below are hidden if this option is checked.

  • You can omit all the host details and just check EASY RUN

  • Next download the Vdbench zip file and upload it as-is. Note: you will need to create an Oracle account if you do not have one.

  • It should look like this. Click Upload

  • Click Save Configuration

  • Click Validate the Configuration. Note at the bottom it says “Deploy on hosts must be unchecked” when using fully automated DRS. As a result I changed my cluster DRS setting to partially automated, and I then got the correct message below when I validated again.

  • If you get any issues, please look at the pre-validation logs located at /opt/automation/logs/prevalidation

  • Next we can start a Test. Click Test

  • You will see the VMs being deployed in vCenter

  • And more messages being shown

  • It should finish and say Test is finished

Results

  • After the Vdbench testing finishes, the test results are collected from all Vdbench instances in the test VMs. You can view the results at http://HCIBench_IP/results in a web browser and/or by clicking the Results button in the testing window.
  • You can also click Save Result and save a zip file of all the results
  • Click on the easy-run folder

  • Click on the .txt file

  • You will get a summarized results file

  • Click on the other folder

  • You can also see the individual Vdbench VMs’ statistics by clicking on the per-VM result files in the folder

  • You can also navigate down to the vSAN Observer collection. Click on the stats.html file to display a vSAN Observer view of the cluster for the period of time that the test was running

  • You will be able to click through the tabs to see what sort of performance, latency and throughput was occurring.

  • Enjoy and check you are getting the results you would expect from your storage

Useful Links

  • Comments from the HCIBench fling site which may be useful for troubleshooting

https://labs.vmware.com/flings/hcibench/comments

  • If you have questions or need help with the tool, please email VSANperformance@vmware.com
  • Information about the back-end scripts in HCIBench, courtesy of Chen Wei:

Use HCIBench Like a Pro – Part 2


3 x Dell Poweredge T710 lab with a bootstrapped install of vCenter 6.5, embedded PSC and vSAN 6.6

vCenter 6.5/vSAN 6.6 new install

This post is based on my Dell Poweredge T710 lab, which I’ve set up to test the new combined installation of vSphere 6.5 and vSAN 6.6 that bootstraps vSAN, creates a vCenter Server and then places the vCenter on the vSAN datastore automatically.

Note: vSAN will be a hybrid configuration of 1 x SSD and 6 SATA hot plug drives per server.

New integrated bootstrapping feature explained

In some environments where new hardware is being deployed, highly available shared storage may not be accessible during day-zero installation, which made a greenfield deployment almost a catch-22 scenario: how do you build your vSAN with a vCenter Server when the only disks you have are the ones destined for vSAN? There were ways around this via the command line (a sketch is shown after the next paragraph), but the capability has now been built into vSphere 6.5/vSAN 6.6.

Local disk, if available, can be used as a temporary location for the vCenter installation, but migrating vCenter after bringing up the cluster can be time consuming and error prone. Bootstrapping vSAN without vCenter solves this problem and removes the requirement for highly available storage or a temporary local disk for day-zero operations, which is exactly the greenfield deployment scenario. With the vSAN bootstrapping method, a vSAN-based datastore can be made available at day zero to bring up all management components.
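For reference, the older manual workaround ran roughly as follows from the ESXi Shell of the first host. This is only a sketch: the VMkernel interface (vmk0) and the naa device IDs are placeholders, and on a real bootstrap you would also need to adjust the default vSAN storage policy so a single node can satisfy it before deploying vCenter onto the datastore.

esxcli vsan network ip add -i vmk0                        # tag an existing VMkernel port for vSAN traffic
esxcli vsan cluster new                                   # create a single-node vSAN cluster on this host
esxcli vsan storage add -s naa.<ssd-id> -d naa.<hdd-id>   # claim one SSD as cache and one disk as capacity
esxcli vsan cluster get                                   # confirm the host has joined its own cluster

The remaining hosts can later be joined with esxcli vsan cluster join -u <cluster-uuid>, or more simply through vCenter once it is running.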

Lab Setup

3 x Dell Poweredge T710 servers each with

  • 2 x 6 core X5650 2.66Ghz processors
  • 128GB RAM
  • 6 x Dell Enterprise 2TB SATA 7.2k hot plug drives
  • 1 x Samsung 256GB SSD Enterprise 6.0Gbps
  • Perc 6i RAID BBWC battery-backed cache
  • iDRAC 6 Enterprise Remote Card
  • NetXtreme II 5709c Gigabit Ethernet NIC

Initial Steps for each 3 hosts

  • The Perc 6i controller is not on the vSAN HCL, but vSAN can still be set up using RAID0 pass-through, which involves configuring a RAID0 volume for each drive in the controller BIOS. Always make sure each drive is initialized in the BIOS, which clears any previous content, because vSAN requires the drives to be empty. Press Ctrl+R during boot and access the Virtual Disk Management screen to create the disks as RAID0. See the link below for full information

https://community.spiceworks.com/how_to/8781-configuring-virtual-disks-on-a-perc-5-6-h700-controller

  • In the System Setup BIOS screen you will need to enable Virtualization Technology. It is not enabled by default and, if left disabled, will stop any VMs from powering on

  • Make sure you have an AD/DNS Server with entries for your hosts and vCenter
  • Put in your license keys
  • Disks may not come up marked as SSD. In this case I had to run commands on each server to tag them (replace the disk naa ID and the SATP type with your own); a sketch of these commands, based on the KB linked below, follows the link

Find your disk information using esxcli as shown in the sketch, or look up the disk IDs in the host client

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2013188
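Based on that KB, a minimal sketch of the tagging commands is shown below; naa.xxxxxxxx is a placeholder for your own device ID and VMW_SATP_LOCAL is the SATP reported for my local disks, so check yours first with the list command.

esxcli storage nmp device list                                                     # note the naa ID and SATP of each disk
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d naa.xxxxxxxx -o enable_ssd   # add a claim rule tagging the device as SSD
esxcli storage core claiming reclaim -d naa.xxxxxxxx                               # reclaim the device so the rule takes effect
esxcli storage core device list -d naa.xxxxxxxx | grep "Is SSD"                    # verify the device now reports Is SSD: true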

  • Your SSD disks should then come up marked as SSD. I didn’t have to reboot.

Install the vCenter Appliance

  • Make sure you have the software downloaded. I’m using the VMware-VCSA-all-6.5.0-5705665.iso
  • On another machine, mount the VMware-VCSA-all-6.5.0-5705665.iso. I connected this to my Windows 10 laptop as a virtual drive. Start the vCenter Server Appliance 6.5 installer located at \vcsa-ui-installer\win32

  • Select Install from the VMware vCenter Server Appliance 6.5 Installer.

  • You will see the Introduction screen

  • Accept the License Agreement
  • Select Deployment Type. For now I’m going to use an embedded Platform Services Controller

  • Enter the details for the appliance target. Try an IP address if an FQDN doesn’t work.

  • Accept the certificate

  • Put in a root password for the vCenter Server Appliance

  • Select a deployment size

  • There are now 2 deployment types. You can install as normal or you can “Install on a new Virtual SAN cluster containing the target host”

  • I am going to test this new feature of a combined install of vCenter and vSAN placing the vCenter on vSAN
  • Put in a name for your Datacenter and Cluster and click Next. It will say Loading

  • Claim disks for Virtual SAN. You can see it has picked up all the disks on my first host, recognizes the SSD and sets it as the cache disk, while the other non-SSD disks are set as capacity disks

  • Next enter your network settings

  • You are now ready to complete at Stage 1. Check the settings and click Finish

  • It will now show the following screen

  • When it has finished you should see the below screen

  • Click Continue and we will be on to Stage 2

  • Next Enter the time settings. You can use NTP Servers or sync with the local host. You can also enable SSH

  • Next set up the embedded PSC

  • Next decide if you want to join the Customer Experience Program

  • Finish and check the config

  • You should now see the below screen

  • When it has finished you will see the below screen

  • Next connect to the vCenter appliance with the administrator@vsphere.local account and the password you set up previously

https://techlabvca001.techlab.local/vsphere-client/

  • Select the Host > Select Configure > Select Networking > VMkernel Adapters

  • Select a switch

  • Add a VMkernel adapter for vSAN

  • Specify the VMkernel networking settings

  • Check Settings and Finish

  • Next I need to add my other 2 hosts to the Datacenter and create a vSAN VMkernel port on each host followed by adding them into the cluster

  • Click on the cluster > Select Configure > vSAN > Disk Management and select your disks on the other servers and make them either the cache disk or capacity disk

  • This process is normally quite quick and once complete you should have your vSAN up and running!

  • Click on the cluster > Select Configure > Services and Edit Settings to turn on HA and DRS

  • Once everything is looking OK, click on the cluster > vSAN > General > Configuration Assist to check for any errors or warnings so you can fix them

Procedure to shut down the vSAN cluster and start it up again

As this is my lab, it is not going to be running 24×7, or my house is going to be rather warm and my electricity bill will definitely rise! I need to power it off, so what is the correct way to shut everything down and power it up again?

Normally, to shut down an ESXi cluster using the vSphere Web Client, the ESXi hosts are put into maintenance mode and then powered off; to start the cluster, vCenter Server is used to take the hosts out of maintenance mode after they are powered on. However, if the vSAN cluster is running management components such as vCenter Server and other management VMs, the ESXi host that is running vCenter Server cannot be put into maintenance mode, so the vSAN cluster shutdown and start-up procedures have to be properly sequenced.

VMware KB

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2142676

Steps for Power Off

  • Start by powering off all virtual machines in the ESXi cluster except the vCenter Server. If your management cluster has an Active Directory server which provides services to vCenter Server, do not power off the Active Directory VM either
  • Migrate the vCenter Server VM and Active Directory VM(s) to a single ESXi host
  • Place all the remaining hosts in the cluster into maintenance mode. When confirming maintenance mode for each ESXi host, ensure the following selections are made: deselect the checkbox for moving powered-off VMs and choose “No data migration” for Virtual SAN data migration
  • You can put the hosts into maintenance mode manually as per the above step, or you can use the command line. You can run the ESXCLI command below to put a host into maintenance mode; however, you must use a CLI method that supports setting the vSAN data migration mode when entering maintenance mode, for example by logging directly into the ESXi Shell and running ESXCLI.

esxcli system maintenanceMode set -e true -m noAction

Other options are:

esxcli system maintenanceMode set -e true -m ensureObjectAccessibility

esxcli system maintenanceMode set -e true -m evacuateAllData

  • Power off the vCenter Server VM and Active Directory VM. At this point, the vSphere WebClient access is lost.
  • Shut down all ESXi hosts. This completes the shutdown procedure for the vSAN cluster.

Starting the ESXi Hosts and the vSAN back up

The procedure to start a vSAN Cluster begins with the ESXi host where vCenter Server and Active Directory VMs are running.

  • Power on all ESXi hosts in the cluster.
  • Take the hosts out of maintenance mode (a command-line sketch follows this list)
  • Identify ESXi host where vCenter Server and Active Directory VMs are located
  • Power on AD servers
  • Power on vCenter server
  • Note: if the vCenter Server VM’s network adapter is connected to a vSphere Distributed Switch (VDS) network, the vCenter Server can’t be powered on, because powering on a VM on a VDS requires vCenter Server to be running. It is therefore recommended to move the vCenter Server’s network adapter to a standard vSwitch-based network; it can be moved back to the VDS network afterwards
  • Log into vCenter and check vSAN
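Because vCenter is not yet available at this point, the hosts can be taken out of maintenance mode from the ESXi Shell; a minimal sketch:

esxcli system maintenanceMode set -e false   # exit maintenance mode on this host
esxcli system maintenanceMode get            # should now report Disabled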

Useful Troubleshooting Tools

  • rvc console on vCenter
  • Putty
  • esxcli commands

I had an issue at a customer site where they had put some hosts into maintenance mode and, when they brought them out again, the hosts came out of maintenance mode but vSAN didn’t, resulting in the misreporting of storage in the cluster. As a result, storage policies will error and you won’t be able to put any more hosts into maintenance mode if there isn’t enough visible storage to evacuate them. Note: you won’t have lost any storage; the system will just think it isn’t there until you put the host into maintenance mode and take it out again a second time! VMware are aware of this issue, which seems to be present in 6.5 U1; however, this was a customer’s automated system and I haven’t seen it happen in my home lab!

By running the command vsan.cluster_info 0 in RVC, you can see for each node whether it is evacuated or not. If you have taken the host out of maintenance mode and vSAN has also come out of maintenance mode, it will say Node evacuated: no. If it hasn’t come out properly, it will say Node evacuated: yes (won’t accept any new components).
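On the hosts themselves, a couple of esxcli checks make a useful cross-reference; a minimal sketch:

esxcli vsan cluster get     # shows the local node's cluster membership, state and health
esxcli vsan storage list    # lists the disks this host is contributing to vSAN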


VSAN 5.5


What is Software defined Storage?

VMware’s explanation is “Software Defined Storage is the automation and pooling of storage through a software control plane, and the ability to provide storage from industry standard servers. This offers a significant simplification to the way storage is provisioned and managed, and also paves the way for storage on industry standard servers at a fraction of the cost.”

(Source: http://cto.vmware.com/vmwares-strategy-for-software-defined-storage/)

SAN Solutions

There are currently 2 types of SAN Solutions

  • Hyper-converged appliances (Nutanix, Scale Computing, Simplivity and Pivot3)
  • Software only solutions. Deployed as a VM on top of a hypervisor (VMware vSphere Storage Appliance, Maxta, HP’s StoreVirtual VSA, and EMC Scale IO)

VSAN 5.5

VSAN is also a software-only solution, but VSAN differs significantly from the VSAs listed above. VSAN sits in a different layer and is not a VSA-based solution.


VSAN Features

  • Provide scale out functionality
  • Provide resilience
  • Storage policies per VM or per Virtual disk (QOS)
  • Kernel based solution built directly in the hypervisor
  • Performance and Responsiveness components such as the data path and clustering are in the kernel
  • Other components are implemented in the control plane as native user-space agents
  • Uses industry standard H/W
  • Simple to use
  • Can be used for VDI, Test and Dev environments, Management or DMZ infrastructure and a Disaster Recovery target
  • 32 hosts can be connected to a VSAN
  • 3200 VMs in a 32 host VSAN cluster of which 2048 VMs can be protected by vSphere HA

VSAN Requirements

  • Local host storage
  • All hosts must use vSphere 5.5 u1
  • Autodeploy (Stateless booting) is not supported by VSAN
  • VMkernel interface required (1GbE minimum, 10GbE recommended). This port is used for inter-node cluster communication. It is also used for reads and writes when one of the ESXi hosts in the cluster owns a particular VM but the actual data blocks making up the VM’s files are located on a different ESXi host in the cluster.
  • Multicast is enabled on the VSAN network (Layer2)
  • Supported on vSphere Standard Switches and vSphere Distributed Switches
  • Flash disks provide read/write buffering (performance) and magnetic disks provide capacity
  • Each host must have at least 1 Flash disk and 1 Magnetic disk
  • 3 hosts per cluster to create a VSAN
  • Other hosts can use the VSAN datastore without contributing any storage themselves; however, it is better for utilization, performance and availability to have a cluster where all hosts contribute storage uniformly
  • ESXi hosts must have a minimum of 6GB RAM; however, if you are using the maximum number of disk groups then 32GB is recommended
  • VSAN must use a disk controller which is capable of running in what is commonly referred to as pass-through mode, HBA mode, or JBOD mode. In other words, the disk controller should provide the capability to pass up the underlying magnetic disks and solid-state disks (SSDs) as individual disk drives without a layer of RAID sitting on top. The result of this is that ESXi can perform operations directly on the disk without those operations being intercepted and interpreted by the controller
  • For disk controller adapters that do not support pass-through/HBA/JBOD mode, VSAN supports disk drives presented via a RAID-0 configuration. Volumes can be used by VSAN if they are created using a RAID-0 configuration that contains only a single drive. This needs to be done for both the magnetic disks and the SSDs

VMware VSAN compatibility Guide

VSAN has strict requirements when it comes to disks, flash devices and disk controllers, which can be complex. Use the HCL link below to make sure all of your hardware is supported

http://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsan

The designated flash device classes specified within the VMware compatibility guide are

  • Class A: 2,500–5,000 writes per second
  • Class B: 5,000–10,000 writes per second
  • Class C: 10,000–20,000 writes per second
  • Class D: 20,000–30,000 writes per second
  • Class E: 30,000+ writes per second

Setting up a VSAN

  • First, all hosts must have a VMkernel port with the Virtual SAN traffic service enabled
  • You can add this port to an existing VSS or VDS or create a new switch altogether (a CLI alternative is sketched below)
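If you prefer the command line, an existing VMkernel interface can also be tagged for Virtual SAN traffic from the ESXi Shell. A minimal sketch, assuming the interface is vmk1 (substitute your own):

esxcli vsan network ipv4 add -i vmk1   # tag vmk1 for Virtual SAN traffic
esxcli vsan network list               # confirm the interface is now listed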


  • Log into the web client and select the first host
  • Click Manage > Networking > Click the Add Networking button


  • Keep VMkernel Network Adapter selected


  • In my case I only have two options, but you will usually also have the option to select an existing distributed port group


  • Check the settings, put in a network label and tick Virtual SAN traffic


  • Enter your network settings


  • Check Settings and Finish


  • You should now see your VMKernel Port on your switch


  • Next click on the cluster to build a new VSAN Cluster
  • Go to Manage > Settings > Virtual SAN > General > Edit


  • Next turn on Virtual SAN. Automatic mode will claim all eligible empty local disks, or you can choose Manual mode


  • You will need to turn off vSphere HA to turn on/off VSAN
  • Check that Virtual SAN is turned on


  • Next Click on Disk Management to create Disk Groups
  • Then click on the Create Disk Group icon (circled in blue)


  • Each disk group must contain one SSD and up to seven hard drives.
  • Repeat this for at least 3 hosts in the cluster


  • Next click on Related Objects to view the Datastore


  • Click the VSAN Datastore to view the details
  • Note: I have had to use VMware’s screenshot as I didn’t have enough resources in my lab to show this

