
Resetting LUNs on vSphere 5.5


The Issue

Following a networking change there was a warm start on our IBM V7000 storage node canisters, which caused an outage to the VMware environment: locks on certain LUNs caused a mini-APD (All Paths Down) condition. This issue occurs if the ESXi/ESX host cannot reserve the LUN. The LUN may be locked by another host (an ESXi/ESX host or any other server that has access to the LUN). Typically, there is nothing queued for the LUN. The reservation is done at the SCSI level.

Caution: The reserve, release, and reset commands can interrupt the operations of other servers on a storage area network (SAN). Use these commands with caution.

Note: LUN resets are used to remove all SCSI-2 reservations on a specific device. A LUN reset does not affect any virtual machines that are running on the LUN.

Instructions

  • SSH into the host and type esxcfg-scsidevs -c to verify that the LUN is detected by the ESXi/ESX host at boot time. If the LUN is not listed then rescan the storage, as shown below
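A quick way to do both checks from the same SSH session; the rescan below uses the vSphere 5.5 esxcli syntax:

  # list all SCSI devices the host currently sees
  esxcfg-scsidevs -c

  # if the LUN is missing, rescan every storage adapter
  esxcli storage core adapter rescan --all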

[Screenshot: lunreseta]

  • Next open the log with less /var/log/vmkernel.log (cat will only dump the file; less lets you navigate it)
  • Press Shift+G to jump to the end of the file, or filter the log as shown below
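If you only want the reservation errors rather than the whole log, a quick filter also works; the match string is based on the example entry below and may vary by driver:

  # show the most recent SCSI reservation conflicts
  grep -i "reservation conflict" /var/log/vmkernel.log | tail -n 20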

[Screenshot: lunresetb]

  • You will see messages in the log such as:
    2015-01-23T18:59:57.061Z cpu63:32832)lpfc: lpfc_scsi_cmd_iocb_cmpl:2057: 3:(0):3271: FCP cmd x16 failed <0/4> sid x0b2700, did x0b1800, oxid xffff SCSI Reservation Conflict
  • You will need to find the naa ID or the vml ID of the LUNs you need to reset.
  • You can do this by running the command esxcfg-info | egrep -B5 "s Reserved|Pending"
  • The host that has Pending Reserves with a value larger than 0 is holding the lock (see the example below)
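The -B5 flag prints the five lines above each match, which is what lets you read off which device each reservation count belongs to. Abbreviated, illustrative output (the exact field layout varies between builds):

  esxcfg-info | egrep -B5 "s Reserved|Pending"
     ...
     |----Pending Reserves..........1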

[Screenshot: lunreset3]

  • We then had to run the command below to reset the LUNs (substitute your own naa ID)
  • vmkfstools -L lunreset /vmfs/devices/disks/naa.60050768028080befc00000000000116

[Screenshot: lunresetc]

  • Then run vmkfstools -V to rescan
  • Occasionally you may need to restart the management services on particular hosts by running /sbin/services.sh restart in a PuTTY session and then restart the vCenter service, but it depends on your individual situation. The full sequence is sketched below.
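Putting it all together, a minimal sketch of the recovery sequence run from an SSH session on the affected host (the naa ID is the example from above; substitute your own):

  # release any stale SCSI reservation on the locked device
  vmkfstools -L lunreset /vmfs/devices/disks/naa.60050768028080befc00000000000116

  # force the host to re-read device state
  vmkfstools -V

  # only if the host still misbehaves: restart the management agents
  /sbin/services.sh restart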

VSAN 5.5


What is Software-Defined Storage?

VMware’s explanation is: “Software Defined Storage is the automation and pooling of storage through a software control plane, and the ability to provide storage from industry standard servers. This offers a significant simplification to the way storage is provisioned and managed, and also paves the way for storage on industry standard servers at a fraction of the cost.”

(Source: http://cto.vmware.com/vmwares-strategy-for-software-defined-storage/)

SAN Solutions

There are currently two types of SAN solutions:

  • Hyper-converged appliances (Nutanix, Scale Computing, SimpliVity and Pivot3)
  • Software-only solutions, deployed as a VM on top of a hypervisor (VMware vSphere Storage Appliance, Maxta, HP’s StoreVirtual VSA, and EMC ScaleIO)

VSAN 5.5

VSAN is also a software-only solution, but VSAN differs significantly from the VSAs listed above. VSAN sits in a different layer and is not a VSA-based solution.

[Screenshot: vsan01]

VSAN Features

  • Provides scale-out functionality
  • Provides resilience
  • Storage policies per VM or per virtual disk (QoS); see the CLI note after this list
  • Kernel-based solution built directly into the hypervisor
  • Performance-critical components such as the data path and clustering live in the kernel
  • Other components are implemented in the control plane as native user-space agents
  • Uses industry-standard hardware
  • Simple to use
  • Can be used for VDI, test and dev environments, management or DMZ infrastructure, and as a disaster recovery target
  • Up to 32 hosts can be connected to a VSAN cluster
  • 3200 VMs in a 32-host VSAN cluster, of which 2048 VMs can be protected by vSphere HA
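Per-VM policies are defined through VM Storage Policies in the web client, but you can inspect the host-side defaults from the CLI; a quick look, assuming the esxcli vsan namespace that ships with 5.5 U1:

  # show the default VSAN policy applied to each object class
  esxcli vsan policy getdefault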

VSAN Requirements

  • Local host storage
  • All hosts must use vSphere 5.5 U1
  • Auto Deploy (stateless booting) is not supported by VSAN
  • A VMkernel interface is required (1GbE minimum; 10GbE recommended). This port is used for inter-cluster node communication. It is also used for reads and writes when one of the ESXi hosts in the cluster owns a particular VM but the actual data blocks making up the VM files are located on a different ESXi host in the cluster.
  • Multicast must be enabled on the VSAN network (Layer 2)
  • Supported on vSphere Standard Switches and vSphere Distributed Switches
  • Performance (flash, used for read/write buffering) and capacity (magnetic) disks
  • Each host must have at least 1 flash disk and 1 magnetic disk
  • A minimum of 3 hosts per cluster is required to create a VSAN
  • Other hosts can use the VSAN without contributing any storage themselves; however, it is better for utilization, performance and availability to have a uniformly contributing cluster
  • Hosts must have a minimum of 6GB RAM; however, if you are using the maximum number of disk groups then 32GB is recommended
  • VSAN must use a disk controller which is capable of running in what is commonly referred to as pass-through mode, HBA mode, or JBOD mode. In other words, the disk controller should provide the capability to pass up the underlying magnetic disks and solid-state disks (SSDs) as individual disk drives without a layer of RAID sitting on top. The result of this is that ESXi can perform operations directly on the disk without those operations being intercepted and interpreted by the controller.
  • For disk controller adapters that do not support pass-through/HBA/JBOD mode, VSAN supports disk drives presented via a RAID-0 configuration. Volumes can be used by VSAN if they are created using a RAID-0 configuration that contains only a single drive. This needs to be done for both the magnetic disks and the SSDs. You can sanity-check how the drives are presented with the command shown below.
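If the controller is presenting the drives correctly, each one appears as an individual device and the flash devices are flagged as SSDs. A quick check from the ESXi shell:

  # list each device with its SSD and locality flags
  esxcli storage core device list | egrep "Display Name|Is SSD|Is Local"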

VMware VSAN Compatibility Guide

VSAN has strict requirements when it comes to disks, flash devices, and disk controllers, which can make hardware selection complex. Use the HCL link below to make sure all of your hardware is supported.

http://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsan

The designated flash device classes specified within the VMware compatibility guide are

  • Class A: 2,500–5,000 writes per second
  • Class B: 5,000–10,000 writes per second
  • Class C: 10,000–20,000 writes per second
  • Class D: 20,000–30,000 writes per second
  • Class E: 30,000+ writes per second

Setting up a VSAN

  • Firstly, all hosts must have a VMkernel port configured for Virtual SAN traffic
  • You can add this port to an existing VSS or VDS or create a new switch altogether; the web client steps follow, or see the CLI alternative below
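If you prefer the command line to the wizard that follows, the interface can also be tagged directly; a minimal sketch, assuming the VMkernel interface in question is vmk1:

  # enable Virtual SAN traffic on an existing VMkernel interface
  esxcli vsan network ipv4 add -i vmk1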

[Screenshot: vsan02]

  • Log into the web client and select the first host
  • Click Manage > Networking, then click the Add Networking button

[Screenshot: vsan04]

  • Keep VMkernel Network Adapter selected

[Screenshot: vsan05]

  • In my lab I only have 2 options, but you will usually have the option to select an existing distributed port group

[Screenshot: vsan06]

  • Check the settings, put in a network label and tick Virtual SAN traffic

[Screenshot: vsan07]

  • Enter your network settings

[Screenshot: vsan08]

  • Check Settings and Finish

[Screenshot: vsan09]

  • You should now see your VMkernel port on your switch; you can also confirm the tagging from the CLI as shown below

[Screenshot: vsan10]
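To double-check that the port really is carrying Virtual SAN traffic, the tagged interfaces can be listed from the shell:

  # list VMkernel interfaces tagged for Virtual SAN traffic
  esxcli vsan network list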

  • Next click on the cluster to build a new VSAN Cluster
  • Go to Manage > Settings > Virtual SAN > General > Edit

[Screenshot: vsan11]

  • Next turn on Virtual SAN. Automatic mode will claim all eligible empty local disks, or you can choose Manual mode and assign disks to disk groups yourself

[Screenshot: vsan12]

  • You will need to turn off vSphere HA before turning VSAN on or off
  • Check that Virtual SAN is turned on; you can also verify each host’s membership from the shell as shown below

[Screenshot: vsan13]
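A quick way to verify from any host in the cluster, assuming VSAN has just been enabled:

  # show this host's VSAN cluster membership and state
  esxcli vsan cluster get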

  • Next click on Disk Management to create disk groups
  • Then click on the Create Disk Group icon (circled in blue)

[Screenshot: vsan14]

  • A disk group must contain exactly one SSD and from one to seven hard drives; in Manual mode you can also claim the disks from the CLI, as sketched below
  • Repeat this for at least 3 hosts in the cluster
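An illustrative sketch of the CLI route, with placeholders that you would replace with your own device IDs (find them with esxcfg-scsidevs -c):

  # claim one SSD and one magnetic disk into a disk group
  esxcli vsan storage add -s <naa ID of SSD> -d <naa ID of magnetic disk>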

[Screenshot: VSAN16]

  • Next click on Related Objects to view the Datastore

[Screenshot: vsan16]

  • Click the VSAN datastore to view the details; you can also see it from any member host’s shell as shown below
  • Note: I have had to use VMware’s screenshot as I didn’t have enough resources in my lab to show this
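A final sanity check from the shell; the new datastore should appear alongside any VMFS volumes:

  # the vsanDatastore should be listed with type vsan
  esxcli storage filesystem list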

[Screenshot: VSAN18]

Links