
Installing and Configuring iSCSI Target Server on Windows Server 2012

iscsi

What is iSCSI Target Server?

iSCSI Target Server allows your Windows Server to share block storage remotely. iSCSI leverages a standard Ethernet network and does not require any specialized hardware. Windows Server 2012 adds a new UI integrated with Server Manager, along with more than 20 PowerShell cmdlets for easy management.
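
For reference, the role service can also be installed from PowerShell and the new cmdlets listed afterwards. A minimal sketch, using the FS-iSCSITarget-Server feature and IscsiTarget module names as shipped with Windows Server 2012:

    # Install the iSCSI Target Server role service from an elevated PowerShell prompt
    Install-WindowsFeature -Name FS-iSCSITarget-Server -IncludeManagementTools

    # List the iSCSI Target cmdlets that ship with the role
    Get-Command -Module IscsiTarget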

iSCSI Terms

  • iSCSI:

An industry-standard protocol that allows block storage to be shared over Ethernet. The server that shares the storage is called the iSCSI Target; the server (machine) that consumes the storage is called the iSCSI initiator. Typically, the iSCSI initiator is an application server. For example, if an iSCSI Target provides storage to a SQL Server machine, the SQL Server machine is the iSCSI initiator in that deployment.

  • Target:

It is an object which allows the iSCSI initiator to make a connection. The Target keeps track of the initiators which are allowed to be connected to it. The Target also keeps track of the iSCSI virtual disks which are associated with it. Once the initiator establishes the connection to the Target, all the iSCSI virtual disks associated with the Target will be accessible by the initiator.

  • iSCSI Target Server:

The server that runs the iSCSI Target. It is also the name of the iSCSI Target role service in Windows Server 2012.

  • iSCSI virtual disk:

Also referred to as an iSCSI LUN, this is the object that can be mounted by the iSCSI initiator. The iSCSI virtual disk is backed by a VHD file.

  • iSCSI connection:

The iSCSI initiator makes a connection to the iSCSI Target by logging on to a Target. There can be multiple Targets on the iSCSI Target Server, and each Target can be accessed by a defined list of initiators. Multiple initiators can make connections to the same Target; however, this type of configuration is only supported with clustering, because when multiple initiators connect to the same Target, all of the initiators can read from and write to the same set of iSCSI virtual disks. If there is no clustering (or an equivalent process) to govern the disk access, corruption will occur. With clustering, only one machine is allowed to access an iSCSI virtual disk at a time.

  • IQN:

It is a unique identifier of the Target or initiator. The Target IQN is shown when it is created on the server. The initiator IQN can be found by running the “iscsicli” command in a command window.
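
On Windows Server 2012 the initiator IQN can also be read from PowerShell, for example:

    # Show the local initiator IQNs; filter on ConnectionType if FC ports are also present
    Get-InitiatorPort | Select-Object NodeAddress, ConnectionType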

  • Loopback:

There are cases where you want to run the initiator and Target on the same machine; this is referred to as “loopback”. In Windows Server 2012, it is a supported configuration. In a loopback configuration, you can provide the local machine name to the initiator for discovery, and it will list all the Targets which the initiator can connect to. Once connected, the iSCSI virtual disk will be presented to the local machine as a newly mounted disk. There is a performance impact on I/O, since it travels through the iSCSI initiator and Target software stacks, compared with other local I/O. One use case for this configuration is to have initiators writing data to the iSCSI virtual disk, then mount those disks on the Target server (using loopback) to check the data in read mode.

Instructions

The aim of this particular blog is to configure an iSCSI Target disk which my Windows Server 2012 Failover Cluster can use as its quorum disk, so we will configure a 5GB quorum disk and then present it to the Failover Cluster servers. A PowerShell equivalent of the whole procedure is sketched at the end of the walkthrough below.

  • Open Server Manager and click Add Roles and Features

ISCSI1

  • Choose Role based or Feature based installation

iSCSI2

  • Select Destination Server

iSCSI3

  • Select Server Roles > File and Storage Services > File and iSCSI Services > iSCSI Target Server

iSCSI4

  • Add Features that are required for iSCSI Target Server (None ticked here)

iSCSI5

  • Confirm Installation Selections

iSCSI6

  • To complete the iSCSI Target Server configuration, go to Server Manager and click File and Storage Services > iSCSI
  • Go to iSCSI Virtual Disks, click “Launch the New Virtual Disk wizard to create a virtual disk” and walk through the virtual disk and Target creation
  • Select an iSCSI virtual disk location

iSCSI7

  • Specify iSCSI virtual disk name

iSCSI8

  • Specify iSCSI virtual disk size

iSCSI9

  • Assign iSCSI Target

iSCSI10

  • Specify the Target name. Underscores are not allowed, but the wizard will change them for you

iSCSI12

  • Specify Access Servers

iSCSI14

  • Select a method to identify the initiator

iSCSI13

  • Click Browse and type in the names of the servers which will need to access this virtual disk
  • I have added my two Windows Failover Cluster VMs, which are called dacvsof001 and dacvsof002

iSCSI15

  • Enable Authentication

iSCSI16

  • Confirm Selections

iSCSI17

  • View Results

iSCSI18

  • Next we need to go to the first Failover Cluster server, dacvsof001, and add the disk
  • On dacvsof001, open Server Manager, click Tools and select iSCSI Initiator. When you select this, you will get the following message. Click Yes

iSCSI19

  • Type in the Target Server address, which is the server you created the virtual disk on, and click Quick Connect

iSCSI20

  • You will see the Target listed as available for connection

iSCSI21

  • Click Done
  • Now open Disk Management to make sure that the disk is presented correctly

iSCSI22

  • Right click on this and select Online
  • Right click again and select Initialise
  • Create a new volume. I used Q for the quorum disk

iSCSI23

  • Now go to the second Windows Failover Cluster Server and do exactly the same thing
  • Leave the disk online and initialised, but do not assign it a drive letter
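
As promised at the start of these instructions, here is a rough PowerShell equivalent of the walkthrough above. The paths, target name and IP addresses are examples only, and cmdlet parameters may vary slightly between Windows Server 2012 and 2012 R2:

    # --- On the iSCSI Target Server ---
    # Create a 5GB VHD-backed iSCSI virtual disk for the quorum
    New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\Quorum.vhd" -SizeBytes 5GB

    # Create a Target and restrict it to the two cluster nodes (example IP addresses)
    New-IscsiServerTarget -TargetName "ClusterQuorum" -InitiatorIds "IPAddress:192.168.1.11","IPAddress:192.168.1.12"

    # Associate the virtual disk with the Target
    Add-IscsiVirtualDiskTargetMapping -TargetName "ClusterQuorum" -Path "C:\iSCSIVirtualDisks\Quorum.vhd"

    # --- On each cluster node (dacvsof001 / dacvsof002) ---
    # Point the initiator at the Target Server and connect persistently
    New-IscsiTargetPortal -TargetPortalAddress "<target-server-name-or-IP>"
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

    # On the first node only: bring the new disk online, initialise it and format it as Q:
    Get-Disk | Where-Object IsOffline | Set-Disk -IsOffline $false
    Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -DriveLetter Q -UseMaximumSize |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "Quorum"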

Configure Software iSCSI Port Bindings

What is Software iSCSI port binding?

Software iSCSI port binding is the process of creating multiple paths between iSCSI adapters and an iSCSI storage target. By default, ESXi does not set up multipathing for iSCSI adapters, so all targets are accessible by only a single path. This is true regardless of whether NIC teaming is configured on the VMkernel port used for iSCSI. To ensure that your storage is still accessible in the event of a path failure, or to take advantage of load balancing features, software iSCSI port binding is required.

Capture

With the software-based iSCSI implementation, you can use standard NICs to connect your host to a remote iSCSI target on the IP network. The software iSCSI adapter that is built into ESXi facilitates this connection by communicating with the physical NICs through the network stack.

Before you can use the software iSCSI adapter, you must

  • Set up networking
  • Activate the adapter
  • Configure parameters such as discovery addresses and CHAP

Setup Networking

Software and dependent hardware iSCSI adapters depend on VMkernel networking. If you use the software or dependent hardware iSCSI adapters, you must configure connections for the traffic between the iSCSI component and the physical network adapters. Configuring the network connection involves creating a virtual VMkernel interface for each physical network adapter and associating the interface with an appropriate iSCSI adapter.

If you use a single vSphere standard switch to connect VMkernel to multiple network adapters, change the port group policy, so that it is compatible with the iSCSI network requirements.

By default, for each virtual adapter on the vSphere standard switch, all network adapters appear as active. You must override this port group policy setup, so that each VMkernel interface maps to only one corresponding active NIC. For example

  • vmk1 maps to vmnic1
  • vmk2 maps to vmnic2

Procedure

  • Create a vSphere standard switch that connects VMkernel with physical network adapters designated for iSCSI traffic. The number of VMkernel adapters must correspond to the number of physical adapters on the vSphere standard switch
  • Log in to the vSphere Client and select the host from the inventory panel.
  • Click the Configuration tab and click Networking
  • Select the vSphere standard switch that you use for iSCSI and click Properties.
  • On the Ports tab, select an iSCSI VMkernel adapter and click Edit.
  • Click the NIC Teaming tab and select Override switch failover order.

iscsi

  • Designate only one physical adapter as active and move all remaining adapters to the Unused Adapters category. You will see a warning triangle against your iSCSI VMkernel port if you don’t.
  • Repeat Step 4 through Step 6 for each iSCSI VMkernel interface on the vSphere standard switch.
  • Next go to the switch properties and click Add and choose VMkernel

vmkernel

  • Type a name. Eg VMkernel-iSCSI

ISCSI1

  •  Enter an IP Address for this adapter

iscsi2

  • Finish and check Summary Page
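
The same networking setup can be scripted with PowerCLI. A minimal sketch, assuming a host called esxi01.lab.local with vmnic1 and vmnic2 set aside for iSCSI; all host, switch, port group and IP values are examples:

    Connect-VIServer -Server "vcenter.lab.local"
    $vmhost = Get-VMHost -Name "esxi01.lab.local"

    # Standard switch dedicated to iSCSI, with two physical uplinks
    $nics    = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical | Where-Object { $_.Name -in "vmnic1","vmnic2" }
    $vswitch = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch-iSCSI" -Nic $nics

    # One VMkernel adapter per uplink (the port groups are created automatically)
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "VMkernel-iSCSI-1" -IP 10.0.0.21 -SubnetMask 255.255.255.0
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "VMkernel-iSCSI-2" -IP 10.0.0.22 -SubnetMask 255.255.255.0

    # Override the failover order so that each VMkernel port has exactly one active NIC
    Get-VirtualPortGroup -VMHost $vmhost -Name "VMkernel-iSCSI-1" | Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -InheritFailoverOrder $false -MakeNicActive vmnic1 -MakeNicUnused vmnic2
    Get-VirtualPortGroup -VMHost $vmhost -Name "VMkernel-iSCSI-2" | Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -InheritFailoverOrder $false -MakeNicActive vmnic2 -MakeNicUnused vmnic1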

Setup Software iSCSI Adapter

  • Within the Host View, click the Configuration tab > Storage Adapters
  • Click Add to add a Software iSCSI Adapter
  • Right click the new Software iSCSI Adapter and select Properties

ISCSI3

  • Enable the adapter if it is not already
  • Open the Network Configuration tab
  • Add the new port group(s) associated with the iSCSI network

ISCSI4

  • Click the Dynamic Discovery tab

ISCSI5

  • Add the IP addresses of the iSCSI targets
  • Click Static Discovery and check the details in there

ISCSI6

  • Click Close
  • Rescan the attached disks
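
A PowerCLI sketch of the same adapter configuration, reusing the $vmhost variable from the networking sketch above. The vmk1/vmk2 names and the 10.0.0.100 array address are assumptions for illustration, and the port-binding step uses the Get-EsxCli -V2 interface, which needs a reasonably recent PowerCLI:

    # Enable the software iSCSI adapter on the host
    Get-VMHostStorage -VMHost $vmhost | Set-VMHostStorage -SoftwareIScsiEnabled $true

    # Bind the iSCSI VMkernel ports to the software adapter (esxcli iscsi networkportal add)
    $hba    = Get-VMHostHba -VMHost $vmhost -Type iScsi | Where-Object { $_.Model -like "*Software*" }
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    $esxcli.iscsi.networkportal.add.Invoke(@{ adapter = $hba.Device; nic = "vmk1" })
    $esxcli.iscsi.networkportal.add.Invoke(@{ adapter = $hba.Device; nic = "vmk2" })

    # Add the storage array as a dynamic (send targets) discovery address
    New-IScsiHbaTarget -IScsiHba $hba -Address "10.0.0.100" -Port 3260 -Type Send

    # Rescan so that the new targets and LUNs show up
    Get-VMHostStorage -VMHost $vmhost -RescanAllHba -RescanVmfs | Out-Null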

What if you have multiple adapters?

  • If your host has more than one physical network adapter for software and dependent hardware iSCSI, use the adapters for multipathing.
  • You can connect the software iSCSI adapter with any physical NICs available on your host. The dependent iSCSI adapters must be connected only with their own physical NICs.
  • Physical NICs must be on the same subnet as the iSCSI storage system they connect to.

The iSCSI adapter and physical NIC connect through a virtual VMkernel adapter, also called virtual network adapter or VMkernel port. You create a VMkernel adapter (vmk) on a vSphere switch (vSwitch) using 1:1 mapping between each virtual and physical network adapter.

One way to achieve the 1:1 mapping when you have multiple NICs is to designate a separate vSphere switch for each virtual-to-physical adapter pair. The following examples show configurations that use vSphere standard switches, but you can use distributed switches as well.

Capture1

If you use separate vSphere switches, you must connect them to different IP subnets. Otherwise, VMkernel adapters might experience connectivity problems and the host will fail to discover iSCSI LUNs.

An alternative is to add all NICs and VMkernel adapters to a single vSphere standard switch. In this case, you must override the default network setup and make sure that each VMkernel adapter maps to only one corresponding active physical adapter.

Capture2

General Information on iSCSI Adapters

http://www.electricmonk.org.uk/2012/04/18/using-esxi-with-iscsi-sans/

Identify storage provisioning methods

Overview of Storage Provisioning methods

Storage

Types of Storage

Local (Block Storage)

Local storage can be internal hard disks located inside your ESXi host, or it can be external storage systems located outside and connected to the host directly through protocols such as SAS or SATA. The host uses a single connection to a storage disk. On that disk, you can create a VMFS datastore, which you use to store virtual machine disk files. Although this storage configuration is possible, it is not a recommended topology: using single connections between storage arrays and hosts creates single points of failure (SPOF) that can cause interruptions when a connection becomes unreliable or fails.

ESXi supports a variety of internal or external local storage devices, including SCSI, IDE, SATA, USB, and SAS storage systems. Regardless of the type of storage you use, your host hides the physical storage layer from virtual machines.

Local

Networked Storage

Networked storage consists of external storage systems that your ESXi host uses to store virtual machine files remotely. Typically, the host accesses these systems over a high-speed storage network.
Networked storage devices are shared. Datastores on networked storage devices can be accessed by multiple hosts concurrently. ESXi supports the following networked storage technologies.

FC (Block Storage)

Stores virtual machine files remotely on an FC storage area network (SAN). FC SAN is a specialized high-speed network that connects your hosts to high-performance storage devices. The network uses Fibre Channel protocol to transport SCSI traffic from virtual machines to the FC SAN devices.
To connect to the FC SAN, your host should be equipped with Fibre Channel host bus adapters (HBAs). Unless you use Fibre Channel direct connect storage, you need Fibre Channel switches to route storage traffic.

FCOE (Block Storage)

If your host contains FCoE (Fibre Channel over Ethernet) adapters, you can connect to your shared Fibre Channel devices by using an Ethernet network.

FC

Internet SCSI (iSCSI) (Block Storage)

Stores virtual machine files on remote iSCSI storage devices. iSCSI packages SCSI storage traffic into the TCP/IP protocol so that it can travel through standard TCP/IP networks instead of the specialized FC network. With an iSCSI connection, your host serves as the initiator that communicates with a target, located in remote iSCSI storage systems. ESXi offers the following types of iSCSI connections:

  • Hardware iSCSI: your host connects to storage through a third-party adapter capable of offloading the iSCSI and network processing. Hardware adapters can be dependent or independent. This is shown on the left-hand adapter in the picture below
  • Software iSCSI: your host uses a software-based iSCSI initiator in the VMkernel to connect to storage. With this type of iSCSI connection, your host needs only a standard network adapter for network connectivity. This is shown on the right-hand adapter in the picture below

iSCSI
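
To confirm which iSCSI adapters (hardware or software) a host already has, and their IQNs, a quick PowerCLI check along these lines works; the host name is an example:

    # List the host's iSCSI adapters with their device names and IQNs
    Get-VMHost -Name "esxi01.lab.local" | Get-VMHostHba -Type iScsi |
        Select-Object Device, Model, Driver, IScsiName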

Network-attached Storage (NAS) (File Level Storage)

Stores virtual machine files on remote file servers accessed over a standard TCP/IP network. The NFS client built into ESXi uses Network File System (NFS) protocol version 3 to communicate with the NAS/NFS servers. For network connectivity, the host requires a standard network adapter.

NFS
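
For comparison with the block protocols above, mounting an NFS export as a datastore is a single PowerCLI call; the host, NFS server address, export path and datastore name below are all examples:

    # Mount an NFS export on a host as a datastore
    New-Datastore -VMHost (Get-VMHost -Name "esxi01.lab.local") -Nfs -Name "nfs-datastore01" -NfsHost "10.0.0.200" -Path "/vol/vmware_nfs"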

Comparison of Storage Features

storage2

Predictive and Adaptive Schemes for Datastores

When setting up storage for ESXi systems, before creating VMFS datastores, you must decide on the size and number of LUNs to provision. You can experiment using the predictive scheme and the adaptive scheme.

Predictive

  • Provision several LUNs with different storage characteristics.
  • Create a VMFS datastore on each LUN, labeling each datastore according to its characteristics.
  • Create virtual disks to contain the data for virtual machine applications in the VMFS datastores created on LUNs with the appropriate RAID level for the applications’ requirements.
  • Use disk shares to distinguish high-priority from low-priority virtual machines.

NOTE: Disk shares are relevant only within a given host. The shares assigned to virtual machines on one host have no effect on virtual machines on other hosts.

  • Run the applications to determine whether virtual machine performance is acceptable.
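
The disk shares mentioned above can be set per virtual disk from PowerCLI; a small sketch, with the VM name as an example:

    # Give a high-priority VM's disks a higher share of disk I/O on its host
    $vm = Get-VM -Name "sql01"
    Get-VMResourceConfiguration -VM $vm |
        Set-VMResourceConfiguration -Disk (Get-HardDisk -VM $vm) -DiskSharesLevel High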

Adaptive

When setting up storage for ESXi hosts, before creating VMFS datastores, you must decide on the number and size of LUNs to provision. You can experiment using the adaptive scheme.

  • Provision a large LUN (RAID 1+0 or RAID 5), with write caching enabled.
  • Create a VMFS on that LUN.
  • Create four or five virtual disks on the VMFS.
  • Run the applications to determine whether disk performance is acceptable
  • If performance is acceptable, you can place additional virtual disks on the VMFS. If performance is not acceptable, create a new, large LUN, possibly with a different RAID level, and repeat the process. Use migration so that you do not lose virtual machine data when you recreate the LUN.

Tools for provisioning storage

  • vClient
  • Web Client
  • vmkfstools
  • SAN Vendor Tools
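
As a small illustration of the scripted options, a thin-provisioned virtual disk can be added to a VM with PowerCLI (vmkfstools can do much the same directly on the host); the VM name and size are examples:

    # Add a 10GB thin-provisioned virtual disk to an existing VM
    New-HardDisk -VM (Get-VM -Name "sql01") -CapacityGB 10 -StorageFormat Thin
    # Roughly equivalent on the host shell: vmkfstools -c 10G -d thin <path-to-vmdk>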

VMware Link

http://www.vmware.com/files/pdf/techpaper/Storage_Protocol_Comparison.pdf


Apply VMware storage Best Practices

best-practice

Datastore supported features

ds

VMware supported storage related functionality

ds3

Storage Best Practices

  • Always follow the vendor's recommendations, whether that is EMC, NetApp, HP, etc.
  • Document all configurations
  • In a well-planned virtual infrastructure implementation, a descriptive naming convention aids in identification and mapping through the multiple layers of virtualization from storage to the virtual machines. A simple and efficient naming convention also facilitates configuration of replication and disaster recovery processes.
  • Make sure your SAN fabric is redundant (Multi Path I/O)
  • Separate networks for storage array management and storage I/O. This concept applies to all storage protocols but is very pertinent to Ethernet-based deployments (NFS, iSCSI, FCoE). The separation can be physical (subnets) or logical (VLANs), but must exist.
  • If leveraging an IP-based storage protocol I/O (NFS or iSCSI), you might require more than a single IP address for the storage target. The determination is based on the capabilities of your networking hardware.
  • With IP-based storage protocols (NFS and iSCSI) you channel multiple Ethernet ports together. NetApp refers to this function as a VIF. It is recommended that you create LACP VIFs over multimode VIFs whenever possible.
  • Use CAT 6 cabling rather than CAT 5
  • Enable Flow-Control (should be set to receive on switches and
    transmit on iSCSI targets)
  • Enable Spanning Tree Protocol with either RSTP or portfast enabled. Spanning Tree Protocol (STP) is a network protocol that ensures a loop-free topology for any bridged LAN
  • Configure jumbo frames end to end (MTU 9000 rather than 1500); a PowerCLI sketch is included after this list
  • Ensure Ethernet switches have the proper amount of port
    buffers and other internals to support iSCSI and NFS traffic
    optimally
  • Use Link Aggregation for NFS
  • Maximum of 2 TCP sessions per Datastore for NFS (1 Control Session and 1 Data Session)
  • Ensure that each HBA is zoned correctly to both SPs if using FC
  • Create RAID LUNs according to the Applications vendors recommendation
  • Use Tiered storage to separate High Performance VMs from Lower performing VMs
  • Choose virtual disk formats as required (Eager Zeroed Thick, Thick, Thin, etc.)
  • Choose RDMs or VMFS-formatted datastores dependent on supportability and the application and virtualisation vendors' recommendations
  • Utilise VAAI (vStorage APIs for Array Integration), supported by vSphere 5
  • No more than 15 VMs per Datastore
  • Extents are not generally recommended
  • Use de-duplication if you have the option. This will save storage by maintaining only one copy of duplicate data on the system
  • Choose the fastest storage Ethernet or FC adapter you can (dependent on cost/budget, etc.)
  • Enable Storage I/O Control
  • VMware highly recommend that customers implement “single-initiator, multiple storage target” zones. This design offers an ideal balance of simplicity and availability with FC and FCoE deployments.
  • Whenever possible, it is recommended that you configure storage networks as a single network that does not route. This model helps to ensure performance and provides a layer of data security.
  • Each VM creates a swap or pagefile that is typically 1.5 to 2 times the size of the amount of memory configured for each VM. Because this data is transient in nature, we can save a fair amount of storage and/or bandwidth capacity by removing this data from the datastore, which contains the production data. In order to accomplish this design, the VM’s swap or pagefile must be relocated to a second virtual disk stored in a separate datastore
  • It is the recommendation of NetApp, VMware, other storage vendors, and VMware partners that the partitions of VMs and the partitions of VMFS datastores are aligned to the blocks of the underlying storage array. You can find more information on VMFS and guest OS file system alignment in the various storage vendors' alignment documents
  • Failure to align the file systems results in a significant increase in storage array I/O in order to meet the I/O requirements of the hosted VMs
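
As referenced in the jumbo frames bullet above, a PowerCLI sketch for setting MTU 9000 on the iSCSI vSwitch and its VMkernel ports. The host, switch and vmk names are examples, and the physical switch ports and storage array must be configured for the same MTU for this to be truly end to end:

    # Set MTU 9000 on the iSCSI standard switch and its VMkernel adapters
    $vmhost = Get-VMHost -Name "esxi01.lab.local"
    Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch-iSCSI" | Set-VirtualSwitch -Mtu 9000 -Confirm:$false
    Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel |
        Where-Object { $_.Name -in "vmk1","vmk2" } |
        Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false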