Archive for October 2013

Windows Server 2012 Scale Out File Server

scales

Scale out File Server

Windows Server 2012 introduces the clustered Scale-Out File Server, which increases the reliability of file shares for application data by making them available on every cluster node at the same time. Scale-Out File Server differs from traditional file-server clustering technologies and isn’t recommended for workloads with a high volume of metadata operations, in which files are opened, closed, or renamed frequently.

In Windows Server 2012, the following clustered file servers are available:

  • Scale-Out File Server for application data (Scale-Out File Server)   This clustered file server is introduced in Windows Server 2012 and lets you store server application data, such as Hyper-V virtual machine files, on file shares, and obtain a similar level of reliability, availability, manageability, and high performance that you would expect from a storage area network. All file shares are online on all nodes simultaneously. File shares associated with this type of clustered file server are called scale-out file shares. This is sometimes referred to as active-active.
  • File Server for general use   This is the continuation of the clustered file server that has been supported in Windows Server since the introduction of Failover Clustering. This type of clustered file server, and thus all the shares associated with the clustered file server, is online on one node at a time. This is sometimes referred to as active-passive or dual-active. File shares associated with this type of clustered file server are called clustered file shares.

Key benefits provided by Scale-Out File Server in Windows Server 2012 include:

  • Active-Active file shares   All cluster nodes can accept and serve SMB client requests. By making the file share content accessible through all cluster nodes simultaneously, SMB 3.0 clusters and clients cooperate to provide transparent failover to alternative cluster nodes during planned maintenance and unplanned failures without service interruption.
  • Increased bandwidth   The maximum share bandwidth is the total bandwidth of all file server cluster nodes. Unlike previous versions of Windows Server, the total bandwidth is no longer constrained to the bandwidth of a single cluster node, but rather the capability of the backing storage system. You can increase the total bandwidth by adding nodes.
  • CHKDSK with zero downtime   CHKDSK in Windows Server 2012 is significantly enhanced to dramatically shorten the time a file system is offline for repair. Cluster Shared Volumes (CSVs) in Windows Server 2012 take this one step further and eliminate the offline phase. A CSV File System (CSVFS) can perform CHKDSK without impacting applications with open handles on the file system.
  • Cluster Shared Volume cache   CSVs in Windows Server 2012 introduce support for a read cache, which can significantly improve performance in certain scenarios, such as Virtual Desktop Infrastructure.
  • Simpler management   With Scale-Out File Servers, you create the Scale-Out File Server and then add the necessary CSVs and file shares. It is no longer necessary to create multiple clustered file servers, each with separate cluster disks, and then develop placement policies to ensure activity on each cluster node.

When to use Scale-Out File Server

You should not use Scale-Out File Server if your workload generates a high number of metadata operations, such as opening files, closing files, creating new files, or renaming existing files. A typical information worker would generate a lot of metadata operations. You should use a Scale-Out File Server if you are interested in the scalability and simplicity that it offers and you only require technologies that are supported with Scale-Out File Server. The following table shows the new capabilities in SMB 3.0, common Windows file systems, file server data management and applications, and if they are supported with Scale-Out File Server, or will require a traditional clustered file server:

Scale Out File Server

Review Failover Cluster Requirements

  • Scale-Out File Server is built on top of Failover Clustering, so any requirements for Failover Clustering apply to Scale-Out File Server. You should have an understanding of Failover Clustering before deploying Scale-Out File Server.
  • The storage configuration must be supported by Failover Clustering before you deploy Scale-Out File Server. You must successfully run the Cluster Validation Wizard before you add Scale-Out File Server.
  • Scale-Out File Server requires the use of Cluster Shared Volumes (CSVs). Since CSVs are not supported with the Resilient File System (ReFS), Scale-Out File Server cannot use ReFS.
  • Accessing a continuously available file share as a loopback share is not supported. For example, Microsoft SQL Server or Hyper-V storing their data files on SMB file shares must run on computers that are not members of the file server cluster that hosts those SMB file shares.

Review Storage Requirements

  • Fibre Channel Storage Area Network   You can use an existing Fibre Channel Storage Area Network as the storage subsystem for Scale-Out File Server.
  • iSCSI Storage Area Network   You can use an existing iSCSI Storage Area Network as the storage subsystem for Scale-Out File Server.
  • Storage Spaces   Storage Spaces is new in Windows Server 2012 and can also be used as the storage subsystem for Scale-Out File Server.
  • Clustered RAID controller   A clustered RAID controller is new in Windows Server 2012 and can be used as the storage subsystem for Scale-Out File Server.

Review Networking Requirements

  • Ensure that the network adapter configurations are consistent across all of the nodes in the Scale-Out File Server cluster.
  • Ensure that the network that carries the CSV redirection traffic has sufficient bandwidth.
  • Use the DNS dynamic update protocol for the cluster name and all of the cluster nodes. Ensure that the name of the Scale-Out File Server and the IP addresses of all of the network adapters on the client network in every cluster node are registered by using DNS dynamic update.

Deploy Scale Out File Server

To take full advantage of Scale-Out File Server, all servers running the server applications that use scale-out file shares should be running Windows Server 2012. If the server application is running on Windows Server 2008 or Windows Server 2008 R2, the servers will be able to connect to the scale-out file shares but will not take advantage of any of the new features. If the server application is running on Windows Server 2003, the server will get an access-denied error when connecting to the scale-out file share.
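If you want to check which SMB dialect an application server has actually negotiated against the share, a quick look from PowerShell on that application server is sketched below (a dialect of 3.00 means the SMB 3.0 features such as transparent failover are in play):

    # Run on the application server (e.g. a Hyper-V host) that has the scale-out file share mapped
    Get-SmbConnection |
        Select-Object ServerName, ShareName, Dialect, NumOpens |
        Format-Table -AutoSize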

Prerequisites

  • First of all, you will need 2 x Windows Server 2012 servers built, updated and ready to work with for the Windows Failover Cluster
  • You will need 2 virtual NICs on each Windows Server 2012 server: one for the Main Network and one for a Heartbeat network. Modify the provider order so the Main Network always comes first: in Network Connections, press Alt to show the menu bar, select Advanced > Advanced Settings, and move your Main Network to the top of the binding order

scaleout40

  • I set up an iSCSI Target Disk from another server for my Scale-Out File Server share. Please see the previous blog for instructions on how to do this
  • I also set up an iSCSI Target from another server for my Quorum Disk. Please see the previous blog for instructions on how to do this
  • (Optional) You can also add 3 basic virtual disks to your first server, which will be set up as a Storage Space in the steps below; leave them Online, Initialised and Unformatted in Disk Management on your server. I wanted to see if these could be added into the Failover Cluster pool as an experiment

scaleout48

  • When you have a default build of your servers, before adding any roles and features, I would take a snapshot so at least you can go back to a fresh build that worked! (Setting this up didn’t work too well for me the first time round and I ended up rebuilding servers and getting cross!)

Procedure

  • Log on to the first server as a member of the local Administrators group.
  • In the QUICK START section, click Add roles and features
  • On the Before you begin page of the Add Roles and Features Wizard, click Next.

Scaleout1

  • On the Select installation type page, click Role-based or feature-based installation, and then click Next.

Scaleout2

  • On the Select destination server page, select the appropriate server, and then click Next. The local server is selected by default.

Scaleout3

  • On the Select server roles page, expand File and Storage Services, expand File Services, and then select the File Server check box. Click Next.

Scaleout4

  • On the Select features page, select the Failover Clustering check box, and then click Next.

Scaleout5

  • Click OK on the pop-up box asking to add the required features

Scaleout6

  • On the Confirm installation selections page, click Install.

Scaleout7
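If you would rather script the role and feature installation than click through the wizard, something along these lines should do the same job (feature names as reported by Get-WindowsFeature; run it on each node):

    # Install the File Server role service and the Failover Clustering feature with management tools
    Install-WindowsFeature -Name FS-FileServer, Failover-Clustering -IncludeManagementTools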

  • Repeat the steps in this procedure for each server that will be added to the cluster
  • Next, in Server Manager, click Tools, and then click Failover Cluster Manager
  • Under the Management heading, click Validate Configuration
  • On the Before You Begin page, click Next

Scaleout8

  • On the Select Servers or a Cluster page, in the Enter name box, type the FQDN of one of the servers that will be part of the cluster, and then click Add. Repeat this step for each server that will be in the cluster

Scaleout9

  • Click OK to see the chosen servers

Scaleout10

  • On the Testing Options page, ensure that the Run all tests (recommended) option is selected, and then click Next.

Scaleout11

  • On the Confirmation page, click Next.

Scaleout12

  • The Validation tests will now run

Scaleout13

  • On the Summary page, ensure that the Create the cluster now using the validated nodes check box is selected, and then click Finish. View the report to make sure you do not need to fix anything before proceeding. The Create Cluster Wizard appears.

Scaleout14

  • On the Before You Begin page, click Next

Scaleout15

  • On the Access Point for Administering the Cluster page, in the Cluster Name box, type a name for the cluster, and choose an IP Address then click Next.

Scaleout16

  • On the Confirmation page, untick Add all eligible storage to the cluster, and then click Next.

Scaleout17

  • On the Summary page, click Finish.

Scaleout18
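The validation and cluster creation steps can also be driven from PowerShell. This is only a rough equivalent of the wizard pages above, and the cluster name and IP address are placeholders for my lab values:

    # Validate both nodes, then create the cluster without automatically adding storage
    # (-NoStorage matches unticking "Add all eligible storage to the cluster")
    Test-Cluster -Node dacvsof001, dacvsof002
    New-Cluster -Name SOFSCLUSTER -Node dacvsof001, dacvsof002 -StaticAddress 192.168.1.50 -NoStorage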

  • Right click on Disks in Failover Cluster Manager and select Add Disk

scaleout49

  • The 5GB Disk is my Quorum iSCSI Target Disk
  • The 15GB Disk is my Scale Out File Server iSCSI Target Disk
  • The 3 x 10GB Disks are the 3 basic unformatted virtual disks I added at the start of this procedure to my first server in order to try setting up a storage pool from within the Failover Cluster. Keep these unticked for now
  • You should now see the disks looking like the below

scaleout50

  • You should now be able to change the Quorum setting from Node Majority to Node and Disk Majority as per the instructions below, which is the recommended configuration for a 2 node Failover Cluster
  • Note the Quorum Disk cannot be a Cluster Shared Volume. Please click Quorum Disk to follow a link to more information
  • Right click on the Cluster name in Failover Cluster Manager and select More Actions > Configure Cluster Quorum Settings

scaleout42

  • Select Quorum Configuration Options

scaleout43

  • Select Quorum Witness

scaleout44

  • Configure Storage Witness to be your 5GB Drive

scaleout45

  • Confirmation

scaleout46

  • Summary

scaleout47
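If you prefer to add the disks and change the quorum from PowerShell, a hedged sketch is below. The disk resource name will be different in your cluster, so check it with Get-ClusterResource first:

    # Review the eligible disks, then add them to the cluster
    Get-ClusterAvailableDisk
    Get-ClusterAvailableDisk | Add-ClusterDisk

    # Switch to Node and Disk Majority using the 5GB disk as the witness.
    # "Cluster Disk 1" is a placeholder - check the real resource name in Failover Cluster Manager
    Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"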

  • Next, go to Failover Cluster Manager > Storage > Pools and select New Pool
  • Note that once physical disks have been added to a pool, they are no longer directly usable by the rest of Windows – they have been virtualized, that is, dedicated to the pool in their entirety

Scaleout21

  • Specify a Name for the Storage Pool and choose the Storage Subsystem that is available to the cluster and click Next
  • Select the Physical Disks for the Storage Pool
  • Note the disks should be Online, Initialised but unallocated. If you don’t see any disks, you need to go into Server Manager and delete the volumes

Scaleout23

  • Confirm Selections

Scaleout24

  • Click Create and you will see the wizard running through the tasks

Scaleout25
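For reference, the same pool can be created with the Storage cmdlets. This is a sketch only; the subsystem filter and the pool name are assumptions, so check what Get-StorageSubSystem returns in your cluster:

    # Create a clustered storage pool from every disk that is currently poolable
    $subsystem = Get-StorageSubSystem | Where-Object FriendlyName -like "Clustered*"
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "ClusterPool" -StorageSubSystemFriendlyName $subsystem.FriendlyName -PhysicalDisks $disks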

  • The next step is to create a Virtual Disk (storage space) that will be associated with a storage pool. In the Failover Cluster Manager, select the storage pool that will be supporting the Virtual Disk. Right-click and choose New Virtual Disk

Scaleout35

  • Select the Storage Pool

Scaleout27

  • Specify the Virtual Disk Name

Scaleout28

  • Select the Storage Layout. (Simple or Mirror; Parity is not supported in a Failover Cluster) and click Next

Scaleout29

  • Specify the Provisioning Type

Scaleout30

  • Specify the size of your virtual disk – I chose Maximum

Scaleout31

  • Check and Confirm and click Create

Scaleout32

  • View Results and make sure Create a Volume when this wizard closes is ticked

Scaleout33

  • The volume wizard opens

Scaleout34

  • Select the Cluster and your disk

Scaleout36

  • Specify the size of the volume

Scaleout37

  • Choose a drive letter

Scaleout38

  • Select File System Settings

Scaleout39

  • Confirm and Create

Scaleout40
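The virtual disk and volume wizards above map roughly onto the Storage cmdlets. The sketch below uses placeholder pool, disk and label names, and sticks to Mirror because, as noted earlier, Parity is not supported in a Failover Cluster:

    # Create a fixed, mirrored storage space using all of the free space in the pool
    $vdisk = New-VirtualDisk -StoragePoolFriendlyName "ClusterPool" -FriendlyName "SOFSDisk" `
        -ResiliencySettingName Mirror -ProvisioningType Fixed -UseMaximumSize

    # Initialise the new disk, then create and format an NTFS partition on it
    $disk = $vdisk | Get-Disk
    Initialize-Disk -Number $disk.Number -PartitionStyle GPT
    New-Partition -DiskNumber $disk.Number -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "SOFSData" -Confirm:$false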

  • You should now see this Virtual Disk Storage space as a drive in Windows
  • Open Failover Cluster Manager.
  • Right-click the cluster, and then click Configure Role.
  • On the Before You Begin page, click Next.
  • On the Select Role page, click File Server, and then click Next.
  • On the File Server Type page, select the Scale-Out File Server for application data option, and then click Next.

Scaleout43

  • On the Client Access Point page, in the Name box, type a NetBIOS name that will be used to access the Scale-Out File Server, and then click Next
  • On the Confirmation page, confirm your settings, and then click Next.
  • On the Summary page, click Finish.

Scaleout47
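The same role can be created with a single PowerShell cmdlet; the name below is just the client access point name I would have typed in the wizard, so substitute your own:

    # Create the Scale-Out File Server role with a distributed network name of "SOFS"
    Add-ClusterScaleOutFileServerRole -Name SOFS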

  • Click Start, type Failover Cluster, and then click Failover Cluster Manager
  • Expand the cluster, and then click Roles.
  • Right-click the file server role, and then click Add File Share.
  • On the Select the profile for this share page, click SMB Share – Applications, and then click Next.
  • On the Select the server and path for this share page, click the cluster shared volume, and then click Next.
  • On the Specify share name page, in the Share name box, type a name, and then click Next.
  • On the Configure share settings page, ensure that the Enable continuous availability check box is selected, and then click Next.
  • On the Specify permissions to control access page, click Customize permissions, grant the following permissions, and then click Next:
  • If you are using this Scale-Out File Server file share for Hyper-V, all Hyper-V computer accounts, the SYSTEM account, and all Hyper-V administrators must be granted full control on the share and the file system.
  • If you are using Scale-Out File Server on Microsoft SQL Server, the SQL Server service account must be granted full control on the share and the file system
  • On the Confirm selections page, click Create.
  • On the View results page, click Close
  • Note: You should not use access-based enumeration on file shares for Scale-Out File Server because of the increased metadata traffic that is generated on the coordinator node.
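Share creation can also be scripted. Below is a hedged sketch for the Hyper-V case; the path, share name and accounts are placeholders, and the icacls call is just one way of applying matching permissions to the file system:

    # Create the folder on the CSV, then share it with continuous availability enabled,
    # granting the Hyper-V hosts and administrators full control on the share
    New-Item -ItemType Directory -Path C:\ClusterStorage\Volume1\Shares\VMStore -Force | Out-Null
    New-SmbShare -Name VMStore -Path C:\ClusterStorage\Volume1\Shares\VMStore `
        -FullAccess 'DOMAIN\Hyperv01$', 'DOMAIN\Hyperv02$', 'DOMAIN\Hyper-V Admins' `
        -ContinuouslyAvailable $true

    # Grant the same accounts full control on the underlying folder (NTFS permissions)
    icacls C:\ClusterStorage\Volume1\Shares\VMStore /grant 'DOMAIN\Hyperv01$:(OI)(CI)F' 'DOMAIN\Hyperv02$:(OI)(CI)F'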

Useful Links

http://technet.microsoft.com/en-us/library/jj612868.aspx

http://support.microsoft.com/kb/2813005/en-us

Changing the Blocksize of NTFS Drives and Iometer Testing

index

All file systems that Windows uses to organize the hard disk are based on cluster (allocation unit) size, which represents the smallest amount of disk space that can be allocated to hold a file. The smaller the cluster size, the more efficiently your disk stores information.

If you do not specify a cluster size for formatting, Windows XP Disk Management bases the cluster size on the size of the volume. Windows XP uses default values if you format a volume as NTFS by either of the following methods:

  • By using the format command from the command line without specifying a cluster size.
  • By formatting a volume in Disk Management without changing the Allocation Unit Size from Default in the Format dialog box.

The maximum default cluster size under Windows XP is 4 kilobytes (KB) because NTFS file compression is not possible on drives with a larger allocation size. The Format utility never uses clusters that are larger than 4 KB unless you specifically override that default either by using the /A: option for command-line formatting or by specifying a larger cluster size in the Format dialog box in Disk Management.

Blocksize

What’s the difference between doing a Quick Format and a Full Format?

http://support.microsoft.com/kb/302686

Procedure

  • To check what cluster size you are using already, type the line below into a command prompt
  • fsutil fsinfo ntfsinfo <driveletter>:
  • You can see that this drive I am using has a cluster size of 32K. Normally Windows drives default to 4K

Blocksize

  • Remember that the following procedure will reformat your drive and wipe out any data on it
  • Type format <drive>: /fs:ntfs /a:64k
  • In this command, <drive> is the drive you want to format, and /a:<clustersize> is the cluster size you want to assign to the volume: 2K, 4K, 8K, 16K, 32K, or 64K. However, before you override the default cluster size for a volume, be sure to test the proposed modification via a benchmarking utility on a non-production machine that closely simulates the intended target.
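If you would rather use PowerShell than the format command, a roughly equivalent sketch is below. The drive letter and label are placeholders, and like the command above this wipes the volume:

    # Format drive D: with NTFS and a 64 KB allocation unit (cluster) size
    Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 64KB -NewFileSystemLabel "Data" -Confirm:$false

    # Check the resulting "Bytes Per Cluster" value afterwards
    fsutil fsinfo ntfsinfo D: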

Other Information

  • As a general rule there’s no dependency between the I/O size and NTFS cluster size in terms of performance. The NTFS cluster size affects the size of the file system structures which track where files are on the disk, and it also affects the size of the freespace bitmap. But files themselves are normally stored contiguously, so there’s no more effort required to read a 1MB file from the disk whether the cluster size is 4K or 64K.
  • In one case the file header says “the file starts at sector X and takes 256 clusters” and in the other case the header says “the file starts at sector X and takes 16 clusters”. The system will need to perform the same number of reads on the file in either case, no matter what the I/O size is. For example, if the I/O size is 16K, it will take 64 reads to fetch the 1MB file above regardless of the cluster size.
  • In a heavily fragmented file system the cluster size may start to affect performance, but in that case you should run a disk defragmenter such as the built-in Windows defragmenter or Diskeeper.
  • On a drive that performs a lot of file additions/deletions or file extensions, cluster size can have a performance impact because of the number of I/Os required to update the file system metadata (bigger clusters generally mean fewer I/Os). But that’s independent of the I/O size used by the application – the I/Os to update the metadata are part of NTFS itself and aren’t something that the application performs.
  • If your hard drive is formatted NTFS, you can’t use NTFS compression if you raise the cluster size above 4,096 bytes (4 KB)
  • Also keep in mind that increasing the cluster size can potentially waste more hard drive space

Iometer Testing on different Block Sizes

The following 9 tests were carried out on one Windows Server 2008 R2 server (4 vCPUs and 4GB RAM) which is used to page Insurance Modelling data onto a D Drive located on the local disk of a VMware host server. The disk is an IBM 300GB 10K 6Gbps SAS 2.5” SFF Slim-HS HDD

The Tests

iometertesting

The Testing Spec in Iometer

Only the block size was adjusted between tests – this is the Transfer Request Size in the spec below

spec

Testing and Results

  • 4K Block Size on Disk
  • 4K BLOCK SIZE 100% SEQUENTIAL 70% WRITE AND 30% READ

dev70-igloo-ea -4k

  • 4K Block Size on Disk
  • 32K BLOCK SIZE 100% SEQUENTIAL 70% WRITE AND 30% READ

dev70-igloo-ea-32k

  • 4K Block Size on Disk
  • 64K BLOCK SIZE 100% SEQUENTIAL 70% WRITE AND 30% READ

dev70-igloo-ea-64k

  • 32K Block Size on Disk
  • 4K BLOCK SIZE 100% SEQUENTIAL 70% WRITE AND 30% READ

dev70-igloo-ea -32k-4k

  • 32K Block Size on Disk
  • 32K BLOCK SIZE 100% SEQUENTIAL 70% WRITE AND 30% READ

dev70-igloo-ea -32k-32k

  • 32K Block Size on Disk
  • 64K BLOCK SIZE 100% SEQUENTIAL 70% WRITE AND 30% READ

dev70-igloo-ea -32k-64k

  • 64K Block Size on Disk
  • 4K BLOCK SIZE 100% SEQUENTIAL 70% WRITE AND 30% READ

dev70-igloo-ea 64k-4k

  • 64K Block Size on Disk
  • 32K BLOCK SIZE 100% SEQUENTIAL 70% WRITE AND 30% READ

dev70-igloo-ea 64k-32k

  • 64K Block Size on Disk
  • 64K BLOCK SIZE 100% SEQUENTIAL 70% WRITE AND 30% READ

dev70-igloo-ea 64k-64k

The Results

results

The best approach seems to be to match the expected I/O size to the disk block size in order to achieve the highest throughput, e.g. 32K workloads with a 32K block size and 64K workloads with a 64K block size.

Fujitsu Paper (Worth a read)

https://sp.ts.fujitsu.com/dmsp/Publications/public/wp-basics-of-disk-io-performance-ww-en.pdf

Storage Spaces in Windows Server 2012

Storage

What are Storage Spaces?

A technology in Windows and Windows Server that enables you to virtualize storage by grouping industry-standard disks into storage pools, and then creating virtual disks called storage spaces from the available capacity in the storage pools.

Storage Spaces enables cost-effective, highly available, scalable, and flexible storage solutions for business-critical (virtual or physical) deployments. Storage Spaces delivers sophisticated storage virtualization capabilities, which empower customers to use industry-standard storage for single computer and scalable multi-node deployments. It is appropriate for a wide range of customers, including enterprise and cloud hosting companies, which use Windows Server for highly available storage that can cost-effectively grow with demand.

With Storage Spaces the Windows storage stack has been fundamentally enhanced to incorporate two new abstractions:

  • Storage pools. A collection of physical disks that enable you to aggregate disks, expand capacity in a flexible manner, and delegate administration.
  • Storage spaces. Virtual disks created from free space in a storage pool. Storage spaces have such attributes as resiliency level, storage tiers, fixed provisioning, and precise administrative control.

Storage Spaces is manageable through the Windows Storage Management API in Windows Management Instrumentation (WMI) and Windows PowerShell, and through the File and Storage Services role in Server Manager. Storage Spaces is completely integrated with failover clustering for high availability, and it is integrated with CSV for scale-out deployments

Important functionality

Storage Spaces includes the following features:

  • Storage pools.

Storage pools are the fundamental building blocks for Storage Spaces. Storage administrators are already familiar with this concept, obviating the need to learn a new model. They can flexibly create storage pools based on the needs of the deployment. For example, given a set of physical disks, an administrator can create one pool (by using all the available physical disks) or multiple pools (by dividing the physical disks as required). Furthermore, to maximize the value from storage hardware, the administrator can combine hard disks and solid-state drives (SSDs) in the same pool, using storage tiers to move frequently accessed portions of files to SSD storage, and using write-back caches to buffer small random writes to SSD storage. Pools can be expanded dynamically by simply adding additional drives, thereby seamlessly scaling to cope with unceasing data growth.

  • Resilient storage.

Storage Spaces provides three storage layouts (also known as resiliency types):

  • Mirror. Data is duplicated on two or three physical disks, increasing reliability, but reducing capacity. This storage layout requires at least two disks to protect you from a single disk failure, or at least five disks to protect you from two simultaneous disk failures.
  • Parity. Data and parity information are striped across physical disks, increasing reliability, but somewhat reducing capacity. This storage layout requires at least three disks to protect you from a single disk failure and at least seven disks to protect you from two disk failures.
  • Simple (no resiliency). Data is striped across physical disks, maximizing capacity and increasing throughput, but decreasing reliability. This storage layout requires at least one disk and does not protect you from a disk failure.

Additionally, Storage Spaces can automatically rebuild mirror and parity spaces in which a disk fails by using dedicated disks that are reserved for replacing failed disks (hot spares), or more rapidly by using spare capacity on other drives in the pool. Storage Spaces also includes background scrubbing and intelligent error correction to allow continuous service availability despite storage component failures. In the event of a power failure or cluster failover, the integrity of data is preserved so that recovery happens quickly and does not result in data loss.
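To make the layouts concrete, the sketch below shows how each one maps onto the -ResiliencySettingName parameter of New-VirtualDisk; the pool name, friendly names and sizes are placeholders:

    # Simple (striped, no resiliency), two-way mirror and single-parity spaces from the same pool
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "ScratchSpace" -ResiliencySettingName Simple -UseMaximumSize
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "MirrorSpace" -ResiliencySettingName Mirror -Size 100GB
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "ParitySpace" -ResiliencySettingName Parity -Size 100GB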

  • Continuous availability.

Storage Spaces is fully integrated with failover clustering, which allows it to deliver continuously available service deployments. One or more pools can be clustered across multiple nodes within a single cluster. Storage spaces can then be instantiated on individual nodes, and the storage will seamlessly fail over to a different node when necessary (in response to failure conditions or due to load balancing). Integration with CSVs permits scale-out access to data.

  • Storage tiers.

Storage Spaces in Windows Server 2012 R2 Preview combines the best attributes of SSDs and hard disk drives (HDDs) by enabling the creation of virtual disks composed of two tiers of storage – an SSD tier for frequently accessed data, and a HDD tier for less-frequently accessed data. Storage Spaces transparently moves data at a sub-file level between the two tiers based on how frequently data is accessed. As a result, storage tiers can dramatically increase performance for the most used (“hot”) data by moving it to SSD storage, without sacrificing the ability to store large quantities of data on inexpensive HDDs.

  • Write-back cache.

Storage Spaces in Windows Server 2012 R2 Preview supports creating a write-back cache that uses a small amount of space on existing SSDs in the pool to buffer small random writes. Random writes, which often dominate common enterprise workloads, are directed to SSDs and later are written to HDDs.

  • Operational simplicity.

The Windows Storage Management API, WMI, and Windows PowerShell permit full scripting and remote management. Storage Spaces can also be easily managed through the File and Storage Services role in Server Manager. Storage Spaces also provides notifications when the amount of available capacity in a storage pool hits a configurable threshold.

  • Multitenancy.

Administration of storage pools can be controlled through access control lists (ACLs) and delegated on a per-pool basis, thereby supporting hosting scenarios that require tenant isolation. Storage Spaces follows the familiar Windows security model; therefore, it can be fully integrated with Active Directory Domain Services.

Requirements

Storage Spaces has the following requirements:

  • Windows Server 2012 R2 Preview, Windows Server 2012, Windows 8.1 Preview, or Windows 8.
  • Serial ATA (SATA) or Serial Attached SCSI (SAS) connected disks, optionally in a just-a-bunch-of-disks (JBOD) enclosure. RAID adapters, if used, must have all RAID functionality disabled and must not obscure any attached devices, including enclosure services provided by an attached JBOD
  • Consumers can use USB drives with Storage Spaces, though USB 3 drives are recommended to ensure a high level of performance. USB 2 drives will decrease performance – a single USB 2 hard drive can saturate the bandwidth available on the shared USB bus, limiting performance when multiple drives are attached to the same USB 2 controller. When using USB 2 drives, plug them directly into different USB controllers on your computer, do not use USB hubs, and add USB 2 drives to a separate storage pool used only for storage spaces that do not require a high level of performance
  • For shared-storage deployments on failover clusters: two or more servers running Windows Server 2012 R2 Preview or Windows Server 2012, the requirements specified for failover clustering and Cluster Shared Volumes (CSV), and SAS-connected JBODs that comply with Windows Certification requirements

What are the recommended configuration limits?

In Windows Server 2012, the following are the recommended configuration limits:

  • Up to 160 physical disks in a storage pool; you can, however, have multiple pools of 160 disks.
  • Up to 480 TB of capacity in a single storage pool.
  • Up to 128 storage spaces in a single storage pool.
  • In a clustered configuration, up to four storage pools per cluster.

FAQs

http://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx

Deploying Storage Spaces

In this example I will create a Storage Space from a storage pool containing 3 disks

Storage Spaces4

Procedure 

  • Go to Server Manager > File and Storage Services > Storage Pools
  • Click Tasks and Select New Storage Pool
  • Note that once physical disks have been added to a pool, they are no longer directly usable by the rest of Windows – they have been virtualized, that is, dedicated to the pool in their entirety

Scaleout21

  • Specify a Name for the Storage Pool and choose the Storage Subsystem that is available

storagespaces3

  • Select the Physical Disks for the Storage Pool
  • Note the disks should be Online, Initialised but unallocated. If you don’t see any disks, you need to go into Server Manager and delete the volumes

Scaleout23

  • Confirm Selections

Scaleout24

  • Click Create and you will see the wizard running through the tasks

Scaleout25

  • The next step is to create a Virtual Disk (storage space) that will be associated with a storage pool. In Server Manager, select the storage pool that will be supporting the Virtual Disk. Right-click and choose New Virtual Disk

Scaleout35

  • Select the Storage Pool

Scaleout27

  • Specify the Virtual Disk Name

Scaleout28

  • Select the Storage Layout. (Simple or Mirror; Parity is not supported in a Failover Cluster) and click Next

Scaleout29

  • Specify the Provisioning Type

Scaleout30

  • Specify the size of your virtual disk – I chose Maximum

Scaleout31

  • Check and Confirm and click Create

Scaleout32

  • View Results and make sure Create a Volume when this wizard closes is ticked

Scaleout33

  • The volume wizard opens

Scaleout34

  • Select the server and your disk

Scaleout36

  • Specify the size of the volume

Scaleout37

  • Choose a drive letter

Scaleout38

  • Select File System Settings

Scaleout39

  • Confirm and Create

Scaleout40

  • You should now see this Virtual Disk Storage space as a drive in Windows

 

Cluster Shared Volumes in Windows Server 2012

Cluster

What are Cluster Shared Volumes?

Cluster Shared Volumes (CSVs) in a Windows Server 2012 failover cluster allow multiple nodes in the cluster to simultaneously have read-write access to the same LUN (disk) that is provisioned as an NTFS volume. With CSVs, clustered roles can fail over quickly from one node to another node without requiring a change in drive ownership, or dismounting and remounting a volume. CSVs also help simplify managing a potentially large number of LUNs in a failover cluster.

CSVs provide a general-purpose, clustered file system in Windows Server 2012, which is layered above NTFS. They are not restricted to specific clustered workloads. (In Windows Server 2008 R2, CSVs only supported the Hyper-V workload.) CSV applications include:

  • Clustered virtual hard disk (VHD) files for clustered Hyper-V virtual machines
  • Scale-out file shares to store application data for the Scale-Out File Server role. Examples of the application data for this role include Hyper-V virtual machine files and Microsoft SQL Server data

Other Details

  • At this time, CSVs do not support the Microsoft SQL Server clustered workload.
  • External authentication dependencies for CSVs have been removed
  • CSVs support the functional improvements in chkdsk
  • CSVs interoperate with antivirus and backup applications
  • CSVs are also now integrated with general storage features such as BitLocker and Storage Spaces
  • Cluster Shared Volumes (CSVs), system volumes, dynamic disks, and Resilient File System (ReFS) are not eligible for data deduplication

Benefits of using Cluster Shared Volumes in a failover cluster

Cluster Shared Volumes provides the following benefits in a failover cluster:

  • The configuration of clustered virtual machines is much simpler than before.
  • You can reduce the number of LUNs (disks) required for your virtual machines, instead of having to manage one LUN per virtual machine, which was previously the recommended configuration (because the LUN was the unit of failover). Many virtual machines can use a single LUN and can fail over without causing the other virtual machines on the same LUN to also fail over.
  • You can make better use of disk space, because you do not need to place each Virtual Hard Disk (VHD) file on a separate disk with extra free space set aside just for that VHD file. Instead, the free space on a Cluster Shared Volume can be used by any VHD file on that volume.
  • You can more easily track the paths to VHD files and other files used by virtual machines. You can specify the path names, instead of identifying disks by drive letters (limited to the number of letters in the alphabet) or identifiers called GUIDs (which are hard to use and remember). With Cluster Shared Volumes, the path appears to be on the system drive of the node, under the \ClusterStorage folder. However, this path is the same when viewed from any node in the cluster.
  • If you use a few Cluster Shared Volumes to create a configuration that supports many clustered virtual machines, you can perform validation more quickly than you could with a configuration that uses many LUNs to support many clustered virtual machines. With fewer LUNs, validation runs more quickly. (You perform validation by running the Validate a Configuration Wizard in the snap-in for failover clusters.)
  • There are no special hardware requirements beyond what is already required for storage in a failover cluster (although Cluster Shared Volumes require NTFS).
  • Resiliency is increased, because the cluster can respond correctly even if connectivity between one node and the SAN is interrupted, or part of a network is down. The cluster will re-route the Cluster Shared Volumes communication through an intact part of the SAN or network.

How to Configure a Clustered Storage Space in Windows Server 2012

Prerequisites

  • A minimum of three physical drives, with at least 4 gigabytes (GB) capacity each, are required to create a storage pool in a Failover Cluster.
  • The clustered storage pool MUST be comprised of Serial Attached SCSI (SAS) connected physical disks. Layering any form of storage subsystem, whether an internal RAID card or an external RAID box, regardless of being directly connected or connected via a storage fabric, is not supported.
  • All physical disks used to create a clustered pool must pass the Failover Cluster validation tests.
  • To run cluster validation tests: open the Failover Cluster Manager interface (cluadmin.msc) and select the Validate Cluster option
  • Clustered storage spaces must use fixed provisioning.
  • Simple and mirror storage spaces are supported for use in Failover Cluster. Parity Spaces are not supported.
  • The physical disks used for a clustered pool must be dedicated to the pool. Boot disks should not be added to a clustered pool nor should a physical disk be shared among multiple clustered pools.
  • Storage spaces formatted with ReFS cannot be added to the Cluster Shared Volume (CSV)

Procedure

  • Go to Server Manager > File and Storage Services > Storage Pools and Select New Pool
  • Note that once physical disks have been added to a pool, they are no longer directly usable by the rest of Windows – they have been virtualized, that is, dedicated to the pool in their entirety

Scaleout21

  • Specify a Name for the Storage Pool and choose the Storage Subsystem that is available to the cluster and click Next
  • Select the Physical Disks for the Storage Pool
  • Note the disks should be Online, Initialised but unallocated. If you don’t see any disks, you need to go into Server Manager and delete the volumes

Scaleout23

  • Confirm Selections

Scaleout24

  • Click Create and you will see the wizard running through the tasks

Scaleout25

  • The next step is to create a Virtual Disk (storage space) that will be associated with a storage pool. In the Failover Cluster Manager, select the storage pool that will be supporting the Virtual Disk. Right-click and choose New Virtual Disk

Scaleout35

  • Select the Storage Pool

Scaleout27

  • Specify the Virtual Disk Name

Scaleout28

  • Select the Storage Layout. (Simple or Mirror; Parity is not supported in a Failover Cluster) and click Next

Scaleout29

  • Specify the Provisioning Type

Scaleout30

  • Specify the size of your virtual disk – I chose Maximum

Scaleout31

  • Check and Confirm and click Create

Scaleout32

  • View Results and make sure Create a Volume when this wizard closes is ticked

Scaleout33

  • The volume wizard opens

Scaleout34

  • Select the Cluster and your disk

Scaleout36

  • Specify the size of the volume

Scaleout37

  • Choose a drive letter

Scaleout38

  • Select File System Settings

Scaleout39

  • Confirm and Create

Scaleout40

  • You should now see this Virtual Disk Storage space as a drive in Windows
  • In Failover Cluster Manager, expand ClusterName, expand Storage, and then click Disks
  • Right-click a cluster disk, and then click Add to Cluster Shared Volumes. The Assigned To column changes to Cluster Shared Volume.

cluster
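The same step is a one-liner in PowerShell; the disk resource name below is a placeholder, so check the real name with Get-ClusterResource or in Failover Cluster Manager first:

    # Convert the clustered disk into a Cluster Shared Volume; it then appears under C:\ClusterStorage
    Add-ClusterSharedVolume -Name "Cluster Virtual Disk (SOFSDisk)"
    Get-ClusterSharedVolume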

 

 

Installing and Configuring iSCSI Target Server on Windows Server 2012

iscsi

What is iSCSI Target Server?

iSCSI Target Server allows your Windows Server to share block storage remotely. iSCSI leverages the Ethernet network and does not require any specialized hardware. There is a brand new UI integrated with Server Manager, along with 20+ cmdlets for easy management.

iSCSI Terms

  • iSCSI:

An industry-standard protocol that allows block storage to be shared over Ethernet. The server that shares the storage is called the iSCSI Target. The server (machine) that consumes the storage is called the iSCSI initiator. Typically, the iSCSI initiator is an application server. For example, if an iSCSI Target provides storage to a SQL Server, the SQL Server is the iSCSI initiator in that deployment.

  • Target:

It is an object that allows the iSCSI initiator to make a connection. The Target keeps track of the initiators that are allowed to connect to it. The Target also keeps track of the iSCSI virtual disks that are associated with it. Once the initiator establishes a connection to the Target, all the iSCSI virtual disks associated with the Target will be accessible by the initiator.

  • iSCSI Target Server:

The server that runs the iSCSI Target. It is also the name of the iSCSI Target role service in Windows Server 2012.

  • iSCSI virtual disk:

It is also referred to as an iSCSI LUN. It is the object that can be mounted by the iSCSI initiator. The iSCSI virtual disk is backed by a VHD file.

  • iSCSI connection:

The iSCSI initiator makes a connection to the iSCSI Target by logging on to a Target. There can be multiple Targets on the iSCSI Target Server, and each Target can be accessed by a defined list of initiators. Multiple initiators can make connections to the same Target; however, this type of configuration is only supported with clustering, because when multiple initiators connect to the same Target, all of the initiators can read and write to the same set of iSCSI virtual disks. If there is no clustering (or an equivalent process) to govern the disk access, corruption will occur. With clustering, only one machine is allowed to access the iSCSI virtual disk at a time.

  • IQN:

It is a unique identifier of the Target or initiator. The Target IQN is shown when it is created on the server. The initiator IQN can be found by typing a simple “iscsicli” command in a command window.
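On Windows Server 2012 you can also read the local initiator IQN from PowerShell (assuming the Storage module’s Get-InitiatorPort cmdlet is available):

    # The NodeAddress property is the local iSCSI initiator's IQN
    (Get-InitiatorPort).NodeAddress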

  • Loopback:

There are cases where you want to run the initiator and Target on the same machine; this is referred to as “loopback”. In Windows Server 2012, it is a supported configuration. In a loopback configuration, you can provide the local machine name to the initiator for discovery, and it will list all the Targets which the initiator can connect to. Once connected, the iSCSI virtual disk will be presented to the local machine as a newly mounted disk. There will be a performance impact on the I/O, since it travels through the iSCSI initiator and Target software stack, compared with other local I/O. One use case for this configuration is to have initiators writing data to the iSCSI virtual disk, then mount those disks on the Target server (using loopback) to check the data in read mode.

Instructions

The aim of this particular blog is to configure an iSCSI Target disk that my Windows Server 2012 Failover Cluster can use as its Quorum Disk, so we will be configuring a 5GB Quorum Disk which we will then present to the Failover Cluster servers

  • Open Server Manager and click Add Roles and Features

ISCSI1

  • Choose Role based or Feature based installation

iSCSI2

  • Select Destination Server

iSCSI3

  • Select Server Roles > File and Storage Services > File and iSCSI Services > iSCSI Target Server

iSCSI4

  • Add Features that are required for iSCSI Target Server (None ticked here)

iSCSI5

  • Confirm Installation Selections

iSCSI6

  • To complete the iSCSI Target Server configuration, go to Server Manager and click File and Storage Services > iSCSI
  • Go to iSCSI Virtual Disks, click “Launch the New Virtual Disk wizard to create a virtual disk”, and walk through the virtual disk and target creation
  • Select an iSCSI virtual disk location

iSCSI7

  • Specify iSCSI virtual disk name

iSCSI8

  • Specify iSCSI virtual disk size

iSCSI9

  • Assign iSCSI Target

iSCSI10

  • Specify Target Name. Underscores are not allowed but it will change them for you

iSCSI12

  • Specify Access Servers

iSCSI14

  • Select a method to identify the initiator

iSCSI13

  • Click Browse and type in the name of the servers which will need to access this virtual disk
  • I have added my 2 Windows Failover Cluster VMs which are called dacvsof001 and dacvsof002

iSCSI15

  • Enable Authentication

iSCSI16

  • Confirm Selections

iSCSI17

  • View Results

iSCSI18
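For completeness, the target-side steps above can be scripted roughly as follows. The paths, target name and initiator IDs are placeholders, and the parameter names follow the Windows Server 2012 iSCSITarget module (later versions renamed some of them), so verify with Get-Command if your build differs:

    # Install the iSCSI Target Server role service
    Install-WindowsFeature -Name FS-iSCSITarget-Server

    # Create a 5GB virtual disk, a target restricted to the two cluster nodes,
    # and map the virtual disk to that target
    New-IscsiVirtualDisk -Path C:\iSCSIVirtualDisks\Quorum.vhd -Size 5GB
    New-IscsiServerTarget -TargetName ClusterQuorum `
        -InitiatorIds "DNSName:dacvsof001.domain.local", "DNSName:dacvsof002.domain.local"
    Add-IscsiVirtualDiskTargetMapping -TargetName ClusterQuorum -Path C:\iSCSIVirtualDisks\Quorum.vhd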

  • Next we need to go to the first Failover Cluster Server dacvsof001 and add the disk
  • On dacvsof001, open Server Manager, click Tools and select iSCSI Initiator. When you select this, you will get the following message. Click Yes

iSCSI19

  • Type in the Target Server address (the server you created the Virtual Disk on) and click Quick Connect

iSCSI20

  • You will see the Target listed as available for connection

iSCSI21

  • Click Done
  • Now open Disk Management to make sure that the disk is presented correctly

iSCSI22

  • Right click on this and select Online
  • Right click again and select Initialise
  • Create a new volume. I used Q for the Quorum Disk

iSCSI23

  • Now go to the second Windows Failover Cluster Server and do exactly the same thing
  • Leave this disk online and initialised, but do not assign it a drive letter
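The initiator side can be scripted too. The sketch below assumes the iSCSI Target Server is reachable as dacviscsi001 (a placeholder) and that you want the connection to persist across reboots; run the formatting block only on the first node:

    # Make sure the iSCSI initiator service is running and starts automatically
    Set-Service -Name MSiSCSI -StartupType Automatic
    Start-Service -Name MSiSCSI

    # Discover the target portal and connect persistently
    New-IscsiTargetPortal -TargetPortalAddress dacviscsi001
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

    # First node only: bring the new disk online, initialise it and create the Q: volume
    Get-Disk | Where-Object { $_.BusType -eq "iSCSI" -and $_.PartitionStyle -eq "RAW" } |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -DriveLetter Q |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "Quorum" -Confirm:$false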