Tag Archive for Clustering

Installing Microsoft Failover Clustering on VMware vSphere 4.1 with RDMs

This is a quick guide to installing Microsoft Failover Clustering on VMware vSphere 4.1, using two VMs running across two hosts

Pre-Requisites

  • The Failover Clustering feature is available in the Enterprise and Datacenter editions of Windows Server 2008/R2; it is not included in the Standard edition
  • You also need a form of shared storage (Fibre Channel in this case)
  • To use the native disk support included in failover clustering, use basic disks, not dynamic disks, and format them as NTFS
  • Setup 1 Windows 2008 R2 Domain Controller Virtual Machine with Active Directory Services and a Domain
  • Setup 1 x Windows Server 2008 R2 Virtual Machine for Node 1 of the Windows Cluster with 2 NICs
  • Setup 1 x Windows Server 2008 R2 Virtual Machine for Node 2 of the Windows Cluster with 2 NICs

Instructions

  • Make sure all Virtual Machines are joined to the domain
  • Make sure all Virtual Machines are fully updated and patched with the latest software updates
  • You may need to adjust your Windows Firewall
  • Rename the first network adapter Public and the second adapter Private or MSCS Heartbeat
  • On the first network adapter, add the static IP address, Subnet Mask, Gateway and DNS
  • On the second network adapter, just add the IP Address and Subnet Mask
  • Go back to the adapter properties screen and untick the following checkboxes:
  • Clear Client for Microsoft Networks
  • Clear File and Printer Sharing
  • Clear QoS Packet Scheduler
  • Clear the Link-Layer Topology Discovery checkboxes


  • Click Properties on Internet Protocol Version 4 (TCP/IPv4)

  • Click the DNS tab and clear the Register this Connection’s Addresses in DNS


  • Select the WINS tab and clear the Enable LMHOSTS Lookup checkbox


  • After you have configured the IP addresses on every network adapter, verify the order in which they are accessed. Go to Network Connections, click Advanced > Advanced Settings and make sure that your LAN (Public) connection is first in the list; if not, use the up or down arrows to move it to the top. This is the network clients will use to connect to the services offered by the cluster.


Adding Storage

I am assuming that your storage admin has already pre-created the LUNs that we are going to assign.

  • Go to vCenter and click Edit Settings on the first Node/VM
  • Select Add > Hard Disk


  • Select Raw Device Mapping


  • You will see that there are 4 LUNs available. This is because I want to set up Microsoft Failover Clustering with SQL Failover clustering and I need 4 disks for the Quorum, SQL Data, SQL Logs and MSDTC


  • Select Store with the virtual machine. When an RDM is used, a small VMDK file is created on a VMFS Datastore as a pointer to the RDM; this pointer file will be used when you configure the 2nd node in the cluster


  • Select Compatibility Mode. Physical Compatibility Mode is required for Microsoft Failover Cluster across hosts


  • Review and Finish
  • Because you added the new disk as a new device on a new bus, you now have a new virtual SCSI controller which will default to the recommended type, LSI Logic SAS for Windows 2008 and Windows 2008 R2


  • In Edit Settings, you need to click on the newly created SCSI Controller and select Physical for SCSI Bus Sharing. This is required for failover clustering to detect the storage as usable


  • You now need to repeat this action for all RDMs you need to add
  • When you have finished adding all the RDMs you should see the following in Edit Settings for the VM


  • You should now have the following setup to work with


  • Next we need to add the RDMs to the second VM which is a slightly different procedure
  • Click Edit Settings on the 2nd VM and Add Hard Disk


  • Select Use an existing Virtual Disk


  • To select the RDM Pointer file browse to the datastore and folder for the VM where you created the pointer file on the first VM


  • Select the same SCSI Virtual Device Node you setup on the first VM for the first RDM


  • Review Settings and make sure everything is correct


  • Next you will need to set the SCSI Bus Sharing Mode to Physical and verify that the type is LSI Logic SAS


  • You now need to do the same for all the disks that have been added
  • Check everything looks correct and this is your storage setup

Configuring the storage on the VMs

  • Power on Node/VM1
  • Connect to Node1/VM1
  • Launch Server Manager and navigate to Disk Management under Storage
  • In Disk Manager you will see the new disks as being offline
  • Right click each disk and select Online and if necessary right click again and select Initialise Disk then select the MBR partition type
  • Create a simple volume on all 4 disks which should then look like the below


  • Next Power on and log into Node2/VM2
  • Open Disk Management and right click each disk and select Online. Once the Disks are online you will see the volume labels and status
  • If the disks have been assigned the next available drive letters then you will need to change the drive letters to match the letters you assigned on Node1/VM1
  • The disks will now look identical to Node1/VM1

Install Microsoft Failover Clustering

You will need to install Failover Clustering on both nodes as per below procedure

  • Open Server Manager > Add Features > Failover Clustering


  • Click Install


  • On the first Node1/VM1 click Start > Administrative Tools > Failover Cluster Manager
  • Click on Validate a Cluster


  • Validation will run a variety of tests against your virtual hardware including the storage and networking to verify if the hardware is configured correctly to support a failover cluster. To pass all tests, both nodes must be online and the hardware must be configured correctly


  • Select your 2 Nodes/VMs


  • Click Next and Run all Tests


  • Verify the server names and check the tests


  • Click Run and the tests will begin


  • Your configuration is now validated and you can check the reports for anything which is incorrect


  • Click Create the cluster now using the validated nodes


  • Type a name for your cluster
  • Type an IP Address for the Cluster IP


  • Check details are all correct and click Create


  • Finish and check everything is setup OK


  • If you want to install SQL Server clustering, we will need to install a MSDTC Service
  • Go to Services and Applications – right click and select “Configure a Service or Application”


  • Click Next and select DTC


  • Put in a name and IP Address for the DTC


  • Click Next and select the storage you created for the MSDTC


  • Click Next and Review the confirmation


  • Click Next and the MSDTC Service will be created


  • Finish and make sure everything was setup successfully


  • Congratulations, you have now set up your Windows Failover Cluster
  • Check that your Windows Cluster IP and your MSDTC IP are listed in DNS

To set up SQL Server Failover Clustering

http://www.electricmonk.org.uk/2012/11/13/sql-server-2008-clustering/

SQL Server 2008 Clustering

This post follows on from the previous post regarding the setup of Microsoft Windows Clusters which will be required before you can set up Microsoft SQL Clustering

Pre Requisites

  • You must have installed the Microsoft .NET Framework on both nodes in the cluster – on the Windows Server, you can go to Add Features and select Microsoft .NET 3.5 SP1
  • Create all necessary SQL Server Active Directory groups for the relevant SQL Server services (SQL Server Agent, SQL Server Database Engine, Analysis Services). Note that Reporting Services/Integration Services are not cluster aware, but you can install them for use on just this server
  • Make sure all patching and software updates are current
  • You must be running the Microsoft Enterprise/Datacenter edition
  • Please see the table below for an example of the number of NICs and different subnets required for a 2 Node Windows/SQL Cluster

Number of Nodes supported by SQL Server versions

Instructions for Node 1

  • On Node 1, connect the SQL Server 2005/2008 ISO or installer
  • Click Setup and choose New SQL Server Failover Cluster Installation

  • Select to Install Setup Support Rules

  • If you get a Network Binding error even though your bindings look correct with the LAN NIC at the top, try modifying the registry. It seems the system sometimes takes the virtual cluster adapter as the top binding, but this is not visible from the Network Connections window when you go into Advanced Settings
  • Drill down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Tcpip\Linkage, open the Bind value and move the LAN adapter's ID to the top
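The registry edit above amounts to reordering a multi-string list. As a hedged sketch (the GUIDs below are made up, and the helper name is mine), this is the reordering logic; on a live system you would read and rewrite the REG_MULTI_SZ Bind value with Python's winreg module, or simply use regedit as described:

```python
# Sketch only: the Bind value under
# HKLM\SYSTEM\CurrentControlSet\services\Tcpip\Linkage is a REG_MULTI_SZ
# list of device bindings. "Move the LAN ID to the top" means putting the
# LAN adapter's entry first. Hypothetical GUIDs for illustration.

def move_to_top(bindings: list[str], lan_id: str) -> list[str]:
    """Return the binding list with entries matching the LAN adapter first."""
    lan = [b for b in bindings if lan_id in b]
    rest = [b for b in bindings if lan_id not in b]
    return lan + rest

bind = [
    r"\Device\{iscsi-guid}",
    r"\Device\{lan-guid}",
    r"\Device\{heartbeat-guid}",
]
print(move_to_top(bind, "lan-guid"))  # LAN binding now first
```

On Windows, the read/write around this would use winreg.OpenKey and winreg.SetValueEx with the REG_MULTI_SZ type; back up the key before changing it.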

  • Setup Support Files
  • Select Product Key
  • On Feature, Select Database Engine Services, Replication Services and Analysis Services
  • Note that Reporting Services/Integration Services are not cluster aware but you can install it to be used with just this server
  • On Instance Configuration, you need to enter a SQL Server Network Name like SQLCLUSTER
  • Keep the default instance or choose a new instance
  • You can also change the Instance Root Directory if you wish
  • Click Next
  • On the Cluster Resource Group, you can keep the settings
  • In the Cluster Disk Selection, Select the disks you want to use for SQL DB and SQL Logs (Make sure both are ticked!!!)
  • Next the Cluster Network Configuration

  • Untick DHCP and provide a new IP Address and Subnet
  • On Cluster Security Policy, keep Use Service SIDs (service security IDs)

  • On Service Accounts, please fill in the AD accounts you previously created for SQL Server Agent and SQL Server DB Engine
  • Check the Collation is as you want it – usually Latin1_General_CI_AS
  • In Database Engine Configuration, select Mixed mode and add a password for sa and add the current user
  • Click the Data Directories Tab and configure these paths as appropriate

  • Enable Filestream if you want
  • On Error and Usage Configuration
  • Next
  • Next
  • Install

Instructions for Node 2

  • Choose Add Node to a SQL Server Failover Cluster

  • Next
  • Put in Product Key
  • Accept Licensing
  • Install Setup Support Files
  • Check Setup Support Rules
  • On the Cluster Node Configuration, check this is all correct

  • Enter password for SQL Server Engine and SQL Server Agent account
  • Click Next on Error Reporting
  • Click Next on Add Node Rules
  • Click Install
  • Complete and Close

Testing Failover

  • Log into the SQL Server and open SQL Management Studio. Test a query against your DB
  • Open Failover Cluster Manager
  • Go to Services and Applications
  • Click on SQL Server (Cluster Name)

  • Select Move this Application or Service to another node.
  • Once this has transferred, do the same query test on the second server and make sure everything works as expected.
  • If so then Failover is working correctly
  • Go to vCenter and create a DRS anti-affinity rule keeping these DB Servers running on separate hosts for the ultimate in failover 🙂

Note: If you find you want to clear the Event Logs post Installation and have a fresh start, then you will need to clear the logs from both servers then close Failover Cluster Manager and restart it.


Testing Microsoft Failover Clustering on VMware Workstation 8 or ESXi4/5 Standalone

VMware Workstation and vSphere ESXi (Free Version) are the ultimate flexible tools for testing out solutions such as Microsoft Failover Clustering. I wanted to test this out myself before implementing this on a live VMware environment so I have posted some instructions on how to set this up step by step.

Pre-Requisites

Note: This test environment is not something you should use in production; it is a way of being able to work and play with Windows Clustering

Note: The Failover Clustering feature is available in the Enterprise and Datacenter editions of Windows Server 2008/R2; it is not included in the Standard edition.

Note: You also need a form of shared storage (FC or iSCSI). There are very good free solutions from StarWind and FreeNAS, linked below, which you can download and use for testing

Note: To use the native disk support included in failover clustering, use basic disks, not dynamic disks, and format them as NTFS

  • VMware Workstation 8 (if you are a VCP 4 or 5, you will have a free VMware Workstation license)
  • Setup 1 x Windows 2008 R2 Domain Controller Virtual Machine with Active Directory Services and a Domain
  • Setup 1 x Windows Server 2008 R2 Virtual Machine for Node 1 of the Windows Cluster with 2 NICs
  • Setup 1 x Windows Server 2008 R2 Virtual Machine for Node 2 of the Windows Cluster with 2 NICs
  • 1 x FreeNAS Virtual Machine (a free storage virtual machine in ISO format). We will not be using this in this demo, but it is also a very good free solution for creating shared storage for testing
  • http://www.freenas.org/
  • 1 x StarWind iSCSI SAN Free Edition (requires a corporate email registration). This is what we will be using in this demo (version 6.0.4837)
  • http://www.starwindsoftware.com/starwind-free

Instructions

  • Make sure all Virtual Machines are joined to the domain
  • Make sure all Virtual Machines are fully updated and patched with the latest software updates
  • Rename the first network adapter Public and the second adapter Private or MSCS Heartbeat
  • On the first network adapter, add the static IP address, Subnet Mask, Gateway and DNS
  • On the second network adapter, just add the IP Address and Subnet Mask
  • Go back to the adapter properties screen and untick the following checkboxes:
  • Clear Client for Microsoft Networks
  • Clear File and Printer Sharing
  • Clear QoS Packet Scheduler
  • Clear the Link-Layer Topology Discovery checkboxes


  • Click Properties on Internet Protocol Version 4 (TCP/IPv4)

  • Click the DNS tab and clear the Register this Connection’s Addresses in DNS


  • Select the WINS tab and clear the Enable LMHOSTS Lookup checkbox


  • After you have configured the IP addresses on every network adapter, verify the order in which they are accessed. Go to Network Connections, click Advanced > Advanced Settings and make sure that your LAN (Public) connection is first in the list; if not, use the up or down arrows to move it to the top. This is the network clients will use to connect to the services offered by the cluster.


  • Make sure you note down all IP Addresses as you go along. This is always handy
  • Disable the Domain Firewall on both Windows Servers
  • At this point, you can choose whether to use Freenas or Starwind. I will be continuing with Starwind but you can follow the Freenas instructions as per below link if you are more familiar with this
  • http://www.sysprobs.com/nas-vmware-workstation-iscsi-target
  • Install the StarWind software on your Domain Controller
  • Highlight the StarWind server and select Add Host, which will be the DC
  • Click General and Connect
  • Enter the username root; the password is starwind
  • Go to Registration > Load License, which you should have saved from your download
  • Select Devices in the left-hand pane, right click and add a new device to the target. The wizard opens as below; select Virtual Hard Disk

  • Click Next and Select Image File Device

  • Click Next and Create new Virtual Disk

  • Select the radio button at the end of the New Virtual Disk Location

  • The below window will open

  • Create a new folder called StarwindStorage

  • Type in the first file name, quorum.img

  • Edit the size to what you want

  • Next

  • Next

  • Next, type an alias name > Next

  • Next

  • Finish

  • Do the exact procedure above for SQLData
  • Do the exact procedure above for SQLLogs
  • Do the exact procedure above for MSDTC
  • You need to add MSDTC to every Windows Cluster you build. It ensures operations that enlist resources, such as COM+, can work in a cluster. It is recommended that you configure MSDTC on a different disk from everything else
  • The Quorum Database contains all the configuration information for the cluster
  • Go on to your first Windows Server
  • Click Start > Administration Tools > iSCSI Initiator. If you get the message below, just click Yes

  • Click the Discovery Tab > Add Portal
  • Add the Domain Controller as a Target Portal
  • Click the Targets Tab and you will see the 4 disks there
  • Log in to each target, ticking Automatically Restore this Connection
  • Go to Computer Management > Click Disk Management
  • Make all 4 disks online and initialized
  • Right click on each select create Simple Volume
  • Go to the second Windows Server
  • Click Start > Administration Tools > iSCSI Initiator
  • Click the Discovery Tab > Add Portal
  • Add the Domain Controller as a Target Portal
  • Click the Targets Tab and you will see the 4 disks there
  • Log in to each target, ticking Automatically Restore this Connection
  • Go to Computer Management > Click Disk Management
  • Don’t bring the disks online, don’t do anything else to the disks on the second server
  • Go back to the first Windows Server
  • Select Server Manager > Add Features > Failover Clustering
  • Go back to the second Windows Server
  • Select Server Manager > Add Features > Failover Clustering

  • Once installed on the second server, go back to the first Windows Server
  • To open Failover Clustering, click on Start > Administrative Tools > Failover Cluster Manager

  • Click on Validate a Configuration under Management
  • When you click on Validate a Configuration, you will need to browse and add the Cluster nodes, these are the 2 Windows servers that will be part of the cluster, then click Next
  • Select Run all tests and click Next

  • Click Next
  • Review the validation report, as your configuration might have a few issues that need to be addressed before setting up your cluster

  • Your configuration is now validated and you are ready to set up your cluster.
  • Click on the second option, Create a Cluster, the wizard will launch, read it and then click Next

  • You need to add the names of the servers you want to have in the cluster

  • After the servers are selected, you need to type a Cluster name and IP for your Cluster
  • Put this cluster name and IP in your DNS server

  •  Next
  • Next
  • Finish
  • Open Failover Cluster Manager and you will see your nodes and settings inside the MMC. Here you can configure your cluster, add new nodes, remove nodes, add more disk storage and carry out any other administration
  • If you want to install SQL Server clustering, we will need to install a MSDTC Service
  • Go to Services and Applications – right click and select “Configure a Service or Application”

  • Select the DTC and click next
  • On the Client Access Point page, enter a Name and an IP address to be used by the DTC, and then click Next.
  • Put the DTC Name and IP Address in your DNS Server

  • If you find that it has taken the wrong disk for your Quorum Disk, you will need to do the following
  • Right click on the cluster and select More Actions
  • Configure Cluster Quorum Settings
  • Click Next
  • On the next Page – Select Quorum Configuration
  • Keep Node and Disk Majority

  • On Configure the Storage Witness, select the drive that should have been the Quorum drive
  • Now you should be completely set up for Windows Clustering. Have a look through all the settings to familiarise yourself with everything.

Next Post

My next post will contain instructions on how to set up SQL Server clustering. You should have this environment set up first before following on with installing SQL Server.

YouTube Videos

These videos are extremely useful as guidance for this process

http://www.youtube.com/watch?v=7onR2BjTVr8&feature=relmfu

http://www.youtube.com/watch?v=iJy-OBHtMZE&feature=relmfu

http://www.youtube.com/watch?v=noJp_Npt7UM&feature=relmfu

http://www.youtube.com/watch?v=a27bp_Hvz7U&feature=relmfu

http://www.youtube.com/watch?v=B2u2l-3jO7M&feature=relmfu

http://www.youtube.com/watch?v=TPtcdbbnGFA&feature=relmfu

http://www.youtube.com/watch?v=GNihwqv8SwE&feature=relmfu

http://www.youtube.com/watch?v=0i4YGr0QxKg&feature=relmfu

http://www.youtube.com/watch?v=2xsKvSTaVgA&feature=relmfu

http://www.youtube.com/watch?v=Erx1esoTNfc&feature=relmfu

VMware vSphere support for Microsoft clustering solutions on VMware

Links

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1037959

https://www.vmware.com/pdf/vsphere4/r41/vsp_41_mscs.pdf

Failover Clusters in Windows Server 2008 – Quorums

What is a cluster?

A failover cluster is a group of independent computers that work together to increase the availability of applications and services. The clustered servers (called nodes) are connected by physical cables and by software. If one of the cluster nodes fails, another node begins to provide service (a process known as failover). Users experience a minimum of disruptions in service.

Are there any special considerations?

Microsoft supports a failover cluster solution only if all the hardware components are marked as “Certified for Windows Server 2008 R2.” In addition, the complete configuration (servers, network, and storage) must pass all tests in the Validate a Configuration wizard, which is included in the Failover Cluster Manager snap-in.

Note that this policy differs from the support policy for server clusters in Windows Server 2003, which required the entire cluster solution to be listed in the Windows Server Catalog under Cluster Solutions.

Cluster validation is intended to catch hardware or configuration problems before the cluster goes into production. Cluster validation helps to ensure that the solution you are about to deploy is truly dependable. Cluster validation can also be performed on configured failover clusters as a diagnostic tool.

Step by Step Guide

  • Run the cluster validation wizard for a failover cluster
  • If the cluster does not yet exist, choose the servers that you want to include in the cluster, and make sure you have installed the failover cluster feature on those servers. To install the feature, on a server running Windows Server 2008 or Windows Server 2008 R2, click Start, click Administrative Tools, click Server Manager, and under Features Summary, click Add Features. Use the Add Features wizard to add the Failover Clustering feature.
  • If the cluster already exists, make sure that you know the name of the cluster or a node in the cluster
  • For a planned cluster with all hardware connected: Run all tests.
  • For a planned cluster with parts of the hardware connected: Run System Configuration tests, Inventory tests, and tests that apply to the hardware that is connected (that is, Network tests if the network is connected or Storage tests if the storage is connected).
  • For a cluster to which you plan to add a server: Run all tests. Before you run them, be sure to connect the networks and storage for all servers that you plan to have in the cluster.
  • For troubleshooting an existing cluster: If you are troubleshooting an existing cluster, you might run all tests, although you could run only the tests that relate to the apparent issue.
  • In the failover cluster snap-in, in the console tree, make sure Failover Cluster Management is selected and then, under Management, click Validate a Configuration.

  • Follow the instructions in the wizard to specify the servers and the tests, and run the tests.
  • Note that when you run the cluster validation wizard on unclustered servers, you must enter the names of all the servers you want to test, not just one.
  • The Summary page appears after the tests run.
  • While still on the Summary page, click View Report to view the test results. To view the results of the tests after you close the wizard, see SystemRoot\Cluster\Reports\Validation Report date and time.html, where SystemRoot is the folder in which the operating system is installed (for example, C:\Windows)
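Since each validation run drops a date-stamped HTML report into that Reports folder, finding the most recent one is just a newest-file lookup. A small illustrative helper (the function name is mine, and the folder is passed in so it works anywhere):

```python
from pathlib import Path

def latest_validation_report(reports_dir: str):
    """Return the newest 'Validation Report*.html' in the given folder,
    or None if no reports exist. On a cluster node the folder would
    normally be C:\\Windows\\Cluster\\Reports."""
    reports = sorted(
        Path(reports_dir).glob("Validation Report*.html"),
        key=lambda p: p.stat().st_mtime,
    )
    return reports[-1] if reports else None
```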

Error Chart

Configuring the Quorum in a Failover Cluster

In simple terms, the quorum for a cluster is the number of elements that must be online for that cluster to continue running. In effect, each element can cast one “vote” to determine whether the cluster continues running. The voting elements are nodes or, in some cases, a disk witness or file share witness. Each voting element (with the exception of a file share witness) contains a copy of the cluster configuration, and the Cluster service works to keep all copies synchronized at all times.

Note that the full function of a cluster depends not just on quorum, but on the capacity of each node to support the services and applications that fail over to that node. For example, a cluster that has five nodes could still have quorum after two nodes fail, but each remaining cluster node would continue serving clients only if it had enough capacity to support the services and applications that failed over to it.

Why Quorum is necessary

When network problems occur, they can interfere with communication between cluster nodes. A small set of nodes might be able to communicate together across a functioning part of a network, but might not be able to communicate with a different set of nodes in another part of the network. This can cause serious issues. In this “split” situation, at least one of the sets of nodes must stop running as a cluster.

To prevent the issues that are caused by a split in the cluster, the cluster software requires that any set of nodes running as a cluster must use a voting algorithm to determine whether, at a given time, that set has quorum. Because a given cluster has a specific set of nodes and a specific quorum configuration, the cluster will know how many “votes” constitutes a majority (that is, a quorum). If the number drops below the majority, the cluster stops running. Nodes will still listen for the presence of other nodes, in case another node appears again on the network, but the nodes will not begin to function as a cluster until the quorum exists again.

For example, in a five node cluster that is using a node majority, consider what happens if nodes 1, 2, and 3 can communicate with each other but not with nodes 4 and 5. Nodes 1, 2, and 3 constitute a majority, and they continue running as a cluster. Nodes 4 and 5 are a minority and stop running as a cluster, which prevents the problems of a “split” situation. If node 3 loses communication with other nodes, all nodes stop running as a cluster. However, all functioning nodes will continue to listen for communication, so that when the network begins working again, the cluster can form and begin to run.
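The voting arithmetic in that five-node example can be sketched in a few lines of Python (an illustrative sketch, not cluster code):

```python
# Node Majority: a partition keeps running as a cluster only if it holds
# a strict majority of the total votes (more than half).

def has_quorum(partition_size: int, total_nodes: int) -> bool:
    """True if this set of communicating nodes constitutes a majority."""
    return partition_size > total_nodes // 2

# Five-node cluster split into {1,2,3} and {4,5}, as in the example above:
assert has_quorum(3, 5) is True    # nodes 1-3 continue as the cluster
assert has_quorum(2, 5) is False   # nodes 4-5 stop running as a cluster

# If node 3 then drops out, no partition holds 3 of the 5 votes,
# so every node stops running as a cluster until quorum exists again.
assert has_quorum(2, 5) is False
```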

Overview of the Quorum Modes

There have been significant improvements to the quorum model in Windows Server 2008. In Windows Server 2003, almost all server clusters used a disk in cluster storage (the “quorum resource”) as the quorum. If a node could communicate with the specified disk, the node could function as a part of a cluster, and otherwise it could not. This made the quorum resource a potential single point of failure. In Windows Server 2008, a majority of ‘votes’ is what determines whether a cluster achieves quorum. Nodes can vote, and where appropriate, either a disk in cluster storage (called a “disk witness”) or a file share (called a “file share witness”) can vote. There is also a quorum mode called No Majority: Disk Only which functions like the disk-based quorum in Windows Server 2003. Aside from that mode, there is no single point of failure with the quorum modes, since what matters is the number of votes, not whether a particular element is available to vote.

This new quorum model is flexible and you can choose the mode best suited to your cluster.

Important: In most situations, it is best to use the quorum mode selected by the cluster software. If you run the quorum configuration wizard, the quorum mode that the wizard lists as “recommended” is the quorum mode chosen by the cluster software. We only recommend changing the quorum configuration if you have determined that the change is appropriate for your cluster.

There are four quorum modes:

  • Node Majority: Each node that is available and in communication can vote. The cluster functions only with a majority of the votes, that is, more than half.
  • Node and Disk Majority: Each node plus a designated disk in the cluster storage (the “disk witness”) can vote, whenever they are available and in communication. The cluster functions only with a majority of the votes, that is, more than half.
  • Node and File Share Majority: Each node plus a designated file share created by the administrator (the “file share witness”) can vote, whenever they are available and in communication. The cluster functions only with a majority of the votes, that is, more than half.
  • No Majority: Disk Only: The cluster has quorum if one node is available and in communication with a specific disk in the cluster storage.
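The vote counting implied by the four modes can be summarised in a short Python sketch (the mode strings are my own shorthand):

```python
def total_votes(nodes: int, mode: str) -> int:
    """Total voting elements under each Windows Server 2008 quorum mode."""
    if mode == "node_majority":
        return nodes
    if mode in ("node_and_disk_majority", "node_and_file_share_majority"):
        return nodes + 1          # the nodes plus one witness vote
    if mode == "disk_only":
        return 1                  # only the quorum disk matters
    raise ValueError(f"unknown mode: {mode}")

def votes_needed(nodes: int, mode: str) -> int:
    """Majority means more than half of the total votes."""
    return total_votes(nodes, mode) // 2 + 1
```

For example, a 2-node cluster with a disk witness has 3 votes and needs 2 of them, so it can survive the loss of any one element; the same 2-node cluster under Node Majority needs 2 of 2 and cannot survive losing either node.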

Choosing the Quorum Mode for a particular cluster

  • Odd number of nodes – Node Majority
  • Even number of nodes (but not a multi-site cluster) – Node and Disk Majority
  • Even number of nodes, multi-site cluster – Node and File Share Majority
  • Even number of nodes, no shared storage – Node and File Share Majority
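Those recommendations can be captured in a small helper, assuming the function name and flag names are my own and this is 2008-era guidance only:

```python
def recommend_quorum_mode(nodes: int, multi_site: bool = False,
                          shared_storage: bool = True) -> str:
    """Quorum mode recommendation, following the table above."""
    if nodes % 2 == 1:
        return "Node Majority"
    if multi_site or not shared_storage:
        return "Node and File Share Majority"
    return "Node and Disk Majority"
```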

Node Majority

The following diagram shows Node Majority used (as recommended) for a cluster with an odd number of nodes. In this mode, each node gets one vote. In certain circumstances, you might want to install a hotfix that lets you select which nodes will have votes. This can be useful with certain multi-site clusters, for example, where you want one site to have more votes than other sites in a disaster recovery situation

Node and Disk Majority

The following diagram shows Node and Disk Majority used (as recommended) for a cluster with an even number of nodes. Each node can vote, as can the disk witness.

  • Use a small Logical Unit Number (LUN) that is at least 512 MB in size.
  • Choose a basic disk with a single volume.
  • Make sure that the LUN is dedicated to the disk witness. It must not contain any other user or application data.
  • Choose whether to assign a drive letter to the LUN based on the needs of your cluster. The LUN does not have to have a drive letter (to conserve drive letters for applications).
  • As with other LUNs that are to be used by the cluster, you must add the LUN to the set of disks that the cluster can use. For more information, see http://go.microsoft.com/fwlink/?LinkId=114539.
  • Make sure that the LUN has been verified with the Validate a Configuration Wizard.
  • We recommend that you configure the LUN with hardware RAID for fault tolerance.
  • In most situations, do not back up the disk witness or the data on it. Backing up the disk witness can add to the input/output (I/O) activity on the disk and decrease its performance, which could potentially cause it to fail.
  • We recommend that you avoid all antivirus scanning on the disk witness.
  • Format the LUN with the NTFS file system.

If there is a disk witness configured, but bringing that disk online will not achieve quorum, then it remains offline. If bringing that disk online will achieve quorum, then it is brought online by the cluster software
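That bring-online rule can be expressed as a tiny predicate (an illustrative sketch, not cluster software):

```python
# Node and Disk Majority: the cluster brings the disk witness online only
# when its extra vote would actually achieve quorum.

def bring_witness_online(live_nodes: int, total_nodes: int) -> bool:
    """True if live nodes plus the witness vote exceed half of all votes."""
    total_votes = total_nodes + 1          # every node plus the witness
    return live_nodes + 1 > total_votes // 2

# In a 4-node cluster (5 votes, majority = 3): with 2 nodes alive the
# witness vote reaches 3 and is brought online; with 1 node it stays offline.
assert bring_witness_online(2, 4) is True
assert bring_witness_online(1, 4) is False
```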

Node and File Share Majority

The following diagram shows Node and File Share Majority used (as recommended) for a cluster with an even number of nodes and a situation where having a file share witness works better than having a disk witness. Each node can vote, as can the file share witness.

  • Use a Server Message Block (SMB) share on a Windows Server 2003 or Windows Server 2008 file server.
  • Make sure that the file share has a minimum of 5 MB of free space.
  • Make sure that the file share is dedicated to the cluster and is not used in other ways (including storage of user or application data).
  • Do not place the share on a node that is a member of this cluster or will become a member of this cluster in the future.
  • You can place the share on a file server that has multiple file shares servicing different purposes. This may include multiple file share witnesses, each one a dedicated share. You can even place the share on a clustered file server (in a different cluster), which would typically be a clustered file server containing multiple file shares servicing different purposes.
  • For a multi-site cluster, you can co-locate the external file share at one of the sites where a node or nodes are located. However, we recommend that you configure the external share in a separate third site.
  • Place the file share on a server that is a member of a domain, in the same forest as the cluster nodes.
  • For the folder that the file share uses, make sure that the administrator has Full Control share and NTFS permissions.
  • Do not use a file share that is part of a Distributed File System (DFS) Namespace

No Majority – Disk only

The following illustration shows how a cluster that uses the disk as the only determiner of quorum can run even if only one node is available and in communication with the quorum disk. It also shows how the cluster cannot run if the quorum disk is not available (single point of failure). For this cluster, which has an odd number of nodes, Node Majority is the recommended quorum mode.

The same quorum disk requirements and witness behaviour listed under Node and Disk Majority above apply here.

Viewing the Quorum Configuration

  • To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Management (in Windows Server 2008) or Failover Cluster Manager (in Windows Server 2008 R2). If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
  • In the console tree, if the cluster that you want to view is not displayed, right-click Failover Cluster Management or Failover Cluster Manager, click Manage a Cluster, and then select the cluster you want to view
  • In the center pane, find Quorum Configuration, and view the description
  • In the following example, the quorum mode is Node and Disk Majority and the disk witness is Cluster Disk 2.