Network Partition on vSAN Cluster

The problem

We had an interesting problem with a six-host vSAN cluster where one host appeared to be in a network partition according to Skyline Health. I thought it would be useful to document our troubleshooting steps, as they may come in useful for someone else. Our problem wasn't one of the usual network misconfigurations, but in order to reach that conclusion we needed to work through the usual tests.

We had removed this host from the vSAN cluster and the HA cluster, removed it from the inventory and rebuilt it, then tried adding it back into the vSAN cluster with the other five hosts. The host joined under the current vSAN Sub-Cluster UUID but then partitioned itself from the other five hosts.

The usual restart of hostd, vpxa, clomd and vsanmgmtd did not help.
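For reference, these services can be restarted from an SSH session on the problem host (a standard ESXi step, shown here for completeness):

/etc/init.d/hostd restart
/etc/init.d/vpxa restart
/etc/init.d/clomd restart
/etc/init.d/vsanmgmtd restart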

Test 1 – Check each host’s vSAN details

Running the command below will tell you a lot of information about the problem host, in our case techlabesx01

esxcli vsan cluster get

   Enabled: True
   Current Local Time: 2021-07-16T14:49:44Z
   Local Node UUID: 70ed98a3-56b4-4g90-607c4cc0e809
   Local Node Type: NORMAL
   Local Node State: MASTER
   Local Node Health State: HEALTHY
   Sub-Cluster Master UUID: 70ed45ac-bcb1-c165-98404b745103
   Sub-Cluster Backup UUID: 70av34ab-cd31-dc45-8734b32d104
   Sub-Cluster UUID: 527645bc-789d-745e-577a68ba6d27
   Sub-Cluster Member Entry Revision: 1
   Sub-Cluster Member Count: 1
   Sub-Cluster Member UUIDs: 98ab45c4-9640-8ed0-1034-503b4d875604
   Sub-Cluster Member Hostnames: techlabesx01
   Unicast Mode Enabled: true
   Maintenance Mode State: OFF
   Config Generation: b3289723-3bed-4df5-b34f-a5ed43b4542b43 2021-07-13T12:57:06.153

Straightaway we can see the host is partitioned: the Sub-Cluster Member UUIDs field should contain the UUIDs of the other five hosts, and Sub-Cluster Member Hostnames should list techlabesx02, techlabesx03, techlabesx04, techlabesx05 and techlabesx06. The host has also made itself a MASTER, whereas the other partition already has a master, and there cannot be two masters in a cluster.

Master role:

  • A cluster should have only one host with the Master role. More than a single host with the Master role indicates a problem
  • The host with the Master role receives all CMMDS updates from all hosts in the cluster

Backup role:

  • The host with the Backup role assumes the Master role if the current Master fails
  • Normally, only one host has the Backup role

Agent role:

  • Hosts with the Agent role are members of the cluster
  • Hosts with the Agent role can assume the Backup role or the Master role as circumstances change
  • In clusters of four or more hosts, more than one host has the Agent role

Test 2 – Can each host ping the others?

A lot of problems can be caused by misconfiguration of the vSAN VMkernel port and/or other VMkernel ports. This was not our issue, but it is worth double-checking everything. The IP addresses on the VMkernel ports used for a given service must be in the same subnet across hosts.

Get the networking details from each host using the command below. This will give you the full VMkernel networking details, including the IP address, subnet mask, gateway and broadcast address.

esxcli network ip interface ipv4 address list
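It is also worth confirming which VMkernel interface is tagged for vSAN traffic on each host:

esxcli vsan network list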

It may be necessary to test VMkernel network connectivity between the ESXi hosts in your environment. From the problem host, we tried pinging each of the other hosts' management networks.

vmkping -I vmkX x.x.x.x

Where x.x.x.x is the hostname or IP address of the server you want to ping, and vmkX is the VMkernel interface to ping out of.
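As a worked example, assuming vmk1 is the vSAN-tagged VMkernel port and 192.168.1.11 is another host's vSAN IP address (both values are illustrative), you would run a standard ping plus a jumbo-frame ping with the don't-fragment bit set. The second test is only relevant if your vSAN network is configured with an MTU of 9000:

vmkping -I vmk1 192.168.1.11
vmkping -I vmk1 -d -s 8972 192.168.1.11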

These tests were all successful.

Test 3 – Check the unicast agent list and check the NodeUUIDs on each host

To check which node UUIDs each host holds for its peers, you can run

esxcli vsan cluster unicastagent list
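Run this on each of the other five hosts and compare their entry for the problem host against the Local Node UUID from Test 1. To narrow the output down, you can filter on the problem host's vSAN IP address (192.168.1.10 in our case):

esxcli vsan cluster unicastagent list | grep 192.168.1.10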

Conclusion

We think what happened was that the non-partitioned hosts held a reference to an old UUID for techlabesx01 because we had rebuilt the host. The host was removed from the vSAN cluster and the HA cluster and completely rebuilt; however, when we originally removed it, the other hosts did not appear to update their unicast agent lists once it had gone. So when we tried to add it back in, the other hosts did not recognise it.

The Fix

What we had to do was set each host to ignore cluster member list updates from vCenter

esxcfg-advcfg -s 1 /VSAN/IgnoreClusterMemberListUpdates
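You can confirm the setting has taken effect on each host by reading it back:

esxcfg-advcfg -g /VSAN/IgnoreClusterMemberListUpdates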

Then remove the old unicast agent entry for techlabesx01 from each of the other hosts

esxcli vsan cluster unicastagent remove -a 192.168.1.10

Then add the new unicast agent details for techlabesx01 to each of the other hosts. Test 1 showed where to get the host's node UUID; -a takes the host's vSAN VMkernel IP address, -U true marks the node as supporting unicast, and -p 12321 is the default vSAN cluster port

esxcli vsan cluster unicastagent add -t node -u 70ed98a3-56b4-4g90-607c4cc0e809 -U true -a 192.168.1.10 -p 12321

This resolved the issue.
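One follow-up worth noting: once the host has rejoined and the cluster is healthy, the usual advice for this procedure is to set the advanced option back to 0 on each host, so that cluster member list updates from vCenter are accepted again, and then re-run esxcli vsan cluster get to confirm the Sub-Cluster Member Count is back to 6:

esxcfg-advcfg -s 0 /VSAN/IgnoreClusterMemberListUpdates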