VCAP6-DCV Deploy Objective 4.1

Objective 4.1 of the VCAP6-Deploy exam covers the following topics:

  • Configure a HA Cluster to Meet Resource and Availability Requirements
  • Configure Custom Isolation Response Settings
  • Configure VM Component Protection (VMCP)
  • Configure HA Redundant Settings:
    • Management Network
    • Datastore Heartbeats
    • Network Partitions
  • Configure HA related Alarms and Analyze a HA Cluster
  • Configure Fault Tolerance for Single/Multi-vCPU Virtual Machines

We will have a look at these topics one by one.

                             Configure a HA Cluster to Meet Resource and Availability Requirements

vSphere HA provides high availability for virtual machines by pooling the virtual machines and the hosts they reside on into a cluster. Hosts in the cluster are monitored, and in the event of a failure the virtual machines on a failed host are restarted on alternate hosts. When HA is configured on a cluster, an election process takes place that determines which host becomes the master; the remaining hosts become slaves.

The host elected as master communicates with vCenter and monitors the state of all protected VMs and of the other hosts in the cluster. When a host is added to the cluster, an agent is uploaded to it and configured to communicate with the other agents in the cluster. Every host in the cluster is either the master host or a slave host.

The master host is responsible for detecting the failure of a slave host and restarting the VMs (that were running on the failed host) on the remaining hosts in the cluster. There are 3 types of host failures that can occur in an HA cluster:

  • Host Failure: An ESXi host stops responding. The host does not respond to ICMP pings, and its HA agent stops responding and stops issuing heartbeats.
  • Host Isolation: An ESXi host becomes network isolated. The host is still running but no longer observes traffic from the HA agents on the management network and cannot ping the cluster isolation address.
  • Host Partition: An ESXi host loses network connectivity with the master host.

To read more about vSphere HA, please read the HA Deep-Dive article by Duncan Epping.

How to enable HA?

HA can be enabled via 2 methods:

a: While creating a new cluster: You can enable HA on a cluster at the time of cluster creation. Check the Turn On box for vSphere HA and configure host monitoring, admission control, etc.

ha-1.PNG

b: After creating the cluster and adding ESXi hosts to it: You can skip turning on vSphere HA when creating a new cluster and enable it later, once you have added hosts and configured networking/storage on all ESXi hosts.

To do so, log in to the vSphere Web Client, select the Cluster > Manage > Settings > vSphere HA and click on the Edit button.

ha-2.PNG

Check the “Turn on vSphere HA” box and configure the rest of the options that are part of HA. I will discuss these options in a bit more detail later in this post.

ha-3.PNG
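If you prefer to script this step, below is a minimal pyVmomi sketch that turns on vSphere HA for an existing cluster. Treat it as a sketch, not gospel: the vCenter address, credentials and cluster name (Cluster-A) are placeholders for your own environment.

  import ssl
  from pyVim.connect import SmartConnect
  from pyVmomi import vim

  # Lab only: skip certificate validation; use proper certs in production
  ctx = ssl._create_unverified_context()
  si = SmartConnect(host="vc.lab.local", user="administrator@vsphere.local",
                    pwd="VMware1!", sslContext=ctx)
  content = si.RetrieveContent()

  # Find the cluster by name
  view = content.viewManager.CreateContainerView(
      content.rootFolder, [vim.ClusterComputeResource], True)
  cluster = next(c for c in view.view if c.name == "Cluster-A")
  view.Destroy()

  # Turn on vSphere HA with host monitoring and admission control
  das = vim.cluster.DasConfigInfo(enabled=True,
                                  hostMonitoring="enabled",
                                  admissionControlEnabled=True)
  cluster.ReconfigureComputeResource_Task(
      vim.cluster.ConfigSpecEx(dasConfig=das), modify=True)

The later sketches in this post reuse the si, content and cluster objects from this one.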

                                                  Configure Custom Isolation Response Settings

The host isolation response setting determines what happens when a host in a vSphere HA cluster loses its management network connections but continues to run. Host network isolation occurs when a host is still running but can no longer communicate with other hosts in the cluster and cannot ping the configured isolation addresses.

When the HA agent on a host loses contact with the other hosts, it will ping the isolation addresses. If the pings fail, the host will declare itself isolated. Host isolation responses require that Host Monitoring Status is enabled.

HA Response Time 

In VMware vSphere 5.x and 6.x, if the agent is a master, isolation is declared in 5 seconds; if it is a slave, isolation is declared in 30 seconds. The timelines for declaring master/slave isolation are:

Isolation of a slave

  • T0 – Isolation of the host (slave)
  • T10s – Slave enters “election state”
  • T25s – Slave elects itself as master
  • T25s – Slave pings “isolation addresses”
  • T30s – Slave declares itself isolated and “triggers” isolation response

Isolation of a master

  • T0 – Isolation of the host (master)
  • T0 – Master pings “isolation addresses”
  • T5s – Master declares itself isolated and “triggers” isolation response

HA Response Types

When a host has been declared isolated, HA can take one of the following actions, depending on which setting is configured on the cluster:

  • Leave powered on: When a network isolation occurs, the state of the virtual machines remains unchanged and the virtual machines on the isolated host continue to run, even though the host can no longer communicate with other hosts in the cluster. This setting also reduces the chance of a false positive. By default, an isolated host leaves its virtual machines powered on.
  • Power off: When a network isolation occurs, all virtual machines are powered off and restarted on another ESXi host. It is a hard stop. A power off response is initiated on the fourteenth second and a restart is initiated on the fifteenth second.
  • Shut down: When a network isolation occurs, all virtual machines running on that host are shut down via VMware Tools and restarted on another ESXi host. If this is not successful within 5 minutes, a power off response type is executed.

To modify this setting, right-click your cluster, then click Edit Settings > vSphere HA > Failure conditions and VM response > Response for Host isolation.

ha-4.PNG
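The same setting can be pushed via the API. Here is a hedged sketch (reusing cluster from the first sketch); per the vSphere API the valid values are "none" (leave powered on), "powerOff" and "shutdown":

  # Set the cluster-default host isolation response to "Shut down"
  vm_settings = vim.cluster.DasVmSettings(isolationResponse="shutdown")
  das = vim.cluster.DasConfigInfo(defaultVmSettings=vm_settings)
  cluster.ReconfigureComputeResource_Task(
      vim.cluster.ConfigSpecEx(dasConfig=das), modify=True)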

                                               Configure VM Component Protection (VMCP)

We all know that shared storage is a must for almost all vSphere features, and there is hardly a worse situation than losing your datastore. I have seen cases where a datastore was abruptly disconnected from hosts and, when it came back, the VMs stored on it went into read-only mode; a few even crashed in a way that could not be repaired.

In vSphere 6.0, VMware introduced the concept of VM Component Protection (VMCP), which brought many enhancements for protecting VMs against storage failures. There are two types of failures VMCP will respond to:

Permanent Device Loss (PDL): A PDL event occurs when the storage array issues a SCSI sense code indicating that the device is unavailable. There can be many causes of a PDL, including:

  • Array misconfiguration.
  • Removing ESXi host from the array’s storage group.
  • A LUN failure on the storage array.
  • Incorrect zoning configuration that can cause the LUN to be unavailable.

In the PDL state, the storage array can still communicate with the vSphere host and will issue SCSI sense codes to indicate the status of the device. A full list of such sense codes can be read here.

When a PDL state is detected, the host stops sending I/O requests to the storage: it considers the device permanently unavailable, so there is no reason to continue issuing I/O to it.

All Paths Down (APD): This situation is worse than a PDL: if the ESXi host cannot access the storage device and the storage array has not issued a PDL SCSI sense code, the device is considered to be in an APD state.

In this situation the ESXi host doesn’t know whether the device loss is permanent (the LUN has failed, the zoning config was changed, etc.) or temporary (a momentary network outage, or a configuration error that is quickly reverted).

In an APD condition, the host continues to retry I/O commands to the storage device until the period known as the APD Timeout is reached. Once the APD Timeout is reached, the host begins to fast-fail any non-virtual-machine I/O to the storage device. Due to the continued unsuccessful I/O, especially from processes like hostd, the ESXi host can eventually become unresponsive and unmanageable.
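The APD Timeout defaults to 140 seconds and is exposed per host as the advanced setting Misc.APDTimeout. A small sketch to read its current value (host is assumed to be a vim.HostSystem you have already looked up):

  # Read the host's APD timeout (Misc.APDTimeout, 140 seconds by default)
  opt_mgr = host.configManager.advancedOption
  for opt in opt_mgr.QueryOptions("Misc.APDTimeout"):
      print(opt.key, "=", opt.value)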

How to enable VM Component Protection (VMCP)?

To enable VMCP, log in to the vSphere Web Client, select the cluster in question, click Edit Settings and navigate to vSphere HA > Host Hardware Monitoring > Protect Against Storage Connectivity Loss.

ha-6.PNG

For PDL responses, the following actions can be taken:

  • Disabled: No action will be taken against the affected VMs.
  • Issue events: No action will be taken against the affected VMs; however, the administrator will be notified when a PDL event has occurred.
  • Power off and restart VMs: All affected VMs will be terminated on the host, and vSphere HA will attempt to restart them on hosts that still have connectivity to the storage device.

ha-7.PNG

Response for Datastore with All Paths Down (APD)

  • Disabled: No action is taken.
  • Issue events: No action is taken, but an alert is shown.
  • Power off and restart VMs (conservative): The slave nodes will communicate with the master in an attempt to find a healthy host to which the VMs can be failed over.
  • Power off and restart VMs (aggressive): The slave nodes will attempt to communicate with the HA master to find a suitable location to fail over VMs to. If communication with the master node isn’t possible, HA will attempt the failover anyway.

Delay for VM failover for APD

When an APD occurs, a timer starts which runs for 140 seconds; this period is called the APD Timeout. Once the APD Timeout has been reached, VMCP waits an additional 3 minutes before taking action against the affected VMs. So the total time taken by VMCP before acting on the affected VMs is 5 minutes 20 seconds (140 s + 180 s), and this period is called the VMCP Timeout (APD Timeout + Delay for VM failover).

vmcp-timeline.png

 

Response for APD recovery after APD timeout

This setting instructs vSphere HA to take a certain action if an APD event is cleared after the APD Timeout was reached but before the Delay for VM failover has expired.

  • Disabled: No action will be taken against the affected VMs.
  • Reset VMs: The VMs will be reset on the same host. (Hard reset)
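Putting it all together, here is a hedged sketch (again reusing cluster) that enables VMCP and sets cluster defaults for the four settings discussed above; the enum strings are the vSphere 6.x API values as I understand them:

  # Cluster-default VMCP settings: PDL response, APD response,
  # failover delay and APD-recovery response
  vmcp = vim.cluster.VmComponentProtectionSettings(
      vmStorageProtectionForPDL="restartAggressive",   # Power off and restart VMs
      vmStorageProtectionForAPD="restartConservative", # conservative APD response
      vmTerminateDelayForAPDSec=180,                   # Delay for VM failover: 3 minutes
      vmReactionOnAPDCleared="reset")                  # Reset VMs on APD recovery
  das = vim.cluster.DasConfigInfo(
      vmComponentProtecting="enabled",                 # Protect Against Storage Connectivity Loss
      defaultVmSettings=vim.cluster.DasVmSettings(
          vmComponentProtectionSettings=vmcp))
  cluster.ReconfigureComputeResource_Task(
      vim.cluster.ConfigSpecEx(dasConfig=das), modify=True)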

VMCP Recovery Workflow

VMCP-workflow.png

                                                              Configure HA Redundancy Settings

Management Network

As a best practice, it is advised to use two vmnics for the management network on the vSwitch, in either an Active/Active or Active/Standby configuration. If your management network is connected to only one NIC, then when you enable HA on the cluster you will observe the following warning on all ESXi hosts:

ha-9.png

This warning can be suppressed by specifying the advanced option das.ignoreRedundantNetWarning in the HA configuration and setting its value to true.
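A sketch of the same change through the API; note that HA advanced-option values are passed as strings, and (as far as I can tell) the option array you send replaces the existing one, so re-send any options already configured:

  # Suppress the single-NIC management network warning
  das = vim.cluster.DasConfigInfo(option=[
      vim.option.OptionValue(key="das.ignoreRedundantNetWarning", value="true")])
  cluster.ReconfigureComputeResource_Task(
      vim.cluster.ConfigSpecEx(dasConfig=das), modify=True)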

Datastore Heartbeat 

Datastore heartbeating is used by vSphere HA to determine whether a host has actually failed or has just become network partitioned. If the master host is not observing any management-network traffic from a slave host but that host is continuously updating the timestamp in the heartbeat region, the host is just network partitioned and has not actually failed.

A vSphere HA configuration needs at least 2 datastores, accessible by each host, to be designated as heartbeat datastores. If all the ESXi hosts in a cluster have only one datastore, you will see the following warning on the hosts: “The number of heartbeat datastores for host is 1, which is less than required: 2”.

If you wish, you can suppress this warning by specifying the advanced option das.ignoreInsufficientHbDatastore in the HA configuration and setting its value to true. Note that VMware does not advise doing so.

If you wish to change the number of heartbeat datastores from the default of 2, you can do so by specifying das.heartbeatDsPerHost in the HA advanced configuration and assigning a value in the range 2-5.
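Both heartbeat knobs can be combined in one hedged sketch; the datastore names ds-01 and ds-02 are placeholders for datastores in your own cluster:

  # Prefer two named datastores for heartbeating, falling back to any
  # feasible datastore, and raise the per-host count to 3
  wanted = [ds for ds in cluster.datastore if ds.name in ("ds-01", "ds-02")]
  das = vim.cluster.DasConfigInfo(
      heartbeatDatastore=wanted,
      hBDatastoreCandidatePolicy="allFeasibleDsWithUserPreference",
      option=[vim.option.OptionValue(key="das.heartbeatDsPerHost", value="3")])
  cluster.ReconfigureComputeResource_Task(
      vim.cluster.ConfigSpecEx(dasConfig=das), modify=True)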

vSphere HA creates a directory called .vSphere-HA at the root of each selected datastore, which is used both for datastore heartbeating and for storing the list of protected VMs.

ha-10.PNG

Note: A VSAN datastore can NOT be used for datastore heartbeating. If no other shared storage is available, datastore heartbeating cannot be used in the cluster.

                                        Configure HA related alarms and analyze a HA cluster

Alarms can be configured in vCenter Server for HA-related issues, and when those alarms are triggered, vCenter Server can send an email to the vSphere administrators so that issues can be rectified quickly.

The default vSphere HA alarms that are pre-configured in vCenter are:

  • Insufficient vSphere HA failover resources: Default alarm to alert when there are insufficient cluster resources for vSphere HA to guarantee failover
  • Cannot find vSphere HA master agent: Default alarm to alert when vCenter Server has been unable to connect to a vSphere HA master agent for a prolonged period 
  • vSphere HA host status: Default alarm to monitor health of a host as reported by vSphere HA
  • vSphere HA failover in progress: Default alarm to alert when vSphere HA is in the process of failing over virtual machines
  • vSphere HA virtual machine failover failed: Default alarm to alert when vSphere HA failed to failover a virtual machine
  • vSphere HA virtual machine monitoring action: Default alarm to alert when vSphere HA reset a virtual machine
  • vSphere HA virtual machine monitoring error:  Default alarm to alert when vSphere HA failed to reset a virtual machine

To configure these default alarms, log in to the vSphere Web Client, select the vCenter Server from the inventory, navigate to Monitor > Alarm Definitions and type vSphere HA in the search box.

ha-12.PNG

Select an alarm from the list and click the Edit button to configure it. Depending on the type of alarm you are configuring, you can have it send a notification email or an SNMP trap when triggered, or have it take one of the other actions highlighted in the screenshot below.

ha-13.PNG

Note: The actions available vary from alarm to alarm.
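For a quick scripted inventory of these defaults, a small sketch reusing content from the first example:

  # List the default vSphere HA alarms defined at the vCenter root folder
  alarm_mgr = content.alarmManager
  for alarm in alarm_mgr.GetAlarm(content.rootFolder):
      if "vSphere HA" in alarm.info.name:
          print(alarm.info.name, "- enabled:", alarm.info.enabled)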

vSphere HA issues can be analyzed by logging in to the vSphere Web Client and navigating to Hosts and Clusters > Cluster > Monitor > vSphere HA > Configuration Issues.

ha-11.PNG

                    Configure VMware Fault Tolerance for single and multi-vCPU virtual machines

One of the significant improvements introduced in vSphere 6.0 was Fault Tolerance (FT) support for multi-vCPU VMs. Before vSphere 6, FT could only protect a VM configured with a single vCPU. Fault Tolerance is used for the most mission-critical VMs and provides continuous availability by creating and maintaining a shadow VM that is identical in every aspect to the parent VM.

When a VM protected by FT fails, the shadow copy is made live instantaneously, without any downtime. The protected VM is called the Primary and the replicated one is called the Secondary. To know more about how FT works, please read this article which I wrote some time back.

Prerequisites for using FT

  • Host must have a license for the feature (Enterprise Plus)
  • Host must not be in standby or maintenance mode
  • Virtual Machine must not be in a disconnected or orphaned state.

To support SMP-FT, it is recommended to have a dedicated low-latency 10Gb network.
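If you do have such a network, the FT-logging role is assigned to a vmkernel adapter. A hedged sketch for one host (host is a vim.HostSystem you have already looked up; vmk1 is a placeholder for your FT-logging vmknic):

  # Mark vmk1 as this host's Fault Tolerance logging interface
  nic_mgr = host.configManager.virtualNicManager
  nic_mgr.SelectVnicForNicType("faultToleranceLogging", "vmk1")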

Unfortunately, in my lab I do not have the hardware (a 10-gig network) to configure a dedicated portgroup for FT logging. I also could not find any HOL lab on FT, as HOL uses a nested architecture and some CPU features aren’t available to the hypervisor.

I hope you find this post informative. Feel free to share it on social media if it is worth sharing. Be sociable 🙂
