Disclaimer: Most of the concepts explained in this post come from Duncan's and Frank's book "Clustering Deep Dive". I am just summarizing the important concepts I learned from that book. If this post is helpful to you, you can say thanks to @DuncanYB and @FrankDenneman for writing this amazing book.

Now let's begin with learning.

Introduction to DPM (Distributed Power Management)

DPM provides power savings by dynamically sizing the cluster capacity to match the virtual machine resource demand. DPM dynamically consolidates virtual machines onto fewer ESXi hosts and powers down excess ESXi hosts during periods of low resource utilization. If the resource demand increases, ESXi hosts are powered back on and the virtual machines are redistributed among all available ESXi hosts in the cluster.

DPM Automation Levels: There are three automation levels:

1) Off: DPM is not enabled on the cluster and no power-off recommendations will be issued.

2) Manual: DPM will provide recommendations for powering off ESXi hosts, but the administrator has to apply those recommendations manually.

3) Automatic: A power recommendation will be generated and executed automatically; no user intervention is required.

By default, when DPM is enabled on a cluster, all ESXi hosts inherit the DPM automation level defined at the cluster level. But we can also set the DPM automation level per ESXi host. A DPM automation level defined at the ESXi host level will override the cluster-level setting.

Note: Templates registered on an ESXi host are not moved when DPM is set to automatic mode and a power-off recommendation has been issued for that host.

How does DPM work in the background to provide power-off recommendations?
DPM calculates a Target Resource Utilization Range for the cluster, and if the resource utilization of an ESXi host is below that range, DPM will provide a power-off recommendation for that host.

How is the Target Resource Utilization Range computed?

DPM uses the following formula to calculate the Target Resource Utilization Range:

Target Resource Utilization Range = Demand Capacity Ratio Target ± Demand Capacity Ratio Tolerance Host

By default, Demand Capacity Ratio Target is set to 63% and
Demand Capacity Ratio Tolerance Host is set to 18%, so:

Target Resource Utilization Range = [63 − 18 to 63 + 18] = [45 to 81]

So when the resource utilization of an ESXi host is below 45%, a power-off recommendation is generated, and when the resource utilization of a host reaches 81% (for either CPU usage or memory usage), DPM will provide power-on recommendations and power on standby ESXi hosts.

Note: The Demand Capacity Ratio Target and Demand Capacity Ratio Tolerance Host values can be modified in the advanced settings of DPM.

Demand Capacity Ratio Target can be set from 40% to 90%,
and Demand Capacity Ratio Tolerance Host can be set from 10% to 40%.
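The range calculation and the resulting recommendations can be sketched in a few lines of Python. This is only an illustration of the formula above, not VMware's actual implementation; the function and variable names are my own.

```python
# Defaults from the DPM advanced settings described above.
DEMAND_CAPACITY_RATIO_TARGET = 63     # tunable from 40 to 90
DEMAND_CAPACITY_RATIO_TOLERANCE = 18  # tunable from 10 to 40

def target_range(target=DEMAND_CAPACITY_RATIO_TARGET,
                 tolerance=DEMAND_CAPACITY_RATIO_TOLERANCE):
    """Return the (low, high) Target Resource Utilization Range in percent."""
    return target - tolerance, target + tolerance

def recommendation(cpu_util, mem_util):
    """Classify a host by its CPU and memory utilization (percent)."""
    low, high = target_range()
    # Either resource crossing the upper bound triggers a power-on evaluation.
    if cpu_util >= high or mem_util >= high:
        return "power-on"
    # Both resources below the lower bound make the host a power-off candidate.
    if cpu_util < low and mem_util < low:
        return "power-off candidate"
    return "no action"

print(target_range())          # (45, 81)
print(recommendation(30, 40))  # power-off candidate
print(recommendation(85, 50))  # power-on
```

With the defaults this reproduces the [45 to 81] range from the formula above.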

DPM calculates the average virtual machine demand over a historical period of time, and it uses different time intervals for power-off and power-on recommendations.

For a power-off recommendation, DPM analyzes the average VM demand over the last 40 minutes.
For a power-on recommendation, DPM analyzes VM workloads over the last 5 minutes only.

Note: For DPM, performance has a higher priority than power saving.
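The asymmetric windows are what encode that priority. A short sketch with made-up one-minute demand samples (illustrative values only, not VMware's algorithm) shows why: a sudden spike dominates the 5-minute power-on view immediately, while the 40-minute power-off view stays low, so DPM reacts quickly to rising demand but only powers hosts down on sustained low demand.

```python
def average_demand(samples, window_minutes, sample_interval_minutes=1):
    """Average the most recent `window_minutes` worth of demand samples."""
    n = window_minutes // sample_interval_minutes
    recent = samples[-n:]
    return sum(recent) / len(recent)

# 45 minutes of synthetic demand (percent): a long quiet period, then a spike.
samples = [20] * 40 + [90] * 5

print(average_demand(samples, 40))  # 28.75 -> no power-off yet looks wrong? No:
                                    # still below 45, so power-off logic sees calm
print(average_demand(samples, 5))   # 90.0 -> the power-on logic sees the spike
```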

How host selection is done for power-off recommendations

Before selecting ESXi hosts for power-off operations, DPM sorts the hosts in a specific order. If a cluster contains both hosts in DPM automatic mode and hosts in DPM manual mode, they are placed in different groups. Hosts in the automatic mode group are preferred over hosts in the manual mode group.

Host selection when cluster contains heterogeneous sized hosts:

In this case, where hosts with different resource capacities are present, DPM gives preference to powering off hosts with smaller capacity over hosts with larger capacity.

Host selection when cluster contains homogeneous sized hosts:

In the case of homogeneous sized hosts, DPM gives preference to hosts running fewer VMs, i.e. the least loaded hosts. Heavily loaded hosts will be powered off last.
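The ordering rules above can be expressed as a single sort key: automatic mode before manual, smaller capacity before larger, fewer VMs before more. A minimal sketch with hypothetical host data (the host names and capacities are invented for illustration):

```python
hosts = [
    {"name": "esx1", "mode": "automatic", "capacity_ghz": 40, "vms": 8},
    {"name": "esx2", "mode": "manual",    "capacity_ghz": 24, "vms": 2},
    {"name": "esx3", "mode": "automatic", "capacity_ghz": 24, "vms": 3},
    {"name": "esx4", "mode": "automatic", "capacity_ghz": 24, "vms": 1},
]

# Sort for power-off evaluation: automatic-mode hosts first (False sorts
# before True), then smaller capacity, then lighter load.
order = sorted(hosts, key=lambda h: (h["mode"] != "automatic",
                                     h["capacity_ghz"],
                                     h["vms"]))
print([h["name"] for h in order])  # ['esx4', 'esx3', 'esx1', 'esx2']
```

So the small, lightly loaded, automatic-mode host is evaluated first, and the manual-mode host last.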

How host selection is done for power-on recommendations

If the resource utilization goes high inside the cluster, DPM considers generating host power-on recommendations. Before selecting an ESXi host for power on, DPM reviews the standby hosts inside the cluster and sorts them in a specific order for DPM power on evaluation process.

Similar to power-off recommendations, ESXi hosts in automatic mode are evaluated before ESXi hosts in manual mode for power-on recommendations.

Host selection when cluster contains heterogeneous sized hosts:

In a cluster containing heterogeneous sized hosts, the ESXi hosts with a larger capacity with regard to the critical resources are favored. This makes sense: if a host with larger capacity is brought back online first, DRS can accommodate more VMs on that host.

Host selection when cluster contains homogeneous sized hosts:

If the sort process discovers hosts that are equal with respect to capacity or evacuation cost, DPM will randomize the order of those hosts.
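The power-on ordering can be sketched the same way: automatic mode first, then larger capacity first, with ties broken randomly. One simple way to get random tie-breaking in Python is to shuffle before a stable sort (hypothetical host data again; this mimics the described behavior, not VMware's code):

```python
import random

standby = [
    {"name": "esx5", "mode": "automatic", "capacity_ghz": 24},
    {"name": "esx6", "mode": "automatic", "capacity_ghz": 48},
    {"name": "esx7", "mode": "manual",    "capacity_ghz": 48},
    {"name": "esx8", "mode": "automatic", "capacity_ghz": 24},
]

# Shuffle first: Python's sort is stable, so equal hosts keep the
# randomized relative order after sorting.
random.shuffle(standby)
order = sorted(standby, key=lambda h: (h["mode"] != "automatic",
                                       -h["capacity_ghz"]))
# esx6 (large, automatic) comes first; esx5/esx8 tie in random order;
# esx7 (manual mode) is evaluated last.
print([h["name"] for h in order])
```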

Important Note: The sorting of hosts for power-on or power-off recommendations does not determine the actual order in which hosts are selected to be powered on or off.

DPM Power-Off Cost/Benefit Analysis

Before DPM generates a power-off recommendation, it calculates the costs associated with powering down a host. The following costs are taken into account:

- Migrating virtual machines off the candidate host
- Power consumed during the power-down period
- Unavailable resources of the candidate host during power-down
- Loss of performance if the candidate host's resources are required to meet workload demand while the candidate host is powered off
- Unavailability of candidate host resources during the power-up period
- Power consumed during the power-up period
- Cost of migrating virtual machines back onto the candidate host

DPM runs the power-off cost/benefit analysis and compares the cost and risk involved in the power-off operation against the benefits of powering off the host:

If Benefits of powering off the host >= Performance Impact * Power Performance Ratio, the power-off recommendation is accepted;
otherwise, the power-off recommendation is rejected.

Note: The default value of Power Performance Ratio is 40, but it can be set from 0 to 500.
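The acceptance rule boils down to one comparison. A sketch of that rule as described above (the function name and the example cost/benefit numbers are mine; the units of both sides are whatever DPM's internal cost model uses):

```python
POWER_PERFORMANCE_RATIO = 40  # default; valid range 0 to 500

def accept_power_off(benefit, performance_impact,
                     ratio=POWER_PERFORMANCE_RATIO):
    """Accept a power-off recommendation only if the power-saving benefit
    meets or exceeds the performance impact scaled by the ratio."""
    return benefit >= performance_impact * ratio

print(accept_power_off(benefit=4000, performance_impact=50))  # True
print(accept_power_off(benefit=1000, performance_impact=50))  # False
```

Raising the ratio makes power-off recommendations harder to accept, which is consistent with DPM prioritizing performance over power saving.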

How is the Power-off Cost and Benefit Calculation done?

The power-off benefit analysis calculates the StableOffTime value, which indicates the amount of time the candidate host is expected to remain powered off until the cluster needs its resources because of an anticipated increase in virtual machine workload.

StableOffTime = ClusterStableTime − (HostEvacuationTime + HostPowerOffTime)


ClusterStableTime = the time during which VM workloads are stable and no power-on operations are needed

HostEvacuationTime = the time taken by DRS to evacuate the host by migrating VMs off it

HostPowerOffTime = the time taken to put the host into standby mode

Note: VM stable time is calculated by the DRS cost-benefit analysis and acts as input for ClusterStableTime.

The power-off cost is the summation of three resource costs:

Power-off Cost = cost of migrating active VMs from the candidate host to other hosts + unsatisfied VM demand during the power-on period of the candidate host + cost of migrating VMs back onto the candidate host
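Both formulas are straight arithmetic, so they can be written down directly. The example values below are invented (all times in minutes, costs in arbitrary units); this only illustrates the equations above:

```python
def stable_off_time(cluster_stable_time, host_evacuation_time,
                    host_power_off_time):
    """Expected time the candidate host can remain powered off before
    the cluster needs its resources again."""
    return cluster_stable_time - (host_evacuation_time + host_power_off_time)

def power_off_cost(migrate_off_cost, unsatisfied_demand_cost,
                   migrate_back_cost):
    """Sum of the three resource costs of a power-off operation."""
    return migrate_off_cost + unsatisfied_demand_cost + migrate_back_cost

# 120 min of stable cluster time, 10 min to evacuate, 5 min to power down:
print(stable_off_time(120, 10, 5))  # 105 minutes of useful off-time
print(power_off_cost(3, 4, 5))      # 12
```

A StableOffTime of zero or less would mean the host cannot stay off long enough to yield any benefit.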

How does DPM bring standby hosts back to a powered-on state?
DPM uses either WOL (Wake-on-LAN) packets, IPMI v1.5, or HP iLO technology to bring standby hosts back to a powered-on state when the workload inside the cluster increases. For IPMI or iLO to work, the server must contain a Baseboard Management Controller (BMC). The BMC provides access to the server hardware from vCenter over a LAN connection.

What if the server is not an HP server or doesn't support IPMI v1.5?
If a server is not an HP server or doesn't support IPMI, DPM uses WOL "Magic Packets" to bring ESXi hosts out of standby mode. The Magic Packet is sent over the vMotion network by one of the powered-on ESXi hosts that is part of the cluster.

Note: When both a BMC wake method and WOL are present and operational, DPM uses the BMC by default. The protocol selection order is:

1: IPMI

2: iLO

3: WOL
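The WOL Magic Packet itself has a simple, well-defined format: six 0xFF bytes followed by the target NIC's MAC address repeated 16 times, broadcast over UDP. Here is a minimal sketch of building and sending one (this is the standard WOL format, not DPM's code; the broadcast address and port are typical choices, not DPM specifics):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a WOL Magic Packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9):
    """Broadcast the Magic Packet over UDP; DPM sends its packet on the
    vMotion network from a powered-on host in the cluster."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

pkt = magic_packet("00:11:22:33:44:55")
print(len(pkt))  # 102 bytes: 6 + 16 * 6
```

This is why the target host's NIC must support WOL and stay listening while the host is in standby.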

Posted in: Vmware.
Last Modified: October 25, 2014
