DRS Automation Level and Migration Threshold

The DRS migration threshold lets you specify which recommendations are generated and then applied (when the virtual machines involved in the recommendation are in fully automated mode) or shown (if in manual mode). The threshold is also a measure of how much imbalance in host CPU and memory loads across the cluster is acceptable.

The slider is used to select one of five settings that range from the most conservative (1) to the most aggressive (5). The further the slider moves to the right, the more aggressively DRS works to balance the cluster.

These threshold values determine which recommendations are generated when DRS senses a cluster imbalance. The Conservative setting generates only priority-one recommendations, the next level to the right generates priority-two recommendations and higher (that is, priorities one and two), and so on, down to the Aggressive level, which applies recommendations of all five priorities. Read More
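
If you prefer to set these options from PowerCLI instead of the Web Client, a minimal sketch along the following lines should work (the vCenter and cluster names are only examples; the migration threshold is pushed through the cluster's API view, since the Set-Cluster builds I am assuming do not expose it directly):

Connect-VIServer -Server vcenter.lab.local

# Enable DRS and pick the automation level (Manual, PartiallyAutomated or FullyAutomated)
Set-Cluster -Cluster "Prod-Cluster" -DrsEnabled:$true -DrsAutomationLevel FullyAutomated -Confirm:$false

# Set the migration threshold (slider position 1-5) through the cluster config spec
$spec = New-Object VMware.Vim.ClusterConfigSpecEx
$spec.DrsConfig = New-Object VMware.Vim.ClusterDrsConfigInfo
$spec.DrsConfig.VmotionRate = 3    # middle of the slider; verify how 1-5 maps to conservative/aggressive in your build
(Get-Cluster -Name "Prod-Cluster").ExtensionData.ReconfigureComputeResource($spec, $true)

The later sketches in this post assume an existing Connect-VIServer session.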

Configuring EVC in vSphere 6

The evolution of vSphere has pushed hardware vendors to add more enhanced functionality to server hardware in order to bring the best out of virtualization. Enhanced vMotion Compatibility (EVC) comes into the picture when all ESXi hosts in a cluster are not identical, i.e., some hosts are from an older CPU generation and some from newer ones.

Over time an environment grows, and vSphere admins keep adding new ESXi hosts to a cluster as virtual machine resource demands dictate; this is when the mismatch occurs. When a cluster has ESXi hosts from different CPU generations, configuring EVC on the cluster ensures that virtual machine migrations between hosts in the cluster do not fail because of CPU feature incompatibilities.

When EVC is enabled for a cluster, all hosts in that cluster are configured to present identical CPU features and ensure CPU compatibility for vMotion. The features presented by each host are determined by selecting a predefined EVC baseline. Read More
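
A hedged PowerCLI sketch for the same task, assuming an existing Connect-VIServer session, an example cluster name, and a PowerCLI build whose Set-Cluster cmdlet exposes the -EVCMode parameter (the baseline key below is only an example):

# List the EVC baselines this vCenter supports (key plus CPU vendor/tier)
(Get-View ServiceInstance).Capability.SupportedEVCMode | Select-Object Key, Vendor, VendorTier

# Apply a baseline to the cluster
Set-Cluster -Cluster "Prod-Cluster" -EVCMode "intel-sandybridge" -Confirm:$false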

Configuring DPM in vSphere 6

What is vSphere Distributed Power Management (DPM)

Consolidation of physical servers into virtual machines that share host physical resources can result in significant reductions in the costs associated with hardware maintenance and power consumption.

vSphere Distributed Power Management provides additional power savings by dynamically consolidating workloads even further during periods of low resource utilization. Virtual machines are migrated onto fewer hosts and the unneeded ESXi hosts are powered off.

When virtual machines are idle (after business hours) and ESXi host utilization is very low, vCenter places the surplus host in standby mode to save power and, when the workload warrants additional resources, brings it back online. VMware DPM is an optional feature of VMware Distributed Resource Scheduler (DRS).

How does DPM actually work?

When you enable DPM on a cluster, vCenter Server can place an ESXi host in standby mode during periods of low utilization, but bringing that host back into service when resource demand increases can only be done by another ESXi host. Read More
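
DPM can also be switched on without the Web Client by pushing the cluster's DPM settings through the API from PowerCLI; a minimal sketch, assuming an existing session and an example cluster name:

$cluster = Get-Cluster -Name "Prod-Cluster"

$spec = New-Object VMware.Vim.ClusterConfigSpecEx
$spec.DpmConfig = New-Object VMware.Vim.ClusterDpmConfigInfo
$spec.DpmConfig.Enabled = $true
$spec.DpmConfig.DefaultDpmBehavior = "automated"   # "manual" only surfaces recommendations

$cluster.ExtensionData.ReconfigureComputeResource($spec, $true)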

ESXi Host Power Management Policies in vSphere 6

One of the advantages that virtualization brought with it was "POWER SAVINGS", as it enabled administrators to consolidate workloads onto a smaller number of physical servers and thus save power and reduce the carbon footprint of the datacenter. Sunny Dua rightly mentioned in his blog that "Even before you start realizing the other benefits of virtualization, power bills is the first Opex savings which makes that return on investment on virtualization speak for itself".

ESXi can take advantage of several power management features that the host hardware provides to adjust the trade-off between performance and power use. One obvious question that comes to mind is: if I can save more power by using the BIOS and hypervisor features to throttle down the CPU frequency, why shouldn't I go for it?

The answer is that selecting a high-performance policy provides more absolute performance, but at lower efficiency (performance per watt). Read More
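
The active host power policy can be inspected and changed from PowerCLI through the host's PowerSystem view; a sketch, assuming an existing session and an example host name:

$vmhost   = Get-VMHost -Name "esxi01.lab.local"
$powerSys = Get-View $vmhost.ExtensionData.ConfigManager.PowerSystem

# See which policies the hardware/BIOS offers and which one is active
$powerSys.Capability.AvailablePolicy | Select-Object Key, ShortName
$powerSys.Info.CurrentPolicy

# Switch policy by key (key 1 is typically High Performance; verify against the list above)
$powerSys.ConfigurePowerPolicy(1)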

Customize SSH and ESXi Shell Settings for Increased Security

The ESXi Shell provides access to maintenance commands and other configuration options. The ESXi Shell and SSH come in handy when certain tasks can't be done through the Web Client or other remote management tools.

Enabling local and remote shell access on ESXi hosts

Log in to the vSphere Web Client, select an ESXi host, and navigate to Manage > Settings > Security Profile. Under Services, click Edit.

[Image: serv-1.PNG]

We can enable/disable the services below and also change their startup policy (a PowerCLI equivalent is sketched after the screenshot):

  • Direct Console UI
  • ESXi Shell
  • SSH

[Image: serv-2.PNG]
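
The same services can be started and their startup policy changed with PowerCLI; a sketch, assuming an existing session and an example host name (service keys: TSM = ESXi Shell, TSM-SSH = SSH):

$vmhost = Get-VMHost -Name "esxi01.lab.local"

Get-VMHostService -VMHost $vmhost |
    Where-Object { $_.Key -eq "TSM" -or $_.Key -eq "TSM-SSH" } |
    ForEach-Object {
        Start-VMHostService -HostService $_ -Confirm:$false   # start the service now
        Set-VMHostService -HostService $_ -Policy "On"        # and start it with the host
    }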

Enabling SSH or the local shell through the DCUI

Go to the console of the host, press F2, and enter the ESXi host credentials.

Select Troubleshooting Options and hit Enter on each service you want to enable/disable.

[Image: serv-3.PNG]

Configuring the Timeout for the ESXi Shell

By default, the timeout for the ESXi Shell is disabled. The shell timeout setting lets you specify how long an inactive session is left open. Read More
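
Both shell timeouts live in the host's advanced settings, so they can be set from PowerCLI as well; a sketch assuming an existing session, an example host name, and a 900-second (15 minute) value:

$vmhost = Get-VMHost -Name "esxi01.lab.local"

# Idle timeout for interactive ESXi Shell/SSH sessions (seconds; 0 = disabled)
Get-AdvancedSetting -Entity $vmhost -Name "UserVars.ESXiShellInteractiveTimeOut" |
    Set-AdvancedSetting -Value 900 -Confirm:$false

# Availability timeout: how long the ESXi Shell/SSH services stay enabled (seconds; 0 = disabled)
Get-AdvancedSetting -Entity $vmhost -Name "UserVars.ESXiShellTimeOut" |
    Set-AdvancedSetting -Value 900 -Confirm:$false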

Enable and Configure ESXi Host Lockdown Mode

To enhance security in a virtualized environment, it is often advisable to limit direct access to ESXi hosts, and this is where the lockdown mode concept comes into the picture. Lockdown mode is used on ESXi hosts to improve the security of hosts that are centrally managed by vCenter Server.

When lockdown mode is enabled, the host is managed using the vSphere Client connected to the managing vCenter Server, VMware PowerCLI, or the VMware vSphere Command-Line Interface (vCLI). The only difference is that access is authenticated through vCenter Server instead of a local account on the ESXi host.

When lockdown mode is enabled, access to the host through SSH is unavailable except to configured exception users.

Lockdown mode in vSphere 6.0

With vSphere 6.0, VMware introduced a few new concepts related to lockdown mode, as listed below:

  • Normal Lockdown Mode
  • Strict Lockdown Mode
  • Exception Users

Let's understand these concepts one by one. Read More
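
For reference, the vSphere 6.0 API exposes these options through the host's HostAccessManager, which can be driven from PowerCLI; a rough sketch, assuming an existing session, an example host name, and a hypothetical exception-user account:

$vmhost    = Get-VMHost -Name "esxi01.lab.local"
$accessMgr = Get-View $vmhost.ExtensionData.ConfigManager.HostAccessManager

# Add an exception user before locking the host down ("svc-backup" is hypothetical)
$accessMgr.UpdateLockdownExceptions(@("svc-backup"))

# Change the mode: "lockdownNormal", "lockdownStrict" or "lockdownDisabled"
$accessMgr.ChangeLockdownMode("lockdownNormal")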

Backup and Restore Resource Pool Configurations

When DRS is disabled on a cluster, all the resource pools that are part of the cluster are removed, and the resource pool hierarchy and affinity rules are not re-established when DRS is turned back on.

Now, if you really need to disable DRS (for a maintenance activity) and want to save yourself the pain of re-creating resource pools and reconfiguring shares, limits, and so on, you can back up the resource pool configuration and restore it later, after the maintenance is complete and DRS is enabled again.

In my lab I created a resource pool named “RP-Edge” and placed one VM in this resource pool.

[Image: rpbkp-0.PNG]

When you disable DRS on a cluster, vSphere gives you an opportunity to save the resource pool tree to a file, which can be used later to restore the resource pool hierarchy.

Just click Yes in the warning window that is presented.

[Image: rpbkp-1]

Save the file on your local PC.

[Image: rpbkp-2]

At this point, the resource pool is gone and the Win-DR-Test VM is out of the resource pool.… Read More
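
If, alongside the snapshot file, you would like a scripted record of the pool settings (useful for checking later that everything came back as it was), here is a rough PowerCLI sketch, assuming an existing session, the example cluster name, and a hypothetical output path (property names as exposed by the PowerCLI ResourcePool object):

Get-ResourcePool -Location (Get-Cluster -Name "Prod-Cluster") |
    Select-Object Name, Parent,
                  CpuSharesLevel, NumCpuShares, CpuReservationMHz, CpuLimitMHz,
                  MemSharesLevel, NumMemShares, MemReservationGB, MemLimitGB |
    Export-Csv -Path "C:\Backup\rp-settings.csv" -NoTypeInformation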

Backup and Restore vDS Configurations

You can export vSphere distributed switch and distributed port group configurations to a file. The file preserves valid network configurations, enabling distribution of these configurations to other deployments.

This functionality is available only with the vSphere Web Client 5.1 or later. However, you can export settings from any version of a distributed switch if you use the vSphere Web Client 5.1 or later.

To export vSphere Distributed Switch configurations using the vSphere Web Client:
 
1: Browse to the distributed switch in the vSphere Web Client navigator, right-click the distributed switch, and click Settings > Export Configuration.
 

[Image: vds-bkp-1.PNG]

2: Select either Export the distributed switch configuration or Export the distributed switch configuration and all port groups.

[Image: vds-bkp-2.PNG]

3: Click Yes to save the configuration file to your local system. 

[Image: vds-bkp-3.PNG]

Select a location on your computer where you want to save the backup file and provide a name for it.

[Image: vds-bkp-4]

You now have a configuration file that contains all settings for the selected distributed switch and its distributed port groups. Read More
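
The same backup can be scripted if your PowerCLI build ships the vDS cmdlets (Export-VDSwitch and New-VDSwitch -BackupPath); a hedged sketch with example switch, datacenter and path names:

$vds = Get-VDSwitch -Name "Prod-vDS"

# Export the switch configuration (add -WithoutPortGroups to skip the port groups)
Export-VDSwitch -VDSwitch $vds -Destination "C:\Backup\Prod-vDS.zip" -Description "Pre-change backup"

# Later, recreate the switch from that file
New-VDSwitch -BackupPath "C:\Backup\Prod-vDS.zip" -Location (Get-Datacenter -Name "DC1") -KeepIdentifiers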

Distributed Switch Port Group Bindings

In a vSphere environment where a vDS is used for network connectivity, there are several options for the type of port binding to be used on a port group. Have you ever wondered which port binding setting is most suitable for your distributed port groups to get optimal performance?

In this post we will be talking about some use cases for the different types of port binding available with a vDS.

There are three types of port binding available at the port group level:

  1. Static Binding
  2. Dynamic Binding
  3. Ephemeral Binding

[Image: pb-1.PNG]

We will discuss these one by one.

Static Binding

When you connect a virtual machine to a port group configured with static binding, a port is immediately assigned and reserved for it, guaranteeing connectivity at all times. The port is disconnected only when the virtual machine is removed from the port group. You can connect a virtual machine to a static-binding port group only through vCenter Server. Read More
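
Creating a distributed port group with an explicit binding type can be done through the vDS API from PowerCLI; a sketch, assuming an existing session, an example switch, and a hypothetical port group name:

$vds = Get-VDSwitch -Name "Prod-vDS"

$spec          = New-Object VMware.Vim.DVPortgroupConfigSpec
$spec.Name     = "PG-VM-Static"
$spec.Type     = "earlyBinding"   # "earlyBinding" = static, "lateBinding" = dynamic, "ephemeral" = ephemeral
$spec.NumPorts = 128              # pre-allocated ports; not relevant for ephemeral binding

$vds.ExtensionData.AddDVPortgroup_Task(@($spec))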

Network IO Control in vSphere 6

In this post we will discuss what NIOC is and why we need it. We will also configure NIOC in the lab.

What is Network IO Control (NIOC)?

Network I/O Control (NIOC) was first introduced with vSphere 4.1. It is a vDS feature that allows a vSphere administrator to prioritize different types of network traffic by making use of resource pools, shares, limits, and so on. NIOC does for network traffic what SIOC does for storage traffic.

What problem NIOC is solving?

In the old days, physical servers were equipped with as many as 8 (or more) Ethernet cards, and administrators (as a best practice) configured vSphere to use dedicated NICs for the various traffic types, such as management, vMotion, or fault tolerance. Managing that many physical cables was a bit cumbersome.

Modern-day servers address this issue by offering support for 10 Gbps/40 Gbps network speeds; these servers have only 2 NICs, and all the traffic is passed via these 2 NICs. Read More
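
Worth noting before the lab walkthrough: NIOC itself is a single flag on the vDS that can be flipped from the API; a minimal PowerCLI sketch, assuming an existing session and an example switch name:

$vds = Get-VDSwitch -Name "Prod-vDS"

# Enable Network I/O Control on the distributed switch
$vds.ExtensionData.EnableNetworkResourceManagement($true)

# Confirm the flag took effect
(Get-VDSwitch -Name "Prod-vDS").ExtensionData.Config.NetworkResourceManagementEnabled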