DRS Automation Level and Migration Threshold

The DRS migration threshold lets you specify which recommendations are generated and then applied (when the virtual machines involved in the recommendation are in fully automated mode) or displayed (in manual mode). The threshold is also a measure of how much imbalance across host CPU and memory loads is acceptable in the cluster.

A slider is used to select one of five settings, ranging from the most conservative (1) to the most aggressive (5). The further the slider moves to the right, the more aggressively DRS works to balance the cluster. Read More
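As a sketch, the automation level and migration threshold can also be set from PowerCLI via the cluster's DRS configuration spec. The cluster name `Cluster01` is a placeholder, and note that the API's `VmotionRate` scale is commonly reported as inverted relative to the UI slider, so verify the value in your environment:

```powershell
# Enable DRS in fully automated mode on the cluster
$cluster = Get-Cluster -Name "Cluster01"
$cluster | Set-Cluster -DrsEnabled:$true -DrsAutomationLevel FullyAutomated -Confirm:$false

# The migration threshold is not a Set-Cluster parameter; use the API spec instead
$spec = New-Object VMware.Vim.ClusterConfigSpecEx
$spec.DrsConfig = New-Object VMware.Vim.ClusterDrsConfigInfo
$spec.DrsConfig.VmotionRate = 3    # 1..5; middle of the slider
($cluster | Get-View).ReconfigureComputeResource_Task($spec, $true)
```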

Configuring EVC in vSphere 6

The evolution of vSphere has pushed hardware vendors to add more enhanced functionality to server hardware in order to get the best out of virtualization. Enhanced vMotion Compatibility (EVC) comes into the picture when the ESXi hosts in a cluster are not all identical, i.e., some hosts are from an older generation and some from newer ones.

Over time an environment grows, and vSphere admins keep adding new ESXi hosts to a cluster as virtual machine resource demands dictate; this is when the mismatch occurs. When a cluster has ESXi hosts from different CPU generations, configuring EVC on the cluster ensures that virtual machine migrations between hosts in the cluster do not fail because of CPU feature incompatibilities. Read More
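Recent PowerCLI versions expose EVC directly on Set-Cluster; a minimal sketch, where the cluster name and the EVC mode key are placeholders (check which baselines your CPUs actually support before picking one):

```powershell
# Enable EVC at the Intel Sandy Bridge baseline on an existing cluster
Get-Cluster -Name "Cluster01" | Set-Cluster -EVCMode "intel-sandybridge" -Confirm:$false

# Verify the resulting EVC mode
(Get-Cluster -Name "Cluster01").EVCMode
```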

Configuring DPM in vSphere 6

What is vSphere Distributed Power Management (DPM)

Consolidation of physical servers into virtual machines that share host physical resources can result in significant reductions in the costs associated with hardware maintenance and power consumption.

vSphere Distributed Power Management provides additional power savings by dynamically consolidating workloads even further during periods of low resource utilization. Virtual machines are migrated onto fewer hosts and the unneeded ESXi hosts are powered off. Read More
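DPM has no dedicated PowerCLI cmdlet, but it can be toggled through the same cluster reconfiguration API used for DRS; a hedged sketch (the cluster name is a placeholder):

```powershell
# Enable DPM in automated mode via the cluster config spec
$spec = New-Object VMware.Vim.ClusterConfigSpecEx
$spec.DpmConfig = New-Object VMware.Vim.ClusterDpmConfigInfo
$spec.DpmConfig.Enabled = $true
$spec.DpmConfig.DefaultDpmBehavior = "automated"   # use "manual" to only recommend
(Get-Cluster -Name "Cluster01" | Get-View).ReconfigureComputeResource_Task($spec, $true)
```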

ESXi Host Power Management Policies in vSphere 6

One of the advantages that virtualization brought with it was power savings, as it enabled administrators to consolidate workloads onto a smaller number of physical servers, saving power and reducing the carbon footprint of the datacenter. Sunny Dua rightly mentioned in his blog that "Even before you start realizing the other benefits of virtualization, power bills is the first Opex savings which makes that return on investment on virtualization speak for itself".

ESXi can take advantage of several power management features that the host hardware provides to adjust the trade-off between performance and power use. An obvious question comes to mind: if I can save more power by using BIOS features and hypervisor features to throttle down the CPU frequency, why shouldn't I go for it? Read More
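As a rough sketch, the active host power policy can be inspected and changed through the host's PowerSystem managed object from PowerCLI. The host name is a placeholder, and the available policy keys vary by hardware, so enumerate them rather than hard-coding a key:

```powershell
# Look up the host's power system and list the policies the hardware offers
$vmhost = Get-VMHost -Name "esxi01.lab.local"
$powerSys = Get-View $vmhost.ExtensionData.ConfigManager.PowerSystem
$powerSys.Capability.AvailablePolicy | Select-Object Key, ShortName

# Switch to the Balanced policy (shortName "dynamic" on most hosts)
$balanced = $powerSys.Capability.AvailablePolicy | Where-Object { $_.ShortName -eq "dynamic" }
$powerSys.ConfigurePowerPolicy($balanced.Key)
```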

Customize SSH and ESXi Shell Settings for Increased Security

The ESXi Shell provides access to maintenance commands and other configuration options. The ESXi Shell and SSH come in handy when certain tasks can't be done through the Web Client or other remote management tools.

Enabling local and remote shell access on ESXi hosts

Log in to the vSphere Web Client, select an ESXi host, navigate to Manage > Settings > Security Profile, and under Services click Edit.


We can enable/disable the services listed below and also change their startup method: Read More
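The same services can be managed from PowerCLI; a small sketch using the service keys TSM (ESXi Shell) and TSM-SSH (the host name is a placeholder):

```powershell
# Start SSH and the ESXi Shell, and set them to start and stop with the host
$vmhost = Get-VMHost -Name "esxi01.lab.local"
Get-VMHostService -VMHost $vmhost |
    Where-Object { $_.Key -in "TSM", "TSM-SSH" } |
    ForEach-Object {
        Set-VMHostService -HostService $_ -Policy "on" -Confirm:$false
        Start-VMHostService -HostService $_ -Confirm:$false
    }
```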

Enable and Configure ESXi Host Lockdown Mode

To enhance security in a virtualized environment, it is often advisable to limit direct access to ESXi hosts, and this is where the lockdown mode concept comes into the picture. Lockdown mode is used on ESXi hosts to improve the security of hosts that are centrally managed by vCenter Server.

When lockdown mode is enabled, the host is managed using the vSphere Client connected to the managing vCenter Server, VMware PowerCLI, or the VMware vSphere Command-Line Interface (vCLI). The only difference is that access is authenticated through vCenter Server instead of using a local account on the ESXi host. Read More
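For reference, lockdown mode can also be flipped from PowerCLI through the HostSystem API object; a minimal sketch (the host name is a placeholder, and the host must be managed by vCenter):

```powershell
# Enter lockdown mode on a vCenter-managed host
$hostView = Get-VMHost -Name "esxi01.lab.local" | Get-View
$hostView.EnterLockdownMode()

# ...and leave it again when direct host access is needed
$hostView.ExitLockdownMode()
```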

Backup and Restore Resource Pool Configurations

When DRS is disabled on a cluster, all resource pools that are part of the cluster are removed, and the resource pool hierarchy and affinity rules are not re-established when DRS is turned back on.

Now, if you really need to disable DRS (for a maintenance activity, say) and want to save yourself the pain of re-creating resource pools and configuring shares/limits etc., you can take a backup of the resource pools and restore it later, after completing the maintenance and enabling DRS again.

In my lab I created a resource pool named “RP-Edge” and placed one VM in this resource pool. Read More
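One simple, hedged way to snapshot the settings before disabling DRS is to dump each pool's shares, reservations and limits to CSV with PowerCLI. The cluster name and output path are placeholders, and property names may differ slightly between PowerCLI versions:

```powershell
# Export resource pool settings so the pools can be re-created after DRS is re-enabled
Get-Cluster -Name "Cluster01" | Get-ResourcePool |
    Select-Object Name, Parent,
        CpuSharesLevel, NumCpuShares, CpuReservationMhz, CpuLimitMhz,
        MemSharesLevel, NumMemShares |
    Export-Csv -Path "C:\backup\resource-pools.csv" -NoTypeInformation
```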

Backup and Restore vDS Configurations

You can export vSphere distributed switch and distributed port group configurations to a file. The file preserves valid network configurations, enabling distribution of these configurations to other deployments.

This functionality is available only with vSphere Web Client 5.1 or later. However, you can export settings from any version of a distributed switch if you use vSphere Web Client 5.1 or later.

To export vSphere Distributed Switch configurations using the vSphere Web Client:

1: Browse to a distributed switch in the vSphere Web Client navigator, right-click the distributed switch, and click Settings > Export Configuration.


2: Select either the Export the distributed switch configuration option or the Export the distributed switch configuration and all port groups option. Read More
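PowerCLI offers the same export/restore workflow; a sketch in which the switch name, datacenter name and backup path are placeholders:

```powershell
# Export the vDS configuration (including its port groups) to a zip file
Get-VDSwitch -Name "dvSwitch01" | Export-VDSwitch -Destination "C:\backup\dvSwitch01.zip"

# Later, re-create the switch in a datacenter from that backup
New-VDSwitch -BackupPath "C:\backup\dvSwitch01.zip" -Location (Get-Datacenter -Name "DC01")
```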

Distributed Switch Port Group Bindings

In a vSphere environment where a vDS is used for network connectivity, there are several options for the type of port binding to use for a port group. Have you ever wondered which port binding setting is most suitable for your distributed port groups to get optimal performance?

In this post we will talk about some use cases for the different types of port binding available with a vDS.

There are three types of port binding available at the port group level:

  1. Static Binding
  2. Dynamic Binding
  3. Ephemeral Binding

Read More
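For illustration, the binding type is chosen when a distributed port group is created; a hedged PowerCLI sketch (switch and port group names are placeholders):

```powershell
# Create distributed port groups with different binding types
$vds = Get-VDSwitch -Name "dvSwitch01"
New-VDPortgroup -VDSwitch $vds -Name "PG-Static"    -PortBinding "Static"     # the default
New-VDPortgroup -VDSwitch $vds -Name "PG-Ephemeral" -PortBinding "Ephemeral"  # usable even when vCenter is down
```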

Network IO Control in vSphere 6

In this post we will discuss what NIOC is and why we need it. We will also configure NIOC in the lab.

What is Network IO Control (NIOC)?

Network I/O Control (NIOC) was first introduced with vSphere 4.1. It is a vDS feature that allows a vSphere administrator to prioritize different types of network traffic by making use of resource pools, shares/limits, etc. NIOC does for network traffic what SIOC does for storage traffic.

What problem NIOC is solving?

In the old days, physical servers were equipped with as many as 8 (or more) Ethernet cards, and administrators (as a best practice) configured vSphere to use a dedicated NIC for each type of network traffic, such as management, vMotion, or fault tolerance. Managing that many physical cables was a bit cumbersome. Read More
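Before carving up shares, NIOC itself has to be enabled on the switch; one hedged way from PowerCLI is the API method on the vDS view (the switch name is a placeholder):

```powershell
# Turn on Network I/O Control for a distributed switch
$vds = Get-VDSwitch -Name "dvSwitch01"
$vds.ExtensionData.EnableNetworkResourceManagement($true)

# Confirm that it took effect
(Get-VDSwitch -Name "dvSwitch01").ExtensionData.Config.NetworkResourceManagementEnabled
```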

vSwitch NIC Teaming and Network Failure Detection Policies

What is NIC Teaming and why you need it?

Uplinks are what provide connectivity between a vSwitch and a physical switch. These uplinks pass all the traffic generated by virtual machines or the VMkernel adapters.

But what happens when that physical network adapter fails, when the cable connecting that uplink to the physical network fails, or when the upstream physical switch to which that uplink is connected fails? With a single uplink, network connectivity to the entire vSwitch and all of its ports or port groups is lost. This is where NIC teaming comes in. Read More
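A quick PowerCLI sketch of reviewing and changing these policies on a standard vSwitch (the host, switch and vmnic names are placeholders):

```powershell
# Inspect the current teaming and failover-detection settings
$vswitch = Get-VirtualSwitch -VMHost (Get-VMHost -Name "esxi01.lab.local") -Name "vSwitch0"
$vswitch | Get-NicTeamingPolicy

# Team two uplinks, balance by originating port ID, detect failures via link status
$vswitch | Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -LoadBalancingPolicy LoadBalanceSrcId `
                         -NetworkFailoverDetectionPolicy LinkStatus `
                         -MakeNicActive "vmnic0", "vmnic1"
```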

Switch Discovery Protocols

In the physical networking space, switches are connected to one or more adjacent switches, forming a web of switches that can talk to each other. This web of switches is referred to as the "neighbourhood of switching".

Virtual switches (standard or vDS) are connected to these physical switches via physical uplinks. These uplinks terminate at particular ports of the physical switch, and each such port has characteristics, like a VLAN ID, defined on it. These characteristic values are not exposed to virtual switches by default. Read More
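On a vDS, the discovery protocol and its direction can be set with PowerCLI; a minimal sketch (the switch name is a placeholder):

```powershell
# Use LLDP and both listen for and advertise switch information
Get-VDSwitch -Name "dvSwitch01" |
    Set-VDSwitch -LinkDiscoveryProtocol LLDP -LinkDiscoveryProtocolOperation Both
```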

Configuring QoS and Traffic Filtering in vSphere 6

During my VCAP6-Deploy exam preparation, I found this topic quite interesting, and difficult as well, as I had never laid my hands on anything Quality of Service related with respect to networking. My concepts were also not very clear on topics like DSCP, QoS, and CoS, so I decided to learn more about them this time and write a blog post on the same.

What is Quality of Service (QoS) and Traffic filtering?

In a vSphere Distributed Switch 5.5 and later, the traffic filtering and marking policy lets you protect the virtual network from unwanted traffic and security attacks, or apply a QoS tag to a certain type of traffic. Read More

Configuring and Managing VMkernel TCP/IP Stacks

While working through the VCAP6-Deploy blueprint, I stumbled upon the topic of TCP/IP stack configuration. I have seen it many times in my lab while configuring networking and had a basic idea of the purpose of a custom TCP/IP stack. Surprisingly, this feature has been around since vSphere 5.5, but I never noticed it.

I decided to explore the TCP/IP stack in my lab and share my experience through this write-up. I will start with the very basics.

What is the VMkernel TCP/IP stack and why use it?

The purpose of a TCP/IP stack configuration on VMware vSphere hosts is to set up the networking parameters that allow communication between the hosts themselves, the virtual machines, other virtual appliances, and, last but not least, the network storage. Read More
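To see which stacks a host currently has, one hedged option is the esxcli interface exposed through PowerCLI (the host name is a placeholder):

```powershell
# List the VMkernel TCP/IP stacks on a host (e.g. defaultTcpipStack, vmotion, vSphereProvisioning)
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esxi01.lab.local") -V2
$esxcli.network.ip.netstack.list.Invoke()
```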

Configuring and Administering Storage Policies in vSphere 6

vSphere storage profiles were first introduced with vSphere 5, then renamed to storage policies with the release of vSphere 5.5.

Storage policy aims to logically separate storage by using storage capabilities and storage profiles in order to guarantee a predesignated quality-of-service of storage resources to a virtual machine.

The storage policy is used to map the defined storage capabilities to a virtual machine, and specifically to its virtual disks. The policy is based on the storage requirements for each virtual disk that guarantee a certain storage feature or capability. Read More
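As a closing sketch, policies and per-VM compliance can be inspected and reassigned with the SPBM cmdlets in PowerCLI (the VM and policy names are placeholders):

```powershell
# List the storage policies defined in vCenter
Get-SpbmStoragePolicy | Select-Object Name, Description

# Check which policy a VM's home and disks use and whether they are compliant
Get-VM -Name "VM01" | Get-SpbmEntityConfiguration

# Assign a different policy to the VM's home and disks
$policy = Get-SpbmStoragePolicy -Name "Gold-Storage"
Get-VM -Name "VM01" | Get-SpbmEntityConfiguration |
    Set-SpbmEntityConfiguration -StoragePolicy $policy
```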