TKG Series-Part 2: Deploy TKG Management Cluster

In the first post of this series, I discussed the prerequisites that you should meet before attempting to install TKG on vSphere. In this post I will demonstrate the TKG management cluster deployment. You can create a TKG management cluster using either the UI or the CLI. Since I am a fan of the CLI method, that is what I have used in my deployment.

Step 1: Prepare config.yaml file

When the TKG CLI is installed and the tkg command is invoked for the first time, it creates a hidden folder named .tkg under the user's home directory. This folder contains the config.yaml file which TKG leverages to deploy the management cluster.

The default yaml file does not contain the infrastructure details such as the vCenter address, vCenter credentials, IP addresses, etc. to be used in the TKG deployment. We have to populate these infrastructure details in the config.yaml file manually.

Below is the yaml file which I used in my environment.

Note: VSPHERE_SSH_AUTHORIZED_KEY is the public key that we generated earlier on the bootstrap VM. Paste the contents of the .ssh/id_rsa.pub file into this variable.
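
For reference, here is a minimal sketch of the vSphere-related variables that typically need to be filled in within config.yaml. All values below are placeholders from a hypothetical lab, and variable names can differ slightly between TKG versions, so treat this as an illustration rather than a copy-paste template.

VSPHERE_SERVER: 192.168.1.10
VSPHERE_USERNAME: administrator@vsphere.local
VSPHERE_PASSWORD: <vcenter-password>
VSPHERE_DATACENTER: /MJ-DC
VSPHERE_DATASTORE: /MJ-DC/datastore/vsanDatastore
VSPHERE_NETWORK: VM Network
VSPHERE_RESOURCE_POOL: /MJ-DC/host/MJ-Cluster/Resources/TKG
VSPHERE_FOLDER: /MJ-DC/vm/TKG
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAA... (paste the full contents of .ssh/id_rsa.pub here)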

Step 2: TKG Cluster Creation Workflow

Before attempting to deploy TKG clusters, let's learn about the cluster creation workflow. The following sequence of events takes place during TKG cluster provisioning:

  • The tkg CLI command on the bootstrap VM is used to initiate the creation of the TKG management cluster.
  • On the bootstrap VM, a bootstrap Kubernetes cluster is created using kind (you can peek at this cluster while it exists; see the example after this list).
  • CRDs, controllers, and actuators are installed into the bootstrap cluster.
  • The bootstrap cluster provisions the management cluster on the destination infrastructure (vSphere or AWS), installs the CRDs there, and then copies the Cluster API objects into the management cluster.
  • The bootstrap cluster is then destroyed on the bootstrap VM.
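
While tkg init is running, you can optionally peek at the temporary bootstrap cluster from a second SSH session on the bootstrap VM. This is purely informational and assumes kind and docker are present on the bootstrap VM (they are needed to run the bootstrap cluster anyway):

# kind get clusters
# docker ps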

Step 3: Deploy Management Cluster

The management cluster is created using the tkg init command.

# tkg init -i vsphere -p dev -v 6 --name mj-tkgm01

Here:

  • -i stands for infrastructure. TKG can be deployed on both vSphere and Amazon EC2.
  • -p stands for plan. For a lab/PoC, the dev plan is sufficient. In a production environment, we use the prod plan (see the example after the note below).

Note: In the dev plan, one control plane node and one worker node are deployed. In the prod plan, there are three control plane nodes and three worker nodes.
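
For example, a production-plan deployment of the same management cluster would look like the line below (assuming the CLI plan name prod; the UI labels it Production):

# tkg init -i vsphere -p prod -v 6 --name mj-tkgm01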

Note: To create a management cluster with custom node sizes for the control plane, worker, and load balancer VMs, we can specify the --controlplane-size, --worker-size, and --haproxy-size options with values of extra-large, large, medium, or small. Specifying these flags overrides the values provided in the config.yaml file.

The table below summarizes the resource allocation for the control plane, worker, and HAProxy VMs when deployed using a custom size.

Appliance Size    vCPU    Memory (GB)    Storage (GB)
Small             1       2              20
Medium            2       4              40
Large             2       8              40
Extra Large       4       16             80
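
As an illustration, a tkg init invocation that overrides the node sizes might look like the line below. The size combination shown is just an example for a lab, not something taken from my deployment:

# tkg init -i vsphere -p dev -v 6 --name mj-tkgm01 --controlplane-size medium --worker-size medium --haproxy-size small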


The log entries below are recorded on screen while the management cluster creation is in progress.

Once the management cluster creation task is finished, three new VMs will appear in the resource pool selected for deployment: one load balancer, one control plane node, and one worker node.


Step 4: Verify Management Cluster Details

1: List all clusters
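
A minimal example, assuming the TKG 1.x CLI syntax (the exact command may vary by version):

# tkg get cluster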

2: Get details of management cluster
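
Again assuming the TKG 1.x CLI syntax:

# tkg get management-cluster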

3: Get context of all clusters
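
The management cluster context ends up in the regular kubeconfig, so kubectl can list it along with any others:

# kubectl config get-contexts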

4: Use Management Cluster Context
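
For example, assuming the context follows the usual <cluster-name>-admin@<cluster-name> naming convention (take the exact name from the previous command's output):

# kubectl config use-context mj-tkgm01-admin@mj-tkgm01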

5: Get Node Details
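
With the management cluster context active, kubectl queries the nodes directly:

# kubectl get nodes -o wide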

6: Get all namespaces
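
And the namespaces:

# kubectl get namespaces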

And that completes the management cluster deployment. We are now ready to deploy a TKG workload cluster, followed by deploying applications. Stay tuned for part 3 of this series!

I hope you enjoyed reading the post. Feel free to share this on social media if it is worth sharing 🙂 
