Deploying Workloads on vSphere 7 with Tanzu Kubernetes Grid Service (TKGS)

  • Tanzu Kubernetes Grid (TKG)
    Deploy Kubernetes via Tanzu (TKG) without needing a licensed Tanzu supervisor cluster. This option does not provide a load balancer.
  • Tanzu Kubernetes Grid Service (TKGS)
    Deploy and operate Tanzu Kubernetes clusters natively in vSphere, with HA Proxy as the load balancer but without VMware Harbor as a container registry.

    - Deploying and configuring HA Proxy
    - Deploying workloads via the supervisor cluster (this post)
    - Creating namespaces and initial cluster configuration
  • VMware Tanzu Kubernetes Grid Integrated Edition (TKGI)
    Fully featured Tanzu deployment with NSX-T.

    - Deploying and configuring NSX-T
    - Deploying workloads via the supervisor cluster
    - Creating namespaces and initial cluster configuration


Deploying TKGS Workloads

vSphere Workload Management setup

Installing TKGS on vCenter Server Network (Supports Tanzu Kubernetes clusters)
Pre-flight Tanzu checks
Supervisor cluster sizing options
  • Management network (VLAN 26)
    Management IP:
  • Frontend network (VLAN 28)
    Load balancer address:
  • Name
    This is the DNS name of our HA Proxy. This will be the first part of the host name you entered during the HA Proxy setup.
  • Type
    Only HA Proxy is available at the moment — so HA Proxy it is.
  • Data plane API address(es)
    This is the management IP address you provided, including the data plane management API port, which defaults to 5556.
  • Username
    During the last stage of the HA Proxy setup you provided a username and password. This username will be whatever you entered in the HA Proxy User ID field.
  • Password
    This will be the password that accompanies the username above.
  • IP Address ranges for virtual servers
    These are the IP addresses you provided as your load balancer IP ranges, which you'll have entered as a subnet during the HA Proxy setup. This field requires them to be provided as a range instead. It's very important that this address range does not overlap with any other services or the HA Proxy server itself.
  • Server certificate authority
    This is the certificate authority you provided at the beginning of the HA Proxy setup. If you elected to have one generated automatically for you, then you'll need to grab it from the server. You can either download the certificate by visiting the management server in Firefox and saving the PEM, or fetch it from the HA Proxy server like so:
scp root@ ca.crt && cat ca.crt
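As noted above, the wizard wants the virtual-server IPs as a range even though the HA Proxy setup took a subnet. A minimal sketch of the conversion using Python's standard ipaddress module (the subnet shown is a placeholder, not the one from our setup):

```python
import ipaddress

def cidr_to_range(cidr: str) -> str:
    """Convert a CIDR subnet into the 'start-end' range format the wizard expects."""
    net = ipaddress.ip_network(cidr, strict=False)
    hosts = list(net.hosts())  # usable addresses, excluding network/broadcast
    return f"{hosts[0]}-{hosts[-1]}"

# Placeholder frontend subnet -- substitute your own load balancer subnet.
print(cidr_to_range("192.168.28.64/26"))  # -> 192.168.28.65-192.168.28.126
```

Double-check that the resulting range steers clear of the HA Proxy server's own frontend address.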
Configuring HA proxy within the workload manager
  • Network
    This is the management virtual distributed switch we created on VLAN 26.
  • Starting IP Address
    Please ensure DHCP is not running on the management network, or that the IP addresses have been reserved. The starting IP address is the fixed address from which the supervisor VM IPs will be assigned; as there are three supervisor VMs by default, a consecutive block of addresses will be allocated.
  • Subnet Mask
    Your management network's subnet mask.
  • Gateway server
    Your management network's gateway.
  • DNS server
    Your management network's DNS server, or your site-wide DNS server.
  • DNS Search domains
    Your DNS domain. This says it's optional, but I would provide one, as we've found it improves the setup process and mitigates a few issues (see troubleshooting).
  • NTP server
    Try to provide a local NTP server, or at least use the same NTP servers your ESXi hosts are configured with.
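Because the supervisor VMs claim a consecutive block of management IPs starting from the address above, it's worth listing exactly which addresses to reserve or exclude from DHCP. A sketch, assuming a block of five addresses (the three control plane VMs plus extras; verify the exact count for your release in the wizard's help text):

```python
import ipaddress

def supervisor_block(start_ip: str, count: int = 5) -> list[str]:
    """Return the consecutive management IPs allocated from the
    'Starting IP Address' field. count=5 is an assumption -- adjust
    to match your vSphere release."""
    start = ipaddress.ip_address(start_ip)
    return [str(start + i) for i in range(count)]

# Placeholder starting address on the VLAN 26 management network.
print(supervisor_block("192.168.26.50"))
```

Every address this prints should be outside your DHCP scope or explicitly reserved.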
Configuring the management network
Workload configuration screen for Tanzu TKGS
  • Name
    You can give it any name you like; however, it must be alphanumeric.
  • Port group (V27 Tanzu workload)
    Select our Tanzu workload port group. The same one HA Proxy uses.
  • Gateway
    Your Tanzu workload gateway IP address.
  • Subnet
    Your Tanzu workload subnet (2046 IPs available).
  • IP Address ranges
    These will be the entire address range you want to be used for VMs being provisioned by the supervisor cluster. These will form your Kubernetes worker nodes.
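As with the frontend network, the workload address range must not collide with the load balancer's virtual-server range or the HA Proxy itself. A small sanity check for two inclusive IP ranges (the example addresses are placeholders):

```python
import ipaddress

def ranges_overlap(a_start: str, a_end: str, b_start: str, b_end: str) -> bool:
    """True if the two inclusive IP ranges share any address."""
    a0, a1 = ipaddress.ip_address(a_start), ipaddress.ip_address(a_end)
    b0, b1 = ipaddress.ip_address(b_start), ipaddress.ip_address(b_end)
    return a0 <= b1 and b0 <= a1

# Placeholder ranges: workload pool (VLAN 27) vs. frontend pool (VLAN 28).
print(ranges_overlap("192.168.27.100", "192.168.27.199",
                     "192.168.28.65", "192.168.28.126"))  # -> False
```

Running this against each pair of pools before committing the wizard saves a painful teardown later.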
Defining our workload cluster
Tanzu content library
Tanzu setup finalization

Tanzu supervisor cluster installation

Tanzu errors during setup
Resource Type Deployment, Identifier vmware-system-netop/vmware-system-netop-controller-manager is not found.
No resources of type Pod exist for cluster domain-c8
Kubernetes cluster health endpoint problem at <IP unassigned>. Details: Waiting for API Master IP assignment.
Our working Tanzu cluster.

Connecting to our new supervisor cluster

Kubernetes CLI tools landing page


Tanzu configuration settings overview
Kubernetes events

Connecting to the supervisor cluster

$ kubectl vsphere login --server= --insecure-skip-tls-verify

Username: administrator@test.corp
Logged in successfully.

You have access to the following contexts:

If the context you wish to use is not in this list, you may need to try logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`
$ kubectl config use-context
$ kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
420f1f569680d..   Ready    master   5d17h   v1.18.2-6+38ac483e736488
420fdbfa2080b..   Ready    master   5d17h   v1.18.2-6+38ac483e736488
420fe058fd6bd..   Ready    master   5d17h   v1.18.2-6+38ac483e736488
$ kubectl api-resources --namespaced=false

It's all over!




Real-time security and compliance delivered.
