Deploying Workloads on vSphere 7 with Tanzu Kubernetes Grid Service (TKGS)

  • Tanzu Kubernetes Grid Service (TKGS)
    Deploy and operate Tanzu Kubernetes clusters natively in vSphere, using HA Proxy as the load balancer and without VMware Harbor as a container registry.

    - Deploying and configuring HA Proxy
    - Deploying workloads via the supervisor cluster (this post)
    - Creating namespaces and initial cluster configuration
  • VMware Tanzu Kubernetes Grid Integrated Edition (TKGI)
    Fully featured Tanzu deployment with NSX-T.

    - Deploying and configuring NSX-T
    - Deploying workloads via the supervisor cluster
    - Creating namespaces and initial cluster configuration

Prerequisites

If you haven’t already, please read through our first post on TKGS, as it covers what TKGS is and the configuration we’ll be using for this deployment.

Deploying TKGS Workloads

Now that we’ve stood up, configured, and tested our HA Proxy, the next stage is to deploy our supervisor cluster, which handles the orchestration, deployment, and management of subsequent TKGS clusters.

vSphere Workload Management setup

Head on over to your vSphere dashboard. Under Shortcuts you’ll see Workload Management. When you click on this link you’ll be presented with a few options. If you see the example below, your organization is already licensed for Tanzu supervisor clusters. This is a new licensing model, separate from the combined ESXi + Kubernetes model, and you will require one of these new licenses.

Installing TKGS on vCenter Server Network (Supports Tanzu Kubernetes clusters)
Pre-flight Tanzu checks
Supervisor cluster sizing options
  • Frontend network (VLAN 28)
    Load balancer address: 10.64.1.1–10.64.1.254 (10.64.1.0/24)
  • Type
    Only HA Proxy is available at the moment — so HA Proxy it is.
  • Data plane API address(es)
    This is the management IP address you provided during the HA Proxy setup, together with the data plane management API port, which defaults to 5556 (in our case 10.64.2.10:5556).
  • Username
    During the last stage of the HA Proxy setup you provided a username and password. This username will be whatever you entered in the HA Proxy User ID field.
  • Password
    The password that accompanies the username above.
  • IP Address ranges for virtual servers
    These are the IP addresses you provided as your load balancer IP ranges. You’ll have provided them as a subnet, but this field requires a range. In our HA Proxy setup we provided 10.64.1.0/24, so here we’re going to enter: 10.64.1.1–10.64.1.254. It’s very important that this address range does not overlap with any other services or with the HA Proxy server itself.
  • Server certificate authority
    This is the certificate authority you provided at the beginning of the HA Proxy setup. If you elected to have one generated automatically for you, then you’ll need to grab it from the server. You can either download the certificate by visiting the management server and saving the PEM via Firefox, or fetch it from the HA Proxy server like so:
scp root@10.64.2.10:/etc/haproxy/ca.crt ca.crt && cat ca.crt
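Before pasting it into the form, it’s worth sanity-checking that what you grabbed is a valid certificate. A quick check, assuming openssl is installed on your workstation:

openssl x509 -in ca.crt -noout -subject -issuer -dates

If this prints the subject, issuer, and validity dates without complaint, you have the right PEM.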
Configuring HA Proxy within the workload manager
  • Starting IP Address
    Please ensure DHCP is not running on the management network, or that the IP addresses have been reserved (see the quick check after this list). The starting IP address is the first of a block of fixed addresses from which the supervisor VMs will be assigned. As there are three VMs by default, 10.64.2.50–10.64.2.52 will be allocated.
  • Subnet Mask
    Your management network’s subnet mask (255.255.254.0).
  • Gateway server
    Your management network’s gateway.
  • DNS server
    Your management network’s DNS server, or your site-wide DNS server.
  • DNS Search domains
    Your DNS domain. This says it’s optional, but I would provide one, as we’ve found it improves the setup process and mitigates a few issues (see troubleshooting).
  • NTP server
    Try to provide a local NTP server, or at least use the same NTP servers configured on your ESXi hosts.
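As mentioned above, it’s worth confirming the supervisor addresses really are free before continuing. A minimal sketch, assuming the default three VMs starting at 10.64.2.50:

for i in 50 51 52; do
  ping -c 1 -W 1 10.64.2.$i > /dev/null && echo "10.64.2.$i is in use" || echo "10.64.2.$i looks free"
done

A host that answers is definitely taken; silence isn’t a guarantee (firewalled hosts won’t respond), so cross-check your DHCP reservations too.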
Configuring the management network
Workload configuration screen for Tanzu TKGS
  • Port group (V27 Tanzu workload)
    Select your Tanzu workload port group, the same one HA Proxy uses.
  • Gateway (10.64.8.1)
    Your Tanzu workload gateway IP address.
  • Subnet (255.255.248.0)
    Your Tanzu workload subnet, 255.255.248.0 (2046 usable IPs).
  • IP Address ranges (10.64.8.50–10.64.15.150)
    This is the entire address range you want used for VMs provisioned by the supervisor cluster; these VMs will form your Kubernetes worker nodes. Make sure the range sits within the workload subnet above (see the sketch after this list).
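To double-check that your chosen range sits inside the workload subnet, and how many addresses it gives you, Python’s ipaddress module works well; the values below are our example network, so substitute your own:

python3 -c "import ipaddress as i; n = i.ip_network('10.64.8.0/21'); a, b = i.ip_address('10.64.8.50'), i.ip_address('10.64.15.150'); print(a in n and b in n, int(b) - int(a) + 1)"

For our range this prints True 1893: both endpoints fall within 10.64.8.0/21 and the range provides 1893 addresses for worker nodes.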
Defining our workload cluster
Tanzu content library
Tanzu setup finalization

Tanzu supervisor cluster installation

As the installation kicks off you will see a load of warnings and errors like the ones here. This is normal, and the setup process will retry the actions. These errors usually appear whilst the system waits for another component to complete its setup. So just be patient; the setup can take 15–30 minutes.

Tanzu errors during setup
Resource Type Deployment, Identifier vmware-system-netop/vmware-system-netop-controller-manager is not found.
No resources of type Pod exist for cluster domain-c8
Kubernetes cluster health endpoint problem at <IP unassigned>. Details: Waiting for API Master IP assignment.
Our working Tanzu cluster.

Connecting to our new supervisor cluster

Lastly, to check everything is in order, copy the IP address visible in the Overview pane and paste it into your browser. It will lead you to a page like so:

Kubernetes CLI tools landing page
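If you’d rather verify from a terminal, a quick reachability test against the same address works too (-k because the page is served with our self-signed CA; 10.64.32.1 is our control plane IP):

curl -k -o /dev/null -w '%{http_code}\n' https://10.64.32.1/

A 200 response means the landing page is being served.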

Troubleshooting

If you can’t visit this page, first check that HA Proxy has been configured. If you still can’t reach the page, check out our HA Proxy troubleshooting steps here. If that still leads to no success, you can double-check the configuration of your supervisor cluster from the Configuration page of your VMware cluster.
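A couple of checks from the HA Proxy side can also narrow things down. A sketch, assuming SSH access to the appliance; the systemd unit names and the admin username are assumptions here, so substitute the HA Proxy User ID you configured and adjust the unit names to your appliance version:

ssh root@10.64.2.10 'systemctl status haproxy dataplaneapi'
curl -k -u admin https://10.64.2.10:5556/v2/info

The first confirms the load balancer and its Data Plane API services are running; the second should return version information if the API is reachable with your credentials.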

Tanzu configuration settings overview
Kubernetes events

Connecting to the supervisor cluster

Once you’ve downloaded the toolset, which is available for Windows, Mac, and Linux, you can connect. On Linux I had to guess the URL manually:

wget https://10.64.32.1/wcp/plugin/linux-amd64/vsphere-plugin.zip
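If your workstation doesn’t trust the HA Proxy CA, wget may also need the --no-check-certificate flag. The zip should contain a bin directory holding the kubectl and kubectl-vsphere binaries; a minimal install sketch, assuming that layout:

unzip vsphere-plugin.zip
sudo install bin/kubectl bin/kubectl-vsphere /usr/local/bin/

With both binaries on your PATH, kubectl will find the vsphere plugin and you can log in: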
$ kubectl vsphere login --server=10.64.32.1 --insecure-skip-tls-verify

Username: administrator@test.corp
Password:
Logged in successfully.

You have access to the following contexts:
10.64.32.1

If the context you wish to use is not in this list, you may need to try logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`
$ kubectl config use-context 10.64.32.1
$ kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
420f1f569680d..   Ready    master   5d17h   v1.18.2-6+38ac483e736488
420fdbfa2080b..   Ready    master   5d17h   v1.18.2-6+38ac483e736488
420fe058fd6bd..   Ready    master   5d17h   v1.18.2-6+38ac483e736488

You can also list the cluster-scoped resources the supervisor cluster exposes:

$ kubectl api-resources --namespaced=false

It’s all over!

Hopefully that’s given you a quick dive into standing up TKGS on vSphere. In our next post we’ll be looking at creating our first namespace. Please feel free to get in touch if you have any questions.
