Deploying HA Proxy on vSphere 7 with Tanzu Kubernetes Grid Service (TKGS)

  • Tanzu Kubernetes Grid Service (TKGS)
    Deploy and operate Tanzu Kubernetes clusters natively in vSphere, with HA Proxy as the load balancer and without VMware Harbor as a container registry.

    - Deploying and configuring HA Proxy (this post)
    - Deploying workloads via the supervisor cluster
    - Creating namespaces and initial cluster configuration
  • VMware Tanzu Kubernetes Grid Integrated Edition (TKGI)
    Fully featured Tanzu deployment with NSX-T.

    - Deploying and configuring NSX-T
    - Deploying workloads via the supervisor cluster
    - Creating namespaces and initial cluster configuration

What is VMware’s Tanzu Kubernetes Grid Service (TKGS)?

Tanzu Kubernetes Grid Service, known as TKGS, lets you create and operate Tanzu Kubernetes clusters natively in vSphere, without having to use the CLI to stand up and manage supervisor clusters as we had to with Tanzu Kubernetes Grid (TKG). The big benefit of TKGS over TKG is not only the support and automated management of the supervisor clusters via the vSphere interface, but also the automated provisioning of load balancers.

  • NSX-T is not required. The vSphere Distributed Switch (VDS) can be used instead, avoiding NSX-T licensing.
  • Open-source HA Proxy is used for provisioning load balancers.
  • The Antrea CNI handles pod-to-pod traffic (the Calico CNI is also available).
  • No Harbor image registry (it has a dependency on the PodVM service).

Deploying TKGS

This post forms part one of a three-part series looking at deploying and setting up TKGS.

Configure networking (VDS)

First things first: understanding the network topology for TKGS. There are three main networks you’ll need to define.

  • Management network
    This is the network the HA Proxy appliance’s management interface sits on (10.64.2.0/23 in this example).
  • Workload network
    This network is where your Tanzu Kubernetes cluster VMs will reside.
  • Frontend network
    This is the network where your load balancer virtual IPs will be placed.
  • Workload network (VLAN 27)
    HA Proxy IP: 10.64.8.10
    Gateway: 10.64.8.1
    Network: 10.64.8.0/21
    Subnet mask: 255.255.248.0 (2,046 usable IPs)
    Workload IP range: 10.64.8.50–10.64.15.150
  • Frontend network (VLAN 28)
    HA Proxy IP: 10.64.0.10
    Gateway: 10.64.0.1
    Network: 10.64.0.0/23
    Subnet mask: 255.255.254.0 (510 usable IPs)
    Load balancer range: 10.64.1.1–10.64.1.254
Configuring our Tanzu distributed port groups
Checking our distributed port group up-links.
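
The screenshots above walk through creating the port groups in the vSphere client. If you prefer to script it, a rough CLI equivalent using govc is sketched below. This assumes govc is installed and pointed at your vCenter; the vCenter address, credentials, distributed switch name (DSwitch), and port group names are placeholders, so substitute your own.

# Point govc at vCenter (URL and credentials here are placeholders)
export GOVC_URL='https://vcenter.test.corp'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='********'

# Workload port group on VLAN 27
govc dvs.portgroup.add -dvs DSwitch -type earlyBinding -vlan 27 tkgs-workload

# Frontend port group on VLAN 28
govc dvs.portgroup.add -dvs DSwitch -type earlyBinding -vlan 28 tkgs-frontend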

Configure Tanzu Storage Policy

Just like with our distributed port groups, it’s worthwhile configuring an independent storage policy. This includes tagging any vSAN datastores you want the Tanzu cluster to use.

Clone the default storage policy
Enable tag based placement rules
Add the TKG placement tag
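
The tagging step can also be done from the CLI with govc. This is only a sketch; the category, tag, and datastore names below (tkgs, tkgs-storage, vsanDatastore) are assumptions, so substitute your own. The policy itself is still cloned and given its tag-based placement rule in the vSphere client, as shown above.

# Create a tag category and a placement tag, then attach the tag to the vSAN datastore
govc tags.category.create -d "Tanzu storage placement" tkgs
govc tags.create -c tkgs -d "TKG placement tag" tkgs-storage
govc tags.attach tkgs-storage /Datacenter/datastore/vsanDatastore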

Configure content library

The last step before we deploy our HA Proxy is to create our content library. The content library allows Tanzu to fetch the OVA images it needs in order to create the supervisor cluster and the subsequent Tanzu Kubernetes clusters.

Subscribe the content library to the following URL:
https://wp-content.vmware.com/v2/latest/lib.json
Create the Tanzu content library.
The HA Proxy appliance OVA can be found via the vmware-haproxy repository:
https://github.com/haproxytech/vmware-haproxy
https://cdn.haproxy.com/download/haproxy/vsphere/ova/haproxy-v0.1.10.ova
Importing the Tanzu HA Proxy OVA
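
If you’d rather script this part, govc can create both libraries. A minimal sketch, assuming the OVA has been downloaded locally; the datastore and library names are placeholders:

# Subscribed library that Tanzu pulls its node images from
govc library.create -sub https://wp-content.vmware.com/v2/latest/lib.json -ds vsanDatastore tkgs-content-library

# Local library to hold the HA Proxy appliance OVA
govc library.create -ds vsanDatastore haproxy-library
govc library.import haproxy-library ./haproxy-v0.1.10.ova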

Deploying HA Proxy

Now on to the “fun” part: standing up HA Proxy. From the HA Proxy OVA you imported into the content library, right-click it and select New VM from this template…

Naming your new HA Proxy
  • Frontend Network
    This deployment option splits the load-balanced services out onto their own network, away from the workload network.
Tanzu Frontend Network
Applying the Network configuration settings for HA Proxy
Define the root password for HA Proxy
  • DNS (10.64.2.1)
    This is either your management network’s DNS server, a publicly accessible DNS server, or one that is reachable across the VLANs and subnets on your private network. In this example we’re using our management network’s DNS server, which also happens to be our gateway server.
  • Management IP (10.64.2.10/23)
    This is the management IP address for the HA Proxy appliance. It must include the network prefix (CIDR notation).
  • Management Gateway (10.64.2.1)
    This is the gateway for the management network.
Configuring the HA Proxy management network
  • Workload Gateway (10.64.8.1)
    This is the IP address of the gateway on the workload network. This gateway must be routable to the other HA Proxy networks.
  • Frontend IP (10.64.0.10/23)
    This is the frontend IP address for the HA Proxy server residing on the Frontend network. This IP address must be outside the intended range for the load balancers within Kubernetes.
  • Frontend Gateway (10.64.0.1)
    This is the IP address of the gateway on the frontend network. This gateway must be routable to the other HA Proxy networks.
Defining the HA Proxy networks
Defining the load balancer address space
Setting the data-plane password.
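
As an alternative to clicking through the wizard, the same deployment can be scripted with govc. This is a sketch, assuming the OVA file is local; the generated spec file is where the network, IP, and password values shown above would be filled in:

# Dump the OVA's deployment options (networks and OVF properties) to a JSON spec
govc import.spec haproxy-v0.1.10.ova > haproxy-options.json

# Edit haproxy-options.json to set the management/workload/frontend networks,
# IPs, gateways, DNS, load balancer range and passwords, then deploy
govc import.ova -options haproxy-options.json -name haproxy01 haproxy-v0.1.10.ova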
Once the appliance is deployed and powered on, the HA Proxy data plane API should respond at:
https://10.64.2.10:5556/v2/info
JSON output for the HA Proxy service
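
To check this from the command line rather than a browser, something like the following works; the API uses HTTP basic auth, and the user and password placeholders below should be whichever API credentials you set during deployment:

# -k skips TLS verification, as the appliance ships with a self-signed certificate
curl -k -u '<api-user>:<api-password>' https://10.64.2.10:5556/v2/info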

Checking HA Proxy

You can log in to your HA Proxy server via SSH, using the root password you set earlier, and check that the management, workload, and frontend interfaces have come up with the expected addresses.

ssh root@10.64.2.10
ip a
root@haproxy01 [ ~ ]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: management: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:8f:ce:26 brd ff:ff:ff:ff:ff:ff
inet 10.64.2.10/23 brd 10.64.3.255 scope global management
valid_lft forever preferred_lft forever
3: workload: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:8f:e5:ba brd ff:ff:ff:ff:ff:ff
inet 10.64.8.10/21 brd 10.64.15.255 scope global workload
valid_lft forever preferred_lft forever
4: frontend: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:8f:3d:28 brd ff:ff:ff:ff:ff:ff
inet 10.64.0.10/23 brd 10.64.1.255 scope global frontend
valid_lft forever preferred_lft forever
You can also check whether any of the appliance’s services have failed. In this case the anyip-routes service is worth inspecting, as it is responsible for binding the load balancer’s virtual IP range:

systemctl list-units --state failed
systemctl status anyip-routes.service
Nov 20 15:30:07 haproxy01.test.corp anyiproutectl.sh[777]: adding route for 10.64.0.1/23
Nov 20 15:30:07 haproxy01.test.corp anyiproutectl.sh[777]: RTNETLINK answers: Invalid argument

If you see an RTNETLINK error like the one above, the load balancer range was most likely entered incorrectly during deployment (10.64.0.1/23 is not a valid network address). The configured range lives in /etc/vmware/anyip-routes.cfg:
root@haproxy01 [ ~ ]# cat /etc/vmware/anyip-routes.cfg 
#
# Configuration file that contains a line-delimited list of CIDR values
# that define the network ranges used to bind the load balancer's frontends
# to virtual IP addresses.
#
# * Lines beginning with a comment character, #, are ignored
# * This file is used by the anyip-routes service
#
10.64.1.0/24
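
If the range does need correcting, edit the file and restart the service; the commands below are a minimal sketch using the file path and service name shown above:

# Fix the CIDR so it matches the intended load balancer range (10.64.1.0/24 here)
vi /etc/vmware/anyip-routes.cfg
# Restart the service and confirm the route is now added cleanly
systemctl restart anyip-routes.service
journalctl -u anyip-routes.service --no-pager | tail -n 5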

It’s all over!

Hopefully that’s given you a quick dive into standing up TKGS on vSphere. In our next post we’ll be looking at running the supervisor cluster, which will manage and provision our Kubernetes clusters. Please feel free to get in touch if you have any questions.
