Deploying HA Proxy on vSphere 7 with Tanzu Kubernetes Grid Service (TKGS)

  • Tanzu Kubernetes Grid (TKG)
    Deploy Kubernetes via TKG without the need for a licensed Tanzu supervisor cluster. This option does not provide a load balancer.
  • Tanzu Kubernetes Grid Service (TKGS)
    Deploy and operate Tanzu Kubernetes clusters natively in vSphere, with HA Proxy as the load balancer but without VMware Harbor as a container registry.

    - Deploying and configuring HA Proxy (this post)
    - Deploying workloads via the supervisor cluster
    - Creating namespaces and initial cluster configuration
  • VMware Tanzu Kubernetes Grid Integrated Edition (TKGI)
    Fully featured Tanzu deployment with NSX-T.

    - Deploying and configuring NSX-T
    - Deploying workloads via the supervisor cluster
    - Creating namespaces and initial cluster configuration

What is VMware's Tanzu Kubernetes Grid Service (TKGS)?

  • VMware Cloud Foundation (VCF) is not a requirement.
  • NSX-T is not required. The vSphere Distributed Switch (VDS) can be used instead, avoiding NSX-T licensing.
  • Open-source HA Proxy is used for provisioning load balancers.
  • Antrea CNI for TKG pod-to-pod traffic (Calico CNI is also available).
  • No PodVM support if NSX-T is not used.
  • No Harbor image registry (it depends on PodVM).

Deploying TKGS

Configure networking (VDS)

  • Management network
    This is your administrative network, where you'll access the HA Proxy over SSH to administer the machine. It is also where the HA Proxy's configuration service (the data-plane API) listens on port 5556, which the TKGS supervisor cluster uses to configure the HA Proxy.
  • Workload network
    This network is where your Kubernetes cluster VMs will reside.
  • Frontend network
    This is the network where your load balancer addresses will be placed.
  • Management network (VLAN 26)
    Management IP: 10.64.2.10
    Management gateway: 10.64.2.1
    Network: 10.64.2.0/23
    Subnet mask: 255.255.254.0 (510 usable IPs)
  • Workload network (VLAN 27)
    Workload IP: 10.64.8.10
    Workload gateway: 10.64.8.1
    Network: 10.64.8.0/21
    Subnet mask: 255.255.248.0 (2046 usable IPs)
    Workload address range: 10.64.8.50–10.64.15.150
  • Frontend network (VLAN 28)
    Frontend IP: 10.64.0.10
    Frontend gateway: 10.64.0.1
    Network: 10.64.0.0/23
    Subnet mask: 255.255.254.0 (510 usable IPs)
    Load balancer address range: 10.64.1.1–10.64.1.254
Configuring our Tanzu distributed port groups
Checking our distributed port group uplinks.
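If you'd rather script this step than click through the vSphere UI, the three VLAN-backed port groups can also be created with govc. A minimal sketch, assuming a distributed switch named Tanzu-DSwitch and port group names of my own choosing; substitute your environment's:

# Point govc at vCenter first (hypothetical address and credentials).
export GOVC_URL='https://vcenter.test.corp'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='...'

# One VLAN-backed distributed port group per Tanzu network.
govc dvs.portgroup.add -dvs Tanzu-DSwitch -type earlyBinding -vlan 26 tanzu-management
govc dvs.portgroup.add -dvs Tanzu-DSwitch -type earlyBinding -vlan 27 tanzu-workload
govc dvs.portgroup.add -dvs Tanzu-DSwitch -type earlyBinding -vlan 28 tanzu-frontend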

Configure Tanzu Storage Policy

Clone the default storage policy
Enable tag-based placement rules
Add the TKG placement tag
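The same tags can be created and attached from the CLI. A quick govc sketch, assuming a category named tkg and a tag named tkg-storage (both names are made up) attached to a datastore called datastore1:

# Create a tag category and the TKG placement tag within it.
govc tags.category.create -d "Tanzu storage placement" tkg
govc tags.create -c tkg -d "TKG placement tag" tkg-storage

# Attach the tag to the datastore the cloned storage policy should match.
govc tags.attach tkg-storage /Datacenter/datastore/datastore1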

Configure content library

https://wp-content.vmware.com/v2/latest/lib.json
Create the Tanzu content library.
https://github.com/haproxytech/vmware-haproxy
https://cdn.haproxy.com/download/haproxy/vsphere/ova/haproxy-v0.1.10.ova
Importing the Tanzu HA Proxy OVA
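Both libraries can also be created without the UI. A hedged govc sketch using the subscription and OVA URLs above; the library names and datastore are my own:

# Subscribed content library that syncs the Tanzu Kubernetes releases.
govc library.create -sub https://wp-content.vmware.com/v2/latest/lib.json -ds datastore1 tkg-content

# Local library for the HA Proxy appliance, then import the OVA into it.
curl -LO https://cdn.haproxy.com/download/haproxy/vsphere/ova/haproxy-v0.1.10.ova
govc library.create -ds datastore1 haproxy
govc library.import haproxy haproxy-v0.1.10.ova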

Deploying HA Proxy

Naming your new HA Proxy
  • Default
    This still lets you provision load-balanced services within Kubernetes; it just means the load balancer IP addresses will be provisioned on the same network as the workloads.
  • Frontend Network
    This splits the load-balanced services out onto their own network, away from the workload network.
Tanzu Frontend Network
Applying the Network configuration settings for HA Proxy
Define the root password for HA Proxy
  • Host name (haproxy01.test.corp)
    This needs to be a fully qualified domain name for the HA Proxy, including your network's domain name.
  • DNS (10.64.2.1)
    This is either your management network's DNS server, a publicly accessible DNS server, or one that is reachable across the VLANs and subnets of your private network. In this example we're using our management network's DNS server, which also happens to be our gateway server.
  • Management IP (10.64.2.10/23)
    This is the Management IP address. This must include the network CIDR.
  • Management Gateway (10.64.2.1)
    This is the gateway for the management network.
Configuring the HA Proxy management network
  • Workload IP (10.64.8.10/21)
    This is the workload IP address for the HA Proxy server residing on the workload network. This IP address must sit outside the range reserved for provisioning cluster nodes (10.64.8.50–10.64.15.150 above).
  • Workload Gateway (10.64.8.1)
    This is the IP address of the gateway on the workload network. This gateway must be routable to the other HA Proxy networks.
  • Frontend IP (10.64.0.10/23)
    This is the frontend IP address for the HA Proxy server residing on the Frontend network. This IP address must be outside the intended range for the load balancers within Kubernetes.
  • Frontend Gateway (10.64.0.1)
    This is the IP address of the gateway on the frontend network. This gateway must be routable to the other HA Proxy networks.
Defining the HA Proxy networks
Defining the load balancer address space
Setting the data-plane password.
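All of the wizard's answers can also be supplied non-interactively. A rough govc sketch; the real property keys live in the generated JSON, so inspect that file rather than trusting any names here:

# Dump the OVA's deployment options and properties to an editable spec.
govc import.spec haproxy-v0.1.10.ova > haproxy-spec.json

# Fill in the deployment option (Frontend Network), the passwords and the
# IP/gateway/DNS values above in haproxy-spec.json, then deploy with them.
govc import.ova -options=haproxy-spec.json -name=haproxy01 haproxy-v0.1.10.ova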
https://10.64.2.10:5556/v2/info
JSON output for the HA Proxy service
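A quick way to confirm the appliance is up is to query the data-plane API on the management address yourself. A sketch with curl; -k is needed because the appliance ships a self-signed certificate, and admin stands in for whichever data-plane user and password you set during deployment:

# Query the HA Proxy data-plane API's info endpoint.
curl -k -u admin:'your-password' https://10.64.2.10:5556/v2/info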

Checking HA Proxy

ssh root@10.64.2.10
ip a
root@haproxy01 [ ~ ]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: management: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:8f:ce:26 brd ff:ff:ff:ff:ff:ff
    inet 10.64.2.10/23 brd 10.64.3.255 scope global management
       valid_lft forever preferred_lft forever
3: workload: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:8f:e5:ba brd ff:ff:ff:ff:ff:ff
    inet 10.64.8.10/21 brd 10.64.15.255 scope global workload
       valid_lft forever preferred_lft forever
4: frontend: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:8f:3d:28 brd ff:ff:ff:ff:ff:ff
    inet 10.64.0.10/23 brd 10.64.1.255 scope global frontend
       valid_lft forever preferred_lft forever
systemctl list-units --state failed
systemctl status anyip-routes.service

If the anyip-routes.service unit is among the failed units, its journal shows why. In our case the load balancer CIDR had been entered incorrectly:

Nov 20 15:30:07 haproxy01.test.corp anyiproutectl.sh[777]: adding route for 10.64.0.1/23
Nov 20 15:30:07 haproxy01.test.corp anyiproutectl.sh[777]: RTNETLINK answers: Invalid argument

10.64.0.1/23 is not a valid network address (the /23 network base would be 10.64.0.0), so the kernel rejects the route. The fix is to correct the CIDR in /etc/vmware/anyip-routes.cfg so that it matches the load balancer range:
root@haproxy01 [ ~ ]# cat /etc/vmware/anyip-routes.cfg 
#
# Configuration file that contains a line-delimited list of CIDR values
# that define the network ranges used to bind the load balancer's frontends
# to virtual IP addresses.
#
# * Lines beginning with a comment character, #, are ignored
# * This file is used by the anyip-routes service
#
10.64.1.0/24
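After correcting the file, restart the unit and confirm the range is now bound. A short sketch, assuming anyip-routes installs the range as local routes, which is how any-IP binding is normally done:

# Restart the service after fixing the config, then re-check its state.
systemctl restart anyip-routes.service
systemctl status anyip-routes.service

# The load balancer range should now appear in the local routing table.
ip route show table local | grep 10.64.1.0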

It's all over!
