Private Kubernetes clusters in VMware ESXi environments often require reliable and highly available network gateways to maintain uninterrupted access for internal and external services. Network Address Translation (NAT) gateways built with HAProxy and Keepalived, coupled with a Virtual IP (VIP), provide a resilient solution that automatically redirects traffic in case of node failure. This setup minimizes downtime and maintains consistent connectivity for your Kubernetes workloads.
Configuring Highly Available HAProxy and Keepalived NAT Gateways with VIP
Step 1: Prepare Virtual Machines for Gateway Nodes.
Deploy at least two Linux-based virtual machines (VMs) on your VMware ESXi host. These VMs will serve as your NAT gateways. Assign each VM a static IP address within your management or infrastructure network. Ensure both VMs can reach the Kubernetes cluster nodes and have network access to the outside world for NAT functionality.
Step 2: Install Required Packages.
On each gateway VM, update the package index and install HAProxy and Keepalived. For Ubuntu or Debian systems, use:
sudo apt update
sudo apt install haproxy keepalived
For CentOS or RHEL, use:
sudo yum install haproxy keepalived
Step 3: Configure HAProxy for NAT Traffic Forwarding.
Edit the HAProxy configuration file (usually /etc/haproxy/haproxy.cfg) to define frontend and backend sections that forward traffic to your Kubernetes API servers or load balancer. For example:
frontend kubernetes_api
    bind *:6443
    mode tcp
    default_backend kubernetes_masters

backend kubernetes_masters
    mode tcp
    balance roundrobin
    server master1 10.0.0.11:6443 check
    server master2 10.0.0.12:6443 check
This configuration listens on TCP port 6443 and load-balances traffic across the Kubernetes master nodes. Note the mode tcp directive: the API server speaks TLS, so HAProxy must forward it as raw TCP rather than the default HTTP mode, which cannot parse encrypted traffic.
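Before reloading HAProxy, it is worth validating the file syntax so a typo does not take the gateway down. A minimal sketch, assuming the distribution-default config path:

```shell
# Validate the HAProxy configuration before (re)loading the service.
# Assumes the default config path; adjust CFG if yours differs.
CFG=/etc/haproxy/haproxy.cfg
if command -v haproxy >/dev/null 2>&1; then
  # -c checks the configuration and exits without starting a proxy
  haproxy -c -f "$CFG" || echo "configuration check failed - fix before reloading"
else
  echo "haproxy binary not found - install it first"
fi
```

Run this after every edit; only reload the service once the check passes.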
Step 4: Set Up Keepalived with Virtual IP (VIP).
Configure Keepalived to manage a floating VIP between your gateway VMs. Edit /etc/keepalived/keepalived.conf on both VMs. Example configuration for the primary node:
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass YourSecret
    }
    virtual_ipaddress {
        10.0.0.100
    }
}
On the secondary node, set state BACKUP and priority 100. The VIP (10.0.0.100 in this example) will float between the nodes, ensuring continuous availability.
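For reference, the full backup-node configuration mirrors the primary, with only the state and priority changed (all other values below simply repeat the primary-node example above):

```
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass YourSecret
    }
    virtual_ipaddress {
        10.0.0.100
    }
}
```

The virtual_router_id, authentication block, and VIP must match on both nodes; only the priority determines which node wins the VRRP election.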
Step 5: Enable IP Forwarding and Configure NAT.
Enable IP forwarding to allow the gateway to route traffic:
sudo sysctl -w net.ipv4.ip_forward=1
Persist this change by adding net.ipv4.ip_forward=1 to /etc/sysctl.conf.
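To confirm the setting actually took effect (for example after a reboot), you can read the live kernel value directly. A small sketch:

```shell
# Read the live IP-forwarding setting from the kernel.
# A value of 1 means the gateway will route packets between interfaces.
FWD=$(cat /proc/sys/net/ipv4/ip_forward)
if [ "$FWD" = "1" ]; then
  echo "IP forwarding is enabled"
else
  echo "IP forwarding is disabled - run: sudo sysctl -w net.ipv4.ip_forward=1"
fi
```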
Set up NAT rules using iptables to masquerade outbound traffic from the Kubernetes cluster:
sudo iptables -t nat -A POSTROUTING -s 10.244.0.0/16 -o eth0 -j MASQUERADE
Replace 10.244.0.0/16 with your cluster's pod network CIDR and eth0 with the appropriate network interface.
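As a sketch, the rule can be parameterized in a small shell snippet so you can review the exact command before applying it; the CIDR and interface below are the placeholder values from the example above, and the persistence hint assumes the iptables-persistent package on Debian/Ubuntu:

```shell
# Build the MASQUERADE rule from variables and print it for review
# before applying it with sudo. Both values are placeholders.
POD_CIDR="10.244.0.0/16"   # replace with your cluster's pod network CIDR
OUT_IF="eth0"              # replace with your outbound network interface
RULE="iptables -t nat -A POSTROUTING -s ${POD_CIDR} -o ${OUT_IF} -j MASQUERADE"
echo "Would apply: ${RULE}"
# Apply:   sudo ${RULE}
# Persist (Debian/Ubuntu): sudo apt install iptables-persistent
#                          sudo netfilter-persistent save
```

Remember that iptables rules do not survive a reboot on their own, so persist them on both gateway nodes.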
Step 6: Start and Enable Services.
Activate and enable HAProxy and Keepalived to ensure they start on boot:
sudo systemctl restart haproxy keepalived
sudo systemctl enable haproxy keepalived
Step 7: Test High Availability and Failover.
Verify that the VIP is assigned to the primary node by running ip a. Simulate a failure by stopping Keepalived on the primary node (sudo systemctl stop keepalived) and ensure the VIP moves to the backup node. Test connectivity to the Kubernetes API through the VIP to confirm seamless failover.
Alternative: Using a Dedicated Hardware Load Balancer
While HAProxy and Keepalived provide a flexible, software-based high availability solution, some environments may opt for a dedicated hardware load balancer. Hardware appliances can offer higher throughput and advanced health monitoring, but they require additional investment and may introduce vendor lock-in. For most private Kubernetes clusters on VMware ESXi, the HAProxy and Keepalived approach remains the most effective due to its adaptability and ease of management.
With this setup, your private Kubernetes cluster gains reliable, uninterrupted access through a resilient NAT gateway. Regularly check HAProxy and Keepalived logs to quickly identify and address any network issues.