Modern deployments in general, and Kubernetes deployments in particular, often employ load balancers to distribute load, improve uptime, and increase security.
Our load balancers are highly available: they run in pairs on separate hosts. They can be managed via the API, Terraform, and Ansible, and they ship with Cloud Controller Manager (CCM) integration, which makes them easy to integrate into Kubernetes.
- API Documentation
- Ansible cloud collection
- Terraform provider
- Cloudscale Cloud Controller Manager
- K8test
- Blog Post: K8s Cloud Controller Manager for cloudscale
- Blog Post: Load Balancer "as a Service"
Load balancers consist of various components. Read the documentation to understand what these endpoints manage:
/v1/load-balancers
/v1/load-balancers/pools
/v1/load-balancers/pools/<pool-uuid>/members
/v1/load-balancers/listeners
/v1/load-balancers/health-monitors
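As a starting point for exploring these endpoints, the following sketch builds an authenticated GET request using only Python's standard library. The base URL and bearer-token header follow the API documentation, but treat the exact values as assumptions and verify them there.

```python
# Sketch: querying the load balancer endpoints via the REST API.
# Base URL and auth header are assumptions drawn from the API docs.
import json
import urllib.request

API_BASE = "https://api.cloudscale.ch"

def api_request(path: str, token: str) -> urllib.request.Request:
    """Build an authenticated request for a path such as /v1/load-balancers."""
    return urllib.request.Request(
        API_BASE + path,
        headers={"Authorization": f"Bearer {token}"},
    )

def list_load_balancers(token: str) -> list:
    """Fetch and decode the list of load balancers (requires network access)."""
    with urllib.request.urlopen(api_request("/v1/load-balancers", token)) as resp:
        return json.load(resp)
```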
The underlying technology is HAProxy, but any TCP proxy works as a mental model. How would you map the above components to the following functions of a TCP proxy deployment?
- Send probes to servers to see if they should still be contacted.
- Bind to a port on the public interface to receive traffic.
- Define a load balanced backend.
- Launch VMs and configure their outbound interface.
- Include a set of servers in the load balanced backend.
Create two servers, and have them return an identifier when queried via HTTP (e.g., by installing nginx and changing the default website).
The servers must be in a shared private subnet. It is okay to include a public interface to ease configuration. In a real scenario you might use a bastion host instead, and access the servers from there.
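One minimal way to give each server an identifier is to write its hostname into nginx's default index page. This is a sketch to run on each server; the web root path is an assumption (distributions differ), so adjust it to your installation.

```python
# Sketch: make each backend identify itself over HTTP by writing its
# hostname into nginx's default index page.
import socket
from pathlib import Path

WEB_ROOT = Path("/usr/share/nginx/html")  # assumption: distro default web root

def write_identifier(root: Path = WEB_ROOT) -> str:
    """Write this server's hostname to index.html and return it."""
    identifier = socket.gethostname()
    (root / "index.html").write_text(f"served by {identifier}\n")
    return identifier
```

Running this (as root) on both servers lets you tell the responses apart when querying them through the load balancer later.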
Using your favourite approach (Terraform, Ansible, REST), create a load balancer that includes these servers as pool members. When queried via HTTP, the load balancer should actually return answers from both servers (round-robin).
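If you go the REST route, the creation order is load balancer, then pool, then members, then listener. The payloads below are a sketch: field names and values are assumptions based on the API documentation, and the `<…-uuid>` strings are placeholders for UUIDs returned by earlier calls. Verify the exact schema in the API documentation before use.

```python
# Sketch: payloads for a round-robin pool with two members and an
# HTTP listener. Field names are assumptions; verify against the docs.

# POST to /v1/load-balancers/pools
pool = {
    "name": "web-pool",
    "load_balancer": "<load-balancer-uuid>",  # placeholder: from LB creation
    "algorithm": "round_robin",
    "protocol": "tcp",
}

# POST each to /v1/load-balancers/pools/<pool-uuid>/members
members = [
    {"name": "web-1", "address": "10.0.0.10", "protocol_port": 80,
     "subnet": "<subnet-uuid>"},  # placeholder: your shared private subnet
    {"name": "web-2", "address": "10.0.0.11", "protocol_port": 80,
     "subnet": "<subnet-uuid>"},
]

# POST to /v1/load-balancers/listeners
listener = {
    "name": "web-http",
    "pool": "<pool-uuid>",  # placeholder: from pool creation
    "protocol": "tcp",
    "protocol_port": 80,
}
```

With both members healthy and `round_robin` selected, repeated HTTP requests to the listener should alternate between the two identifiers.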
Using the load balancer above, limit HTTP access to your servers by manipulating the allowed_cidrs property of your listener.
This property propagates quickly, allowing for automated dynamic firewall access.
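A sketch of such an update via the REST API follows. It only builds the PATCH request rather than sending it; the URL shape and the semantics of `allowed_cidrs` (an empty list allows everyone, a non-empty list restricts access to those networks) are assumptions based on the API documentation, so verify them there.

```python
# Sketch: restrict a listener to specific source networks by PATCHing
# its allowed_cidrs property. URL and field name per the API docs (verify).
import json
import urllib.request

def restrict_listener(listener_uuid: str, token: str, cidrs: list) -> urllib.request.Request:
    """Build a PATCH request limiting a listener to the given CIDRs."""
    return urllib.request.Request(
        f"https://api.cloudscale.ch/v1/load-balancers/listeners/{listener_uuid}",
        data=json.dumps({"allowed_cidrs": cidrs}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="PATCH",
    )
```

Sending this request with, say, `["203.0.113.0/24"]` should make the listener drop traffic from all other networks shortly afterwards.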
Create a Kubernetes cluster using k8test. You should use at least one control and one worker node.
Follow the CCM install instructions to install our CCM on the cluster.
Then follow the load balancer example to start a load-balanced Kubernetes deployment.
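With the CCM installed, creating a Service of type LoadBalancer is what triggers load balancer provisioning. A minimal manifest might look like the following; the names and the `app: web` selector are assumptions for illustration, and the load balancer example in the CCM repository is authoritative.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer   # the CCM provisions a cloudscale load balancer for this
  selector:
    app: web           # assumption: your Deployment labels its pods app=web
  ports:
    - port: 80
      targetPort: 80
```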