# 06 Load Balancers

Modern deployments in general, and Kubernetes deployments in particular, often employ load balancers to distribute load, improve uptime, and increase security.

Our load balancers are highly available: they run in pairs on separate hosts. In addition to being manageable via API, Terraform, and Ansible, they come with Cloud Controller Manager (CCM) integration, which makes them easy to integrate into Kubernetes.

## 💡 Background Information

References

## 🎓 Exercises

### 06.01 - Load Balancer Components

Load balancers consist of several components. Read the documentation to understand what these endpoints manage (a small query sketch follows the list):

1. `/v1/load-balancers`
2. `/v1/load-balancers/pools`
3. `/v1/load-balancers/pools/<pool-uuid>/members`
4. `/v1/load-balancers/listeners`
5. `/v1/load-balancers/health-monitors`
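
As a starting point, here is a minimal sketch that lists what currently exists behind these endpoints. The base URL and the Bearer-token authentication are assumptions based on the usual cloudscale.ch API conventions; check the API reference for the authoritative details.

```python
# List the resources behind the documented load balancer endpoints.
# (Members live under a specific pool: /load-balancers/pools/<pool-uuid>/members.)
import os
import requests

API = "https://api.cloudscale.ch/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['CLOUDSCALE_API_TOKEN']}"}

for path in (
    "/load-balancers",
    "/load-balancers/pools",
    "/load-balancers/listeners",
    "/load-balancers/health-monitors",
):
    response = requests.get(f"{API}{path}", headers=HEADERS)
    response.raise_for_status()
    print(path, "->", [resource.get("name") for resource in response.json()])
```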

The underlying technology is HAProxy, but any TCP proxy can serve as a mental model. How would you map the components above onto the following functions of a TCP proxy deployment? (A toy proxy sketch follows the list.)

- Send probes to servers to see if they should still be contacted.
- Bind to a port on the public interface to receive traffic.
- Define a load balanced backend.
- Launch VMs and configure their outbound interface.
- Include a set of servers in the load balanced backend.
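
The sketch below is the toy version of that mental model: it binds to a port, keeps a fixed set of backend servers, and forwards each connection to the next server in round-robin order. Health probes are omitted, and the backend addresses are made up; this illustrates the proxy functions above, not how the actual load balancer is implemented.

```python
# Toy TCP proxy: bind to a public port, pick a backend server in round-robin
# order, and forward bytes in both directions. Health probes are omitted.
import asyncio
import itertools

BACKENDS = [("10.0.0.10", 80), ("10.0.0.11", 80)]  # hypothetical backend servers
next_backend = itertools.cycle(BACKENDS)

async def pipe(reader, writer):
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle_client(client_reader, client_writer):
    host, port = next(next_backend)  # choose a server from the backend set
    backend_reader, backend_writer = await asyncio.open_connection(host, port)
    await asyncio.gather(
        pipe(client_reader, backend_writer),
        pipe(backend_reader, client_writer),
    )

async def main():
    # bind to a port on the public interface to receive traffic
    server = await asyncio.start_server(handle_client, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```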

### 06.02 - Load Balancer

Create two servers, and have them return an identifier when queried via HTTP (e.g., by installing nginx and changing the default website).
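
If you would rather not touch the nginx configuration, a tiny Python stand-in that answers with the host name works just as well for this exercise (the port is your choice; 80 keeps the listener configuration simple but requires root):

```python
# Answer every HTTP request with this server's host name.
from http.server import BaseHTTPRequestHandler, HTTPServer
import socket

class WhoAmI(BaseHTTPRequestHandler):
    def do_GET(self):
        body = f"served by {socket.gethostname()}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 80), WhoAmI).serve_forever()  # port 80 needs root
```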

The servers must be in a shared private subnet. It is fine to also give them a public interface to ease configuration; in a real scenario you would typically use a bastion host and access the servers from there instead.

Using your favourite approach (Terraform, Ansible, REST), create a load balancer that includes these servers as pool members. When queried via HTTP, the load balancer should return answers from both servers in turn (round-robin).
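
If you go the REST route, the flow is roughly: create the load balancer, attach a pool, add both servers as members, and expose the pool through a listener. The endpoints below match the list in 06.01, but the payload fields and values (`flavor`, `zone`, `algorithm`, the member attributes) are assumptions; take the exact names from the API reference.

```python
# Rough REST flow: load balancer -> pool -> members -> listener.
# Field names and values below are illustrative, not authoritative.
import os
import requests

API = "https://api.cloudscale.ch/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['CLOUDSCALE_API_TOKEN']}"}

def post(path, payload):
    response = requests.post(f"{API}{path}", json=payload, headers=HEADERS)
    response.raise_for_status()
    return response.json()

lb = post("/load-balancers", {"name": "exercise-lb", "flavor": "lb-standard", "zone": "lpg1"})

pool = post("/load-balancers/pools", {
    "name": "web",
    "load_balancer": lb["uuid"],
    "algorithm": "round_robin",
    "protocol": "tcp",
})

for address in ("10.0.0.10", "10.0.0.11"):   # private addresses of your two servers
    post(f"/load-balancers/pools/{pool['uuid']}/members", {
        "name": address,
        "address": address,
        "protocol_port": 80,
        "subnet": "<subnet-uuid>",            # the shared private subnet
    })

post("/load-balancers/listeners", {
    "name": "http",
    "pool": pool["uuid"],
    "protocol": "tcp",
    "protocol_port": 80,
})

# Once the load balancer is up, repeated requests should alternate between
# the two identifiers:
for _ in range(4):
    print(requests.get("http://<load-balancer-vip>/").text.strip())
```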

### 06.03 - Load Balancer Firewall

Using the load balancer above, limit HTTP access to your servers by manipulating the `allowed_cidrs` property of your listener.

Changes to this property propagate quickly, which makes it suitable for automated, dynamic firewall rules.
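
A sketch of what that could look like via REST: `allowed_cidrs` is the documented property, while the PATCH verb and the listener URL are assumptions in line with the endpoint list in 06.01.

```python
# Restrict, then re-open, access by patching the listener's allowed_cidrs.
import os
import requests

API = "https://api.cloudscale.ch/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['CLOUDSCALE_API_TOKEN']}"}
LISTENER_UUID = "<listener-uuid>"  # the listener created in 06.02

def set_allowed_cidrs(cidrs):
    response = requests.patch(
        f"{API}/load-balancers/listeners/{LISTENER_UUID}",
        json={"allowed_cidrs": cidrs},
        headers=HEADERS,
    )
    response.raise_for_status()

set_allowed_cidrs(["203.0.113.0/24"])  # only this range may connect
set_allowed_cidrs([])                  # an empty list usually means "no restriction"
```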

### 06.04 - Kubernetes with Cloudscale Cloud Controller Manager

Create a Kubernetes cluster using k8test, with at least one control-plane node and one worker node.

Follow the CCM install instructions to deploy our CCM on the cluster.

Then follow the load balancer example to start a load-balanced Kubernetes deployment.
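
What ties the pieces together is a Service of type LoadBalancer: once the CCM is installed, creating such a Service is what makes a cloudscale load balancer appear in front of the matching pods. The official example presumably does this with a YAML manifest; the sketch below does the same thing with the Kubernetes Python client, with a hypothetical `whoami` Deployment and label as the backend.

```python
# Create a Service of type LoadBalancer; with the cloudscale CCM installed,
# this is what triggers provisioning of an actual load balancer.
from kubernetes import client, config

config.load_kube_config()  # uses your kubeconfig, e.g. the one k8test produced

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="whoami"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "whoami"},              # must match your Deployment's pod labels
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
# Watch `kubectl get service whoami` until EXTERNAL-IP is populated, then curl it.
```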