The configuration of Kubernetes clusters in this repository is structured in the following folders:
Contains the Kustomization objects to sync with each cluster (`test` and `production`). The repository leverages Kustomize overlays to apply different security and application settings for the `test` and `production` clusters.
Contains a ConfigMap created in the `flux-system` namespace to pass values of resources created outside the cluster to HelmReleases. Examples of these are the IAM Role ARN and clusterName for the aws-load-balancer-controller add-on. The sample uses this ConfigMap in the Infrastructure Kustomization to perform variable substitution.
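As a sketch of how that substitution is wired up (resource names and values here are illustrative, not taken from this repository), a ConfigMap carries the externally created values and a Flux Kustomization references it via `postBuild.substituteFrom`, after which manifests can use `${CLUSTER_NAME}`-style placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-config   # hypothetical name
  namespace: flux-system
data:
  CLUSTER_NAME: my-eks-cluster
  AWS_LOAD_BALANCER_CONTROLLER_IAM_ROLE_ARN: arn:aws:iam::111122223333:role/example-role
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  interval: 10m
  path: ./infrastructure
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  postBuild:
    # substitute ${VAR} placeholders in manifests with the ConfigMap data
    substituteFrom:
      - kind: ConfigMap
        name: cluster-config
```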
Contains the add-ons to be installed on the cluster as well as the source objects for the add-ons and applications installed. It's divided into two subfolders:
- sources: contains all the GitRepository and HelmRepository objects for the add-ons that will be installed.
- add-ons: contains the Kustomization, HelmRelease and other manifests to install and configure the add-ons listed on the README.
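The pairing between the two subfolders might look like the following sketch: a HelmRepository source (sources) consumed by a HelmRelease (add-ons). Chart names and versions are illustrative, and API versions may differ by Flux release:

```yaml
# sources/: where Flux fetches the chart from
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: ingress-nginx
  namespace: flux-system
spec:
  interval: 30m
  url: https://kubernetes.github.io/ingress-nginx
---
# add-ons/: the add-on installed from that source
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: ingress-nginx
  namespace: flux-system
spec:
  interval: 10m
  chart:
    spec:
      chart: ingress-nginx
      sourceRef:
        kind: HelmRepository
        name: ingress-nginx
```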
The `config/base/policies` folder contains a set of Kyverno policies implementing a common set of security best practices. Kustomize overlays for `test` and `production` then configure Kyverno in `audit` or `enforce` mode.
These policies are meant to be used as an example and a starting point to define your own policies. This is not meant to be a comprehensive implementation of security best practices. To learn about EKS Security Best practices recommendations, please visit the EKS Best Practices guides here.
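One way such an overlay can flip the mode (shown as a sketch; the actual file layout and patch style in this repository may differ) is a Kustomize JSON6902 patch that sets `validationFailureAction` on every ClusterPolicy:

```yaml
# overlays/production/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/policies
patches:
  - patch: |-
      # switch policies from audit to enforce for production
      - op: replace
        path: /spec/validationFailureAction
        value: enforce
    target:
      kind: ClusterPolicy
```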
Contains the tenants that are on-boarded to the cluster. The example contains a single tenant named `podinfo-team` and creates the following resources:
- A namespace for the team
- A ServiceAccount within the namespace, with a RoleBinding to the `cluster-admin` ClusterRole, for Flux to impersonate when reconciling tenant resources. You can further constrain permissions to limit the resources that tenants can create.
- A GitRepository Custom Resource pointing to the tenant repository: flux-eks-gitops-config-tenant
- A Kustomization Custom Resource pointing to the above repository to reconcile tenant resources.
Kustomize overlays are used to patch `test` and `production` deployments with specific values.
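The impersonation setup for the tenant Kustomization might look like the following sketch (the `serviceAccountName` field is standard Flux; exact names and paths are illustrative):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: podinfo-team
  namespace: podinfo-team
spec:
  interval: 5m
  path: ./
  prune: true
  # Flux impersonates this ServiceAccount when applying tenant resources,
  # so the tenant is limited to whatever RBAC the ServiceAccount is granted
  serviceAccountName: podinfo-team
  sourceRef:
    kind: GitRepository
    name: flux-eks-gitops-config-tenant
```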
The tenant repository flux-eks-gitops-config-tenant contains the tenant applications to be deployed on the cluster. This sample uses podinfo as the example application and uses Kustomize overlays to set up specific configuration for the `test` and `production` clusters.
The application is configured to use progressive deployments orchestrated by Flagger and the NGINX ingress controller. ServiceMonitors are also configured so that the Prometheus Operator can scrape application metrics, and a MetricTemplate defines the canary metric used for our deployments. These objects are defined in `base/podinfo/canary.yaml` in the flux-eks-gitops-config-tenant repository.
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo-canary
  namespace: podinfo
spec:
  # service mesh provider can be: kubernetes, istio, appmesh, nginx, gloo
  provider: nginx
  service:
    port: 80
    targetPort: 9898
  # deployment reference
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  # ingress reference
  ingressRef:
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    name: podinfo
  # HPA reference (optional)
  autoscalerRef:
    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    name: podinfo
  # the maximum time in seconds for the canary deployment
  # to make progress before rollback (default 600s)
  progressDeadlineSeconds: 60
  analysis:
    # schedule interval (default 60s)
    interval: 10s
    # max number of failed metric checks before rollback
    threshold: 10
    # max traffic percentage routed to canary
    # percentage (0-100)
    maxWeight: 50
    # canary increment step
    # percentage (0-100)
    stepWeight: 10
    # NGINX Prometheus checks
    metrics:
      - name: error-rate
        templateRef:
          name: error-rate
          namespace: podinfo
        thresholdRange:
          max: 1
        interval: 1m
    webhooks:
      - name: acceptance-test
        type: pre-rollout
        url: http://flagger-loadtester.flagger-system/
        timeout: 30s
        metadata:
          type: bash
          cmd: "curl -sd 'test' http://podinfo-canary.podinfo/token | grep token"
      - name: load-test
        url: http://flagger-loadtester.flagger-system/
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 -host podinfo.test http://myapp.example.com"
```
To test progressive deployments of podinfo, go to Flagger Canary Deployments.
The repository specifies dependencies between FluxCD Kustomization and HelmRelease objects as follows:
Flux first syncs the infrastructure Kustomization. Once it has synced and passed its health checks, Flux syncs the config Kustomization and finally the tenants Kustomization.
```
external-config-data
  |-- infrastructure
        |-- config
              |-- tenants
```
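This ordering is expressed with the standard Flux `dependsOn` field; a sketch of how the config Kustomization might declare its dependency on infrastructure (names follow the tree above, other values are illustrative):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: config
  namespace: flux-system
spec:
  # reconcile only after the infrastructure Kustomization is ready
  dependsOn:
    - name: infrastructure
  interval: 10m
  path: ./config
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
```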
Within the infrastructure Kustomization, this repo defines the following dependencies between HelmReleases:
```
[ aws-load-balancer-controller | calico | kube-prometheus-stack ]
    |-------- [ kyverno | ingress-nginx ]
                  |-------- [ flagger ]
```
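HelmRelease objects support the same `dependsOn` mechanism; for example, flagger depending on ingress-nginx could be sketched as follows (chart details and API version are illustrative):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: flagger
  namespace: flux-system
spec:
  # install only after the ingress-nginx HelmRelease is ready
  dependsOn:
    - name: ingress-nginx
  interval: 10m
  chart:
    spec:
      chart: flagger
      sourceRef:
        kind: HelmRepository
        name: flagger
```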