This document will walk through how to create three managed Kubernetes clusters on separate providers (Google, Amazon, and DigitalOcean), deploying:
- Dex as the OIDC issuer for all clusters, running only in the master cluster.
- Gangway web server to authenticate users to Dex and help generate Kubeconfig files.
- kube-oidc-proxy to expose all clusters to OIDC authentication.
- Contour as the ingress controller with TLS SNI passthrough enabled.
- Cert-Manager to issue and manage certificates.
It will also demonstrate how to enable different authentication methods that Dex supports, namely username and password, and GitHub; more are available.
The tutorial will use Cert-Manager to generate certificates signed by Let's Encrypt for components in all clouds using a DNS challenge. While this is not the only way to generate certificates, the tutorial assumes that a domain belonging to your Google Cloud project will be used, and that records for sub-domains of this domain will be created to assign DNS to the components. A Google Cloud Service Account will be created to manage these DNS challenges, and its secrets will be passed to Cert-Manager.
A Service Account has been created for Terraform with its secrets stored at `~/.config/gcloud/terraform-admin.json`. The Service Account needs at least these IAM Roles attached (a sketch of creating such an account with `gcloud` follows this list):

- Compute Admin
- Kubernetes Engine Admin
- DNS Administrator
- Security Reviewer
- Service Account Admin
- Service Account Key Admin
- Project IAM Admin
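As a hedged sketch, such an account could be created and granted these roles with `gcloud`; the account name, key path, and `PROJECT_ID` variable are assumptions for illustration:

```
# Assumes PROJECT_ID is set and the gcloud CLI is authenticated.
gcloud iam service-accounts create terraform-admin \
  --display-name "terraform-admin"

# Attach the required roles (one binding per role listed above).
for role in roles/compute.admin roles/container.admin roles/dns.admin \
    roles/iam.securityReviewer roles/iam.serviceAccountAdmin \
    roles/iam.serviceAccountKeyAdmin roles/resourcemanager.projectIamAdmin; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member "serviceAccount:terraform-admin@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role "$role"
done

# Store the key where the tutorial expects it.
gcloud iam service-accounts keys create ~/.config/gcloud/terraform-admin.json \
  --iam-account "terraform-admin@${PROJECT_ID}.iam.gserviceaccount.com"
```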
You have an AWS account with permissions to create an EKS cluster and other relevant permissions to create a fully fledged cluster, including creating load balancers, instance pools, etc. Typically, these environment variables must be set when running Terraform and deploying the manifests before OIDC authentication has been set up:

- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_SESSION_TOKEN`
For DigitalOcean you need to get a write token from the console and export it using this environment variable (see the export sketch below):

- `DIGITALOCEAN_TOKEN`
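For example, with placeholder values; the real credentials come from your AWS and DigitalOcean consoles:

```
# Placeholder values for illustration only.
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...   # only needed for temporary credentials
export DIGITALOCEAN_TOKEN=...
```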
First the clusters will be created, along with secrets to be used for OIDC authentication for each cluster. The Amazon and DigitalOcean Terraform modules have resources that depend on the Google module, so the Google module must be applied first.
```
$ CLOUD=google make terraform_apply
$ CLOUD=amazon make terraform_apply
$ CLOUD=digitalocean make terraform_apply
```
This will create each cluster and a Service Account to manage Google Cloud DNS records for DNS challenges and OIDC secrets for all clusters. It should generate a JSON configuration file for each cluster in `./manifests/[google|amazon|digitalocean]-config.json` respectively.
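As an optional sanity check (assuming `jq` is available), you can confirm the generated configs exist and parse as JSON:

```
# Each apply should have produced one config file per cloud.
$ ls ./manifests/*-config.json
$ jq . ./manifests/google-config.json
```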
If you wish to use a custom CA to sign certificates for the kube-oidc-proxy then this is possible by setting the environment variables `CA_CRT_FILE` and `CA_KEY_FILE` to the full file path of the CA certificate and private key respectively. After a terraform apply, these will be stored in the Terraform state and will eventually be uploaded to Kubernetes as a Secret. Cert-Manager will then issue kube-oidc-proxy a certificate signed by this CA.
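A minimal sketch of generating such a CA with `openssl`; the file names and subject are illustrative:

```
# Create a throwaway CA key and self-signed certificate.
openssl genrsa -out ca.key 4096
openssl req -x509 -new -nodes -key ca.key -sha256 -days 365 \
  -subj "/CN=kube-oidc-proxy-demo-ca" -out ca.crt

# Point the tutorial's variables at the generated files.
export CA_CRT_FILE="$(pwd)/ca.crt"
export CA_KEY_FILE="$(pwd)/ca.key"
```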
Copy `config.dist.jsonnet` to `config.jsonnet`. This file will hold configuration for setting up the OIDC authentication in all clusters as well as assigning DNS. Firstly, determine what `base_domain` will be used for this demo. Ensure the `base_domain` starts with a `.`.
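For reference, the copy step is simply (run from the repository root):

```
$ cp config.dist.jsonnet config.jsonnet
```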
The domain, which needs to be managed in Google Cloud DNS, will have records like this:

- `dex.mydomain.company.net`
- `gangway-gke.mydomain.company.net`
- `gangway-eks.mydomain.company.net`
- `gangway-dok.mydomain.company.net`
Populate the configuration file with its corresponding domain (`.mydomain.company.net` in our example) and Let's Encrypt contact email.
Since the GKE cluster will be hosting Dex, the OIDC issuer, its configuration must define how users will authenticate. Here we will show two methods: username and password, and GitHub.
Usernames and passwords can be populated with the following block within the `dex` block:

```
dex+: if $.master then {
  users: [
    $.dex.Password('[email protected]', '$2y$10$i2.tSLkchjnpvnI73iSW/OPAVriV9BWbdfM6qemBM1buNRu81.ZG.'), // plaintext: secure
  ],
} else {
},
```
The username will be the username used by the user to authenticate and the user identity used for RBAC within Kubernetes. The password is a bcrypt hash of the plain text password, which can be generated as follows:

```
$ htpasswd -bnBC 10 "" MyVerySecurePassword | tr -d ':'
```
Dex also supports multiple 'connectors' that enable third party applications to provide OAuth to its system. For GitHub, this involves creating an 'OAuth App'.
The `Authorization callback URL` should be populated with the Dex callback URL, i.e. `https://dex.mydomain.company.net/callback`.
The resulting `Client ID` and `Client Secret` can then be used to populate the configuration file:

```
dex+: if $.master then {
  connectors: [
    $.dex.Connector('github', 'GitHub', 'github', {
      clientID: 'myGithubAppClientID',
      clientSecret: 'myGithubAppClientSecret',
      orgs: [{
        name: 'company',
      }],
    }),
  ],
} else {
},
```
You can find more information on GitHub OAuth apps in GitHub's documentation.
Once the configuration file has been created the manifests can be deployed:

```
$ CLOUD=google make manifests_apply
```
You should then see the components deployed to the cluster in the `auth` namespace:

```
$ export KUBECONFIG=.kubeconfig-google
$ kubectl get po -n auth
NAME                       READY     STATUS              RESTARTS   AGE
contour-55c46d7969-f9gfl   2/2       Running             0          46s
dex-7455744797-p8pql       0/1       ContainerCreating   0          12s
gangway-77dfdb68d-x84hj    0/1       ContainerCreating   0          11s
```
Verify that the ingress has been configured to what you were expecting:

```
$ kubectl get ingressroutes -n auth
```
You should now see the DNS challenge attempting to be fulfilled by Cert-Manager in your DNS Zone details in the Google Cloud console.
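You can also follow progress from inside the cluster; a hedged check, assuming Cert-Manager's `Certificate` CRD is installed:

```
$ kubectl get certificates -n auth
```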
Once complete, three TLS secrets will be generated: `gangway-tls`, `dex-tls`, and `kube-oidc-proxy-tls`.

```
$ kubectl get -n auth secret
```
You can save these certificates locally, and restore them at any time, using:

```
$ make manifests_backup_certificates
$ make manifests_restore_certificates
```
You can check that the DNS record has been propagated by trying to resolve it:

```
$ host gangway-gke.mydomain.company.net
```
Once propagated, you can then visit the Gangway URL, follow the instructions and download your Kubeconfig with OIDC authentication, pointing to the kube-oidc-proxy. Trying the Kubeconfig, you should be greeted with an error message that your OIDC username does not have enough RBAC permissions to access that resource.
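For example, a hedged sketch of trying it out; the download location of the generated Kubeconfig will vary:

```
# Point kubectl at the Kubeconfig downloaded from Gangway and make a request;
# until RBAC has been granted you should see a forbidden/unauthorized error.
$ export KUBECONFIG=~/Downloads/kubeconfig
$ kubectl get pods
```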
The EKS and DigitalOcean cluster manifests can now be deployed:

```
$ CLOUD=amazon make manifests_apply
$ CLOUD=digitalocean make manifests_apply
```
Get the AWS DNS URL for the Contour Load Balancer:

```
$ export KUBECONFIG=.kubeconfig-amazon
$ kubectl get svc -n auth
```
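To extract just the load balancer's hostname, a hedged one-liner (assuming the Contour service is named `contour`):

```
$ kubectl get svc contour -n auth \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```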
When components have their TLS secrets, you will then be able to log in to the Gangway portal on Amazon/DigitalOcean and download your Kubeconfig. Again, when trying this Kubeconfig, you will initially be greeted with an "unauthorized" error message until RBAC permissions have been granted to this user.
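As a sketch of granting such permissions; the binding name, role, and user are illustrative, so use whatever RBAC fits your setup:

```
# Grant the OIDC user read access cluster-wide; 'view' is a built-in
# ClusterRole, and the username must match the identity issued by Dex.
$ kubectl create clusterrolebinding oidc-user-view \
    --clusterrole=view \
    --user='[email protected]'
```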