Take OpenZiti for a spin with Terraform on Kubernetes. This guide walks you through applying a Terraform plan for each stage:
- Create a cluster with cert-manager and ingress-nginx.
- Deploy OpenZiti and enable Console and CLI access.
- Provision an OpenZiti router.
- Create some OpenZiti Services for cluster workloads.
- Configure your OpenZiti Client for network access.
[Architecture diagram: ingress-nginx w/ a NodeBalancer; cert-manager w/ a Let's Encrypt issuer and trust-manager; ziti-controller; ziti-console; ziti-router; httpbin demo API]
You will use these tools on your workstation:

- `ziti` CLI
- `helm`
- `kubectl`
- `terraform`
- `ansible`, with `ansible-galaxy collection install kubernetes.core` and `pip install --user kubernetes jmespath dnspython`
- an OpenZiti Tunneler
- `k9s`
- `curl`
- `jq`
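A quick way to confirm everything is on your `PATH` before you start (adjust the list to match what you actually installed):

```bash
# Verify the workstation tools used in this guide are installed.
for tool in ziti helm kubectl terraform ansible k9s curl jq; do
  command -v "$tool" >/dev/null && echo "ok: $tool" || echo "MISSING: $tool"
done
```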
- Delegate a DNS zone to Linode's nameservers so Terraform can manage the domain. For example, to delegate my-ziti-cluster.example.com to Linode, you need to create NS records named "my-ziti-cluster" in the example.com zone. You can verify it's working by checking the NS records with `dig` or the Google DNS Toolbox (record type `NS`).

  ```bash
  $ dig +noall +answer my-ziti-cluster.example.com. NS
  my-ziti-cluster.example.com. 1765 IN NS ns5.linode.com.
  my-ziti-cluster.example.com. 1765 IN NS ns4.linode.com.
  my-ziti-cluster.example.com. 1765 IN NS ns2.linode.com.
  my-ziti-cluster.example.com. 1765 IN NS ns3.linode.com.
  my-ziti-cluster.example.com. 1765 IN NS ns1.linode.com.
  ```
Saving Terraform state remotely is an optional step. If you plan to use the OpenZiti network for anything important, then go ahead and set that up. The plans use the local state backend by default, which will work fine for demonstration purposes.
- Configure your shell env for this TF plan.

  ```bash
  export TF_VAR_LINODE_TOKEN=XXX   # Linode API token
  export KUBECONFIG=./kube-config  # TF will write this file in the plan dir
  ```
Note: If you want to save state in Terraform Cloud, configure the additional env vars, uncomment `cloud {}`, and comment out `backend "local" {}` in the `main.tf` file.

```bash
export TF_CLOUD_ORGANIZATION=XXX
export TF_WORKSPACE=XXX
```

Furthermore, you need to configure your remote workspace to only save state, not run the plan remotely. The default execution mode in Terraform Cloud is remote execution, and that will not work with this plan because it uses some Ansible, which must run locally.
This first TF plan creates the LKE cluster and installs an OpenZiti Controller and Console with a Let's Encrypt certificate.
- In `./plan-10-k8s/terraform.tfvars`, specify the Linode size and count, etc., e.g.,

  ```hcl
  label    = "my-ziti-cluster"
  email    = "[email protected]"
  dns_zone = "my-ziti-cluster.example.com"
  region   = "us-west"
  pools = [
    {
      type : "g6-standard-2"
      count : 2
    }
  ]
  tags = ["alice-ziti-lab"]
  LINODE_TOKEN = "{token}"  # or set env var TF_VAR_LINODE_TOKEN
  ```
- Initialize the workspace.

  ```bash
  (cd ./plan-10-k8s/; terraform init;)
  ```

- Perform a dry run.

  ```bash
  (cd ./plan-10-k8s/; terraform plan;)
  ```

- Apply the plan.

  ```bash
  (cd ./plan-10-k8s/; terraform apply;)
  ```
- Test the cluster connection.

  ```bash
  KUBECONFIG=./kube-config kubectl cluster-info
  ```
- Print the Ziti login credential.

  ```bash
  kubectl -n ziti-controller get secrets ziti-controller-admin-secret \
    -o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'
  ```

  Example output:

  ```bash
  $ kubectl -n ziti-controller get secrets ziti-controller-admin-secret \
      -o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'
  admin-password: Gj63NwmZUJPwXsqbkzx8eQ6cdG8YBxP7
  admin-user: admin
  ```
- Visit the console: https://console.my-ziti-cluster.example.com
- Check the certificate. If it's from "(STAGING) Let's Encrypt" then the certificate issuer is working. If not, it's probably DNS.

  ```bash
  openssl s_client -connect console.my-ziti-cluster.example.com:443 <> /dev/null \
    |& openssl x509 -noout -subject -issuer
  ```

  Example output:

  ```bash
  $ openssl s_client -connect console.my-ziti-cluster.example.com:443 <> /dev/null \
      |& openssl x509 -noout -subject -issuer
  subject=CN = console.my-ziti-cluster.example.com
  issuer=C = US, O = (STAGING) Let's Encrypt, CN = (STAGING) Artificial Apricot R3
  ```
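  If the issuer looks wrong, a quick check is to confirm the console hostname resolves in the delegated zone (shown here with the example domain used throughout this guide):

  ```bash
  # The console record should resolve to the ingress-nginx NodeBalancer's IP.
  dig +noall +answer console.my-ziti-cluster.example.com A
  ```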
- Optionally, switch to the Let's Encrypt production issuer for a real certificate. Uncomment these lines in `terraform.tfvars` and run `terraform apply`. The production cert rate limit is real, hence Staging is the default.

  ```hcl
  cluster_issuer_name   = "cert-manager-production"
  cluster_issuer_server = "https://acme-v02.api.letsencrypt.org/directory"
  ```
The first TF plan created the LKE cluster; this one deploys the OpenZiti Controller.
- Initialize the workspace.

  ```bash
  (cd ./plan-15-controller/; terraform init;)
  ```

- Perform a dry run.

  ```bash
  (cd ./plan-15-controller/; terraform plan;)
  ```

- Apply the plan.

  ```bash
  (cd ./plan-15-controller/; terraform apply;)
  ```
- Probe the Ziti ingresses. Swap in the following subdomains (substituting your parent zone) in the same `openssl` command you used to probe the console above, or use the loop sketched after the example output below. All server certificates were issued by the controller's edge signer CA. You can configure advanced PKI with Helm values.

  - `ctrl.my-ziti-cluster.example.com:443`: control plane provided by the controller, consumed by routers
  - `client.my-ziti-cluster.example.com:443`: edge client API provided by the controller, consumed by routers and edge client SDKs
  - `management.my-ziti-cluster.example.com:443`: edge management API provided by the controller, consumed by the `ziti` CLI, the OpenZiti Console, and integrations like the Terraform Provider
  - `router1-edge.my-ziti-cluster.example.com:443`: edge listener provided by router1, consumed by Edge SDKs connecting to OpenZiti Services
  - `router1-transport.my-ziti-cluster.example.com:443`: link listener provided by router1, consumed by future routers you might add

  ```bash
  openssl s_client -connect management.my-ziti-cluster.example.com:443 <>/dev/null \
    |& openssl x509 -noout -text \
    | grep -A1 "Subject Alternative Name" \
    | sed -E 's/,?\s+(DNS|IP)/\n\t\t\1/g'
  ```

  Example output:

  ```
  X509v3 Subject Alternative Name:
      DNS:localhost
      DNS:ziti-controller
      DNS:ziti-controller-ctrl
      DNS:ziti-controller-ctrl.ziti
      DNS:ziti-controller-ctrl.ziti.svc
      DNS:ziti-controller-ctrl.ziti.svc.cluster
      DNS:ziti-controller-ctrl.ziti.svc.cluster.local
      DNS:ctrl.my-ziti-cluster.example.com
      DNS:ziti-controller-client
      DNS:ziti-controller-client.ziti
      DNS:ziti-controller-client.ziti.svc
      DNS:ziti-controller-client.ziti.svc.cluster
      DNS:ziti-controller-client.ziti.svc.cluster.local
      DNS:client.my-ziti-cluster.example.com
      DNS:ziti-controller-mgmt
      DNS:ziti-controller-mgmt.ziti
      DNS:ziti-controller-mgmt.ziti.svc
      DNS:ziti-controller-mgmt.ziti.svc.cluster
      DNS:ziti-controller-mgmt.ziti.svc.cluster.local
      DNS:management.my-ziti-cluster.example.com
      IP Address:127.0.0.1
  ```
- Run the `ziti` CLI remotely in the admin container. Change the command to `bash` to log in interactively, then run `zitiLogin`.

  ```bash
  # find the pod name
  kubectl --namespace ziti get pods

  # exec in the controller pod
  kubectl --namespace ziti exec \
    --stdin --tty \
    ziti-controller-6c79575bb4-lh9nt \
    --container ziti-controller-admin -- \
      bash -c '
        zitiLogin &>/dev/null;
        ziti edge list ers --output-json' \
    | jq --slurp
  ```
- Log in with the `ziti` CLI locally.

  ```bash
  kubectl get secrets "ziti-controller-admin-secret" \
    --namespace ziti \
    --output go-template='{{index .data "admin-password" | base64decode }}' \
    | xargs ziti edge login management.my-ziti-cluster.example.com:443 \
      --yes \
      --username admin \
      --password
  ```
- Forward local port 1280/tcp to the Ziti management API.

  You have the option to delete the management API Ingress and access the API through a kubectl port forward or an OpenZiti Service. The OpenZiti Service address created by the services plan will match one of the DNS SANs on the controller's server certificate: "mgmt.ziti".

  ```bash
  kubectl port-forward services/ziti-controller-mgmt \
    --namespace ziti \
    1280:443
  ```
- Log in with the `ziti` CLI through the port forward.

  ```bash
  kubectl get secrets "ziti-controller-admin-secret" \
    --namespace ziti \
    --output go-template='{{index .data "admin-password" | base64decode }}' \
    | xargs ziti edge login localhost:1280 \
      --yes \
      --username admin \
      --password
  ```
- Save the management API spec. If you're automating or integrating with the API, it's a good idea to reference your running controller's built-in spec to ensure compatibility.

  ```bash
  curl -sk https://localhost:1280/edge/management/v1/swagger.json \
    | tee /tmp/swagger.json
  ```
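  For example, with the `jq` from the prerequisites you can pull a quick summary out of the saved spec (field names assume the usual Swagger 2.0 layout):

  ```bash
  # Show the API version and how many paths the spec documents.
  jq '{version: .info.version, paths: (.paths | keys | length)}' /tmp/swagger.json
  ```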
- Visit the Management API reference in a web browser: https://localhost:1280/edge/management/v1/docs
This plan will deploy an OpenZiti Router. The main reason it's separate from the first plan is that the OpenZiti Terraform Provider gets configuration input from the Kubernetes plan's TF state.
- Initialize the workspace.

  ```bash
  (cd ./plan-20-router/; terraform init;)
  ```

- Perform a dry run.

  ```bash
  (cd ./plan-20-router/; terraform plan;)
  ```

- Apply the plan.

  ```bash
  (cd ./plan-20-router/; terraform apply;)
  ```
The router will appear "online=true" in a minute or two.
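You can also check from the CLI; assuming you're still logged in with `ziti` from the controller steps, list the edge routers and watch for the online flag:

```bash
# The new router should report as online within a minute or two.
ziti edge list edge-routers
```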
This plan will create some OpenZiti Services:
- a demo API running `httpbin`
- the OpenZiti Management API as an OpenZiti Service (so you can firewall it off later if you like)
- the Kubernetes apiserver
- Initialize the workspace.

  ```bash
  (cd ./plan-30-services/; terraform init;)
  ```

- Perform a dry run.

  ```bash
  (cd ./plan-30-services/; terraform plan;)
  ```

- Apply the plan.

  ```bash
  (cd ./plan-30-services/; terraform apply;)
  ```
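  If you like, confirm the new Services exist; assuming you're still logged in with the `ziti` CLI from the controller steps:

  ```bash
  # The demo API, management API, and Kubernetes apiserver services should appear.
  ziti edge list services
  ```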
This plan will help you get started using OpenZiti Services with a Tunneler.
- Initialize the workspace.

  ```bash
  (cd ./plan-40-client/; terraform init;)
  ```

- Perform a dry run.

  ```bash
  (cd ./plan-40-client/; terraform plan;)
  ```

- Apply the plan.

  ```bash
  (cd ./plan-40-client/; terraform apply;)
  ```
- Add the demo client identity to Ziti Desktop Edge. The JWT is saved in `/tmp/edge-client1.jwt`.
- Test the demo API.

  ```bash
  curl -sSf -XPOST -d ziti=awesome http://testapi.ziti/post | jq .data
  ```
- Use the Kubernetes API over Ziti.

  Terraform modified your kubeconfig to add a new Ziti context named like "ziti-lke12345-ctx", pointing to the Ziti Service for the Kubernetes apiserver instead of the public Linode endpoint. Find the name and select it with `kubectl`.

  ```bash
  $ kubectl config get-contexts
  CURRENT   NAME                CLUSTER             AUTHINFO         NAMESPACE
  *         lke95021-ctx        lke95021            lke95021-admin   default
            ziti-lke95021-ctx   ziti-lke95021-ctx   lke95021-admin

  $ kubectl --context ziti-lke95021-ctx cluster-info
  Kubernetes control plane is running at https://kubernetes.default.svc
  KubeDNS is running at https://kubernetes.default.svc/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
  ```
Destroy the Terraform plans in reverse order if you plan on re-using the directories. That way you won't have any problems with leftover Terraform state.
You can tear down the cluster like this.
```bash
(cd ./plan-40-client/; terraform destroy;)
(cd ./plan-30-services/; terraform destroy;)
(cd ./plan-20-router/; terraform destroy;)
(cd ./plan-15-controller/; terraform destroy;)
(cd ./plan-10-k8s/; terraform destroy;)
```
In your Tunneler, i.e. Ziti Desktop Edge, remember to forget the OpenZiti Identity named "edge-client".
If you uninstall `ingress-nginx`, then your LoadBalancer public IP is released. You'll get a new one if you reinstall `ingress-nginx` with the first Kubernetes TF plan, but you'll have to wait for DNS to re-propagate.