This project demonstrates using Consul Envoy extensions with Consul on Kubernetes:

- OPA integration via `builtin/ext-authz`
- Wasm integration via `builtin/wasm`
## Prerequisites

- `consul` >= v1.16.0
- `consul-k8s` >= v1.2.0
- `docker`
- `jq`
- `kind`
## Create k8s cluster

This command executes a script that runs a local Docker registry and creates a kind cluster that can pull images from that registry. The local registry can be useful during development or debugging when you need a local dev image for any of the containers; it is not required when using published container images.

```shell
./kind-with-rgy
```
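The script itself isn't reproduced here, but it presumably follows the standard kind local-registry recipe, roughly like the sketch below. The registry name `kind-registry` and port `5000` match the commands used elsewhere in this demo; everything else is an assumption.

```shell
# Hypothetical sketch of a kind-with-registry script; see ./kind-with-rgy
# for the real version.

# Run a local registry container.
docker run -d --restart=always -p 5000:5000 --name kind-registry registry:2

# Create a kind cluster whose containerd resolves localhost:5000 to it.
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
    endpoint = ["http://kind-registry:5000"]
EOF

# Put the registry on the cluster's docker network.
docker network connect kind kind-registry
```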
Create the application namespaces:

```shell
kubectl create namespace pci-backend
kubectl create namespace pci-frontend
```
## Build Consul Enterprise

This step is only necessary while work is in progress and you need to pull dev images. It is unnecessary when using published container images.

```shell
cd $HOME/github.com/hashicorp/consul-enterprise/
make dev-docker
docker tag consul-dev localhost:5000/consul-dev-ent
docker push localhost:5000/consul-dev-ent
cd -
```

or

```shell
( cd ~/github.com/hashicorp/consul-enterprise/ && make dev-docker && docker tag consul-dev localhost:5000/consul-dev-ent && docker push localhost:5000/consul-dev-ent )
```
## Setup Secrets

```shell
rm -rf secrets
mkdir secrets
echo -n "1111111-2222-3333-4444-555555555555" > secrets/root-token.txt
echo -n "$(consul keygen)" > secrets/gossip-key.txt
(cd secrets && consul tls ca create)
chmod 600 secrets/*
```
Create a k8s secret for the Consul data:

```shell
kubectl create secret generic consul \
  --from-file=root-token=secrets/root-token.txt \
  --from-file=gossip-key=secrets/gossip-key.txt \
  --from-file=ca-cert=secrets/consul-agent-ca.pem \
  --from-file=ca-key=secrets/consul-agent-ca-key.pem \
  --from-file=enterprise-license=${CONSUL_LICENSE_PATH}
```
## Configure Volume for Service Wasm and OPA Data

```shell
kubectl apply -f pv/pv.yaml
kubectl apply -f pv/pvc.yaml
```
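The PV and PVC definitions aren't shown here; a minimal sketch of the kind of pair they likely contain follows. The names, size, and `hostPath` are assumptions, not the real manifests.

```yaml
# Assumed shape of pv/pv.yaml and pv/pvc.yaml; see those files for the
# real definitions.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: extensions
spec:
  storageClassName: manual          # assumed; lets the PVC bind to this PV
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /extensions               # assumed path on the kind node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: extensions
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```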
## Install Consul using `consul-k8s`

```shell
consul-k8s install -config-file values.yaml -namespace default -auto-approve
```
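`values.yaml` lives in the repo and isn't reproduced here. A plausible excerpt, wiring the Helm chart to the `consul` k8s secret created above, might look like the sketch below; the exact values are assumptions.

```yaml
# Plausible excerpt of values.yaml; see the real file for the
# authoritative settings.
global:
  name: consul
  enableConsulNamespaces: true      # required for the pci-* Consul namespaces
  acls:
    manageSystemACLs: true
    bootstrapToken:
      secretName: consul
      secretKey: root-token
  gossipEncryption:
    secretName: consul
    secretKey: gossip-key
  tls:
    enabled: true
    caCert:
      secretName: consul
      secretKey: ca-cert
    caKey:
      secretName: consul
      secretKey: ca-key
  enterpriseLicense:
    secretName: consul
    secretKey: enterprise-license
connectInject:
  enabled: true
```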
## Configure the Environment

Use the `env.sh` script:

```shell
. env.sh
```
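The script's contents aren't shown here; given how `$API_APP` and `$WEB_APP` are used below, it presumably exports the current pod names, roughly like this sketch (the label selectors are assumptions):

```shell
# Assumed sketch of env.sh: export the current api and web pod names.
export API_APP=$(kubectl get pods -l app=api -o jsonpath='{.items[0].metadata.name}')
export WEB_APP=$(kubectl get pods -l app=web -o jsonpath='{.items[0].metadata.name}')
```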
## Port Forwards

```shell
# Consul :8501
kubectl port-forward services/consul-server 8501

# web :9090
kubectl port-forward services/web 9090

# api Envoy admin :19000
kubectl port-forward pods/$API_APP 19000
```
## Create PCI Namespaces

```shell
consul namespace create -name pci-backend
consul namespace create -name pci-frontend
```
## Install the apps

```shell
kubectl apply -f apps/1-install/opa-config-map.yaml
kubectl apply -f apps/1-install/fs.yaml
kubectl apply -f apps/1-install/api.yaml
kubectl apply -f apps/1-install/web.yaml
kubectl apply -f apps/1-install/pci-backend-db.yaml -n pci-backend
kubectl apply -f apps/1-install/pci-frontend-web.yaml -n pci-frontend
kubectl apply -f apps/1-install/intentions.yaml
kubectl apply -f apps/1-install/service-defaults.yaml
```
## Test out the apps

Browse to the web app UI.

```shell
# exec into the web pod to directly call the api pod
kubectl exec -it pod/$WEB_APP -c web -- ash

# good actor
curl -w "\nHTTP status: %{http_code}\n\n" 'http://api.default.svc.cluster.local/'

# bad actor requests - these will succeed because no WAF is in place yet
# attempted SQL injection
curl -w "\nHTTP status: %{http_code}\n\n" 'http://api.default.svc.cluster.local/' -d '1%27%20ORDER%20BY%203--%2B'
# attempted JS injection
curl -w "\nHTTP status: %{http_code}\n\n" 'http://api.default.svc.cluster.local/?arg=<script>alert(0)</script>'
```
## Get the Coraza WAF

```shell
curl -sSL -o extensions/tmp.zip https://github.com/corazawaf/coraza-proxy-wasm/releases/download/0.1.0/coraza-proxy-wasm-0.1.0.zip \
  && unzip -o extensions/tmp.zip -d extensions \
  && rm extensions/tmp.zip \
  && sha256sum extensions/coraza-proxy-wasm.wasm
```

Copy the SHA256 checksum; we'll need it for the next step.
## Add the Wasm Envoy extension to the `api` app

```shell
vi apps/2-wasm/service-defaults.yaml
kubectl apply -f apps/2-wasm/service-defaults.yaml
```
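The service-defaults entry isn't reproduced here. Based on the `builtin/wasm` extension's documented arguments, it plausibly looks something like the sketch below, with the filter fetched through the `fs` upstream and verified against the checksum copied in the previous step; the URI and exact field values are assumptions.

```yaml
# Sketch only - see apps/2-wasm/service-defaults.yaml for the real entry.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: api
spec:
  protocol: http
  envoyExtensions:
    - name: builtin/wasm
      arguments:
        protocol: http
        listenerType: inbound
        pluginConfig:
          vmConfig:
            code:
              remote:
                httpURI:
                  service:
                    name: fs                              # assumed upstream serving the .wasm file
                  uri: https://fs/coraza-proxy-wasm.wasm  # assumed URI
                sha256: <paste the checksum from the previous step>
```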
## Demo the WAF in the data path

```shell
# exec into the web pod to directly call the api pod
kubectl exec -it pod/$WEB_APP -c web -- ash

# good actor
curl 'http://api.default.svc.cluster.local/'

# bad actor requests - now rejected by the WAF with 403 Forbidden
# attempted SQL injection
curl -w "\nHTTP status: %{http_code}\n\n" 'http://api.default.svc.cluster.local/' -d '1%27%20ORDER%20BY%203--%2B'
# attempted JS injection
curl -w "\nHTTP status: %{http_code}\n\n" 'http://api.default.svc.cluster.local/?arg=<script>alert(0)</script>'
```
## Deploy the OPA agent sidecar to the `api` app

```shell
vimdiff apps/1-install/api.yaml apps/3-opa/api.yaml
kubectl apply -f apps/3-opa/api.yaml

# the pod name changes on redeploy, so refresh the environment
. env.sh
kubectl describe pods/${API_APP}
```
## Apply the `ext_authz` extension to enable OPA

```shell
vi apps/3-opa/service-defaults.yaml
kubectl apply -f apps/3-opa/service-defaults.yaml
```
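The entry isn't shown here; following the `builtin/ext-authz` extension's documented arguments, it plausibly resembles this sketch. The gRPC port `9191` is the OPA Envoy plugin's conventional default, assumed here.

```yaml
# Sketch only - see apps/3-opa/service-defaults.yaml for the real entry.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: api
spec:
  protocol: http
  envoyExtensions:
    - name: builtin/ext-authz
      arguments:
        proxyType: connect-proxy
        config:
          grpcService:
            target:
              uri: 127.0.0.1:9191   # assumed: the OPA sidecar's gRPC listener
```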
## Test OPA policy enforcement

Ensure that we're only permitted to read (method = `GET`) resources from the `api` during the hours of the demo. Writes (method = `POST`) are not permitted.

```shell
vi extensions/policy.rego
```
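The real rules live in `extensions/policy.rego`; a minimal sketch of a policy with that shape, assuming OPA's Envoy plugin input format and an illustrative 09:00-17:00 demo window, might be:

```rego
# Sketch only - see extensions/policy.rego for the real policy.
package envoy.authz

default allow = false

# Permit reads (GET) during the demo window; everything else, including
# POST, falls through to the default deny.
allow {
    input.attributes.request.http.method == "GET"
    [hour, _, _] := time.clock(time.now_ns())
    hour >= 9     # illustrative window
    hour < 17
}
```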
```shell
# reads OK
curl 'http://api.default.svc.cluster.local/'
curl 'http://api.default.svc.cluster.local/admin'

# write NOK to /admin
curl -w "\nHTTP status: %{http_code}\n\n" -XPOST 'http://api.default.svc.cluster.local/admin'
```
## `web` can access `pci-backend-db` with an intention

```shell
vi apps/3-opa/pci-intentions.yaml
kubectl apply -f apps/3-opa/pci-intentions.yaml
```
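The intention file isn't reproduced here; expressed with the ServiceIntentions CRD, it plausibly looks like this sketch:

```yaml
# Sketch only - see apps/3-opa/pci-intentions.yaml for the real intention.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: pci-backend-db
  namespace: pci-backend
spec:
  destination:
    name: pci-backend-db
  sources:
    - name: web
      namespace: default
      action: allow
```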
```shell
# from the web pod: allowed by the intention
curl 'http://pci-backend-db.pci-backend.svc.cluster.local/'
```
## Use OPA policy to enforce PCI

Only applications in the `pci-frontend` and `pci-backend` namespaces can talk to `pci-backend` services.

```shell
vi extensions/pci.rego
```
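A sketch of a namespace-based rule with that effect, assuming the caller's Consul SPIFFE ID arrives as the ext_authz source principal, could look like:

```rego
# Sketch only - see extensions/pci.rego for the real policy.
package envoy.authz

default allow = false

# Consul identifies the caller with a SPIFFE ID of the form
# spiffe://<trust-domain>/ns/<namespace>/dc/<datacenter>/svc/<service>,
# so the source namespace can be read out of the principal.
allow {
    contains(input.attributes.source.principal, "/ns/pci-frontend/")
}

allow {
    contains(input.attributes.source.principal, "/ns/pci-backend/")
}
```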
```shell
vi apps/3-opa/pci-service-defaults.yaml
kubectl apply -f apps/3-opa/pci-service-defaults.yaml
```
Now `web` can't talk to `pci-backend-db` because `web` is in the `default` namespace.

```shell
curl -w "\nHTTP status: %{http_code}\n\n" 'http://pci-backend-db.pci-backend.svc.cluster.local/'
```
## Extension Stats

```shell
# Wasm
curl -s localhost:19000/stats/prometheus | grep waf_filter
curl -s localhost:19000/stats | grep wasm

# OPA
curl -s localhost:19000/stats | grep 'ext_authz\.response'

# TODO: create a pretty jq filter for grokking the OPA logs
# Nice to have: a `result` object describing the "why" for a response
```
## Envoy filters

```shell
curl -sS localhost:19000/config_dump | jq --raw-output '.configs[2].dynamic_listeners[] | .active_state.listener.filter_chains[].filters[] | select(.name == "envoy.filters.network.http_connection_manager") | .typed_config.http_filters[] | select(.name == "envoy.filters.http.ext_authz")'

curl -sS localhost:19000/config_dump | jq --raw-output '.configs[2].dynamic_listeners[] | .active_state.listener.filter_chains[].filters[] | select(.name == "envoy.filters.network.http_connection_manager") | .typed_config.http_filters[] | select(.name == "envoy.filters.http.wasm")'
```
## Error Categories

There are two general categories of errors related to the Envoy extensions:

- Configuration errors - errors that Consul can catch through validation when the configuration entry is applied. Configuration errors result in a log message and an API response; the config entry is not written, and the proxy is not updated.
  - Symptom: the config entry is not applied.
  - Diagnosis: Consul responds with a detailed error message on write.
    - Consul CLI/HTTP API: the error is written to the console.
    - CRD: the error shows up in the resource: `kubectl describe service-defaults/<name>`
  - Solution: fix the configuration, as instructed, and reapply.
- Runtime errors - errors that Consul can only catch at runtime, when it attempts to apply the Envoy extension to the proxy. At this point the config entry has been written, but there is a problem applying the configuration, for example when the extension requires an upstream but no upstream is defined. This results in a log message, and the configuration is not applied to the proxy. These errors are more difficult to notice and diagnose.
  - Symptom: the configuration is written but the service is not behaving as expected.
  - Diagnosis:
    - Consul logs - check the logs on the Consul agent for errors relating to extensions.
    - `consul-dataplane` logs - check the logs on the `consul-dataplane` container for Envoy errors relating to the extensions.
    - Dump the Envoy config - dump the Envoy configuration and check for the expected filter.
  - Solution: fix the error and reapply the configuration.
## Reset to base install

```shell
kubectl apply -f apps/1-install/service-defaults.yaml
```
## Configuration Errors

```shell
vi apps/4-troubleshooting/config-err-service-defaults.hcl
```
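The file's contents aren't reproduced here; any entry that fails validation will do, for example this hypothetical one with a misspelled extension name:

```hcl
# Hypothetical sketch - see the real
# apps/4-troubleshooting/config-err-service-defaults.hcl for the actual error.
Kind     = "service-defaults"
Name     = "api"
Protocol = "http"

EnvoyExtensions = [
  {
    Name = "builtin/ext-auth"   # typo: no such extension, rejected on write
  }
]
```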
```shell
# Consul CLI - the write is rejected and the error prints to the console
consul config write apps/4-troubleshooting/config-err-service-defaults.hcl

# K8s CRD - the error shows up on the resource
kubectl apply -f apps/4-troubleshooting/config-err-service-defaults.yaml
kubectl describe service-defaults/api
```
## Runtime Errors

```shell
vi apps/4-troubleshooting/runtime-err-service-defaults.yaml
```
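The real file isn't shown here. One plausible shape, matching the "extension requires an upstream but no upstream is defined" example above: the entry validates and syncs, but the gRPC target names a service that isn't an upstream of `api`, so the extension can't be applied at runtime. All field values below are assumptions.

```yaml
# Hypothetical sketch - see the real
# apps/4-troubleshooting/runtime-err-service-defaults.yaml for the actual error.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: api
spec:
  protocol: http
  envoyExtensions:
    - name: builtin/ext-authz
      arguments:
        proxyType: connect-proxy
        config:
          grpcService:
            target:
              service:
                name: no-such-service   # not an upstream of `api`
```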
```shell
kubectl apply -f apps/4-troubleshooting/runtime-err-service-defaults.yaml
kubectl describe service-defaults/api   # the resource syncs without errors

# Diagnosis
## Try some curl commands from `web`.
## If the extensions were working correctly, we would expect 403 responses.
## Spoiler: we're going to get 200 responses.
## You may need to try a couple of times to clear the response cache.
curl -w "\n%{http_code}\n" 'http://api.default.svc.cluster.local/' -d '1%27%20ORDER%20BY%203--%2B'
curl -XPOST 'http://api.default.svc.cluster.local/'

# dump the Envoy config for the HTTP filters on the `public_listener`
curl -sS localhost:19000/config_dump | jq --raw-output '.configs[2].dynamic_listeners[] | .active_state.listener.filter_chains[].filters[] | select(.name == "envoy.filters.network.http_connection_manager") | select(.typed_config.route_config.name == "public_listener") | .typed_config.http_filters'

# Consul server logs
kubectl logs consul-server-0

# `consul-dataplane` logs
kubectl logs $API_APP -c consul-dataplane
```
## Restore valid configuration

```shell
kubectl apply -f apps/4-troubleshooting/service-defaults.yaml
```
## Clean up the demo

```shell
kind delete cluster
docker stop kind-registry
```