Local development with Kubernetes
This page describes how to build and deploy Corda 5 to a local Kubernetes cluster for the purposes of Corda development.
To create a cluster, do one of the following:
- macOS and Windows users — follow the installation steps for Docker Desktop below.
- Linux users — follow the installation steps for minikube below.
For Docker Desktop:
- Install Docker Desktop.
- Enable Kubernetes in Preferences.
- Configure your Kubernetes cluster with at least 6 CPUs and 8 GB RAM:
  - For macOS, configure the resources in the Docker Desktop Preferences.
  - For Windows, configure the WSL settings in the .wslconfig file.
For minikube:
- Install minikube.
- Start minikube with 8 GB of memory and 6 CPUs:
minikube start --memory 8000 --cpus 6
- You may additionally want to add the following alias (check that you have not already aliased kubectl):
alias kubectl="minikube kubectl --"
- Activate CLI completion:
# kubectl completion --help
source <(kubectl completion bash)
Then, for either cluster type:
- Install the Helm CLI.
- Activate CLI completion:
# helm completion --help
source <(helm completion bash)
If you have multiple Kubernetes clusters, ensure that you are targeting the correct context. You can list contexts you have defined with:
kubectl config get-contexts
The current context is marked with an asterisk. You can switch context, for example:
kubectl config use-context docker-desktop
If you are using Docker Desktop, you can also switch context via the Kubernetes sub-menu.
To create a namespace to contain your Corda deployment:
kubectl create namespace corda
The commands that follow all explicitly specify the namespace to use. However, you can reduce the length of your commands by switching the Kubernetes context to use the newly created namespace:
kubectl config set-context --current --namespace=corda
Install the kubectx and kubens tools for an easy way to switch context and namespace from the command line.
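For example, once both tools are installed, switching becomes a single short command (illustrative usage, matching the context and namespace used on this page):
kubectx docker-desktop
kubens corda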
Corda requires PostgreSQL and Kafka instances as prerequisites. One option to obtain these in a non-production environment is via the umbrella Helm chart in the corda/corda-dev-helm GitHub repository. Clone the repository, change to that directory, and execute the following commands:
Bash:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm dependency build charts/corda-dev
helm upgrade --install prereqs -n corda \
charts/corda-dev \
--render-subchart-notes \
--timeout 10m \
--wait
PowerShell:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm dependency build charts/corda-dev
helm upgrade --install prereqs -n corda `
charts/corda-dev `
--render-subchart-notes `
--timeout 10m `
--wait
The helm repo add and helm dependency build commands pull the child Kafka and PostgreSQL charts provided by Bitnami. On the helm install command, the --wait option ensures all of the pods are ready before returning, and the --render-subchart-notes option gives you a brief overview of the connection details. The timeout is set to 10 minutes to allow time to pull the images. The process should take significantly less time than this on subsequent installs.
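Once the command returns, you can optionally confirm that the release deployed and that its pods are ready:
helm list -n corda
kubectl get pods -n corda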
If you’re using minikube, configure your shell to use the Docker daemon inside minikube so that built images are available directly to the cluster:
Bash:
eval $(minikube docker-env)
PowerShell:
minikube docker-env --shell=powershell | Invoke-Expression
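To check that the shell is now pointing at the daemon inside minikube, list the running containers; with the Docker runtime this typically includes the cluster's own system containers, such as kube-apiserver:
docker ps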
Assuming you are in the top level of the cloned corda/corda-runtime-os repository, run the following command to rebuild all of the images:
./gradlew publishOSGiImage
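You can optionally confirm that the images were built into the active Docker daemon. The exact image names depend on your build configuration, so the filter below is only an assumption:
docker image ls | grep -i corda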
- There is a values.yaml file at the root of the corda-runtime-os repository that overrides the default values in the Corda Helm chart. These values configure the chart to use the images you just built and specify the location of the Kafka and PostgreSQL instances created by the corda-dev Helm chart. They also set the initial admin user password to admin, as currently required for the end-to-end tests to pass.
- Install the chart by running the following from the root of the corda-runtime-os repository:
Bash:
helm install corda -n corda \
charts/corda \
--values values.yaml \
--wait
PowerShell:
helm install corda -n corda `
charts/corda `
--values values.yaml `
--wait
If the install times out, it indicates that not all of the worker pods reached the ready state. Use the following command to list the pods and their current state:
kubectl get pods -n corda
If a particular pod is failing to start, run the following command to get more details using the name of the pod from the previous output:
kubectl describe pod -n corda corda-rpc-worker-8f9f5565-wkzgq
If the pod is continually restarting, it is likely that Kubernetes is killing it because it does not reach a healthy state. Check the pod logs, for example:
kubectl logs -n corda corda-rpc-worker-8f9f5565-wkzgq
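It can also help to review recent Kubernetes events for the namespace, sorted by time:
kubectl get events -n corda --sort-by='.lastTimestamp'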
For more information about these commands, see View worker logs.
To follow the logs for a specific worker pod:
kubectl logs -f -n corda corda-rpc-worker-69f9dbcc97-ndllq
To retrieve a list of the pods:
kubectl get pods -n corda
To enable command completion and allow tab-completion of the pod name:
kubectl completion -h
You can also view the logs for all pods for a deployment. This has the advantage that the name does not change from one release to the next. For example:
kubectl logs -f -n corda deploy/corda-rpc-worker
To get a list of all deployments:
kubectl get deployments -n corda
To follow the logs for all pods in the release, use labels:
kubectl logs -f -n corda -l app.kubernetes.io/instance=corda --prefix=true
For more power (and color), install stern.
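For example, to follow all of the worker pods at once with stern (the pod query is a regular expression; the pattern below assumes the worker pod names shown above):
stern -n corda "corda-.*-worker"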
If you are using minikube, you can use the following command to display the Kubernetes dashboard and then navigate to the logs via Namespaces > Pods > Pod logs:
minikube dashboard --url
- To access the RPC endpoint, forward the port to localhost:8888 by running one of these commands:
Bash:
kubectl port-forward -n corda deploy/corda-rpc-worker 8888 &
PowerShell:
Start-Job -ScriptBlock {kubectl port-forward -n corda deploy/corda-rpc-worker 8888}
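You can then check that the endpoint is reachable. The RPC worker serves HTTPS with a self-signed certificate, so pass -k to curl; note that the API path below is an assumption and may vary between versions:
curl -k https://localhost:8888/api/v1/swagger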
- A custom password may be assigned to the initial admin user. It can be retrieved as follows:
Bash:
kubectl get secret corda-initial-admin-user -n corda \
-o go-template='{{ index .data "password" | base64decode }}'
PowerShell:
kubectl get secret corda-initial-admin-user -n corda `
-o go-template='{{ index .data \"password\" | base64decode }}'
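For example, in Bash you can capture the password in a shell variable for use in later commands; the API call at the end is a hypothetical illustration and its path may differ in your version:
PASSWORD=$(kubectl get secret corda-initial-admin-user -n corda \
-o go-template='{{ index .data "password" | base64decode }}')
curl -k -u "admin:${PASSWORD}" https://localhost:8888/api/v1/user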
- From the root directory of the corda/corda-runtime-os repository, run this Gradle task to execute the E2E tests:
./gradlew :applications:workers:release:rpc-worker:e2eTest
To make a change to a single worker image, you can redeploy the worker without recreating the entire installation. For example, to rebuild the RPC worker image:
- Run this command:
./gradlew :applications:workers:release:rpc-worker:publishOSGiImage
- List the pods (as described in View worker logs) and then use the name of the current RPC worker pod to kill it. For example:
kubectl delete pod -n corda corda-rpc-worker-69f9dbcc97-ndllq
When Kubernetes restarts the pod, it picks up the newly built Docker image.
This example shows how to connect the IntelliJ debugger to the corda-rpc-worker pod.
By default, debug is not enabled for any of the pods. You must also configure Corda to only create a single replica of the worker to guarantee that work is handled by the pod you are attached to.
- There is a debug.yaml file in the root of the corda-runtime-os repository. Uncomment the lines to enable debugging for the worker you are interested in. For example:
workers:
  rpc:
    replicaCount: 1
    debug:
      enabled: true
- (Re)install the Helm chart, specifying both values.yaml and debug.yaml, as follows:
Bash:
helm upgrade --install corda -n corda \
charts/corda \
--values values.yaml \
--values debug.yaml \
--wait
PowerShell:
helm upgrade --install corda -n corda `
charts/corda `
--values values.yaml `
--values debug.yaml `
--wait
- Expose port 5005 from the pod to localhost:
Bash:
kubectl port-forward -n corda deploy/corda-rpc-worker 5005 &
PowerShell:
Start-Job -ScriptBlock {kubectl port-forward -n corda deploy/corda-rpc-worker 5005}
This command uses the name of the deployment because, unlike the pod name, it stays the same from one Helm release to the next. It does, however, just pick one pod in the deployment at random and attach the debugger to that. That is not an issue in this example, as we have configured the number of replicas as 1.
- To connect IntelliJ to the debug port:
a. Click Run > Edit Configurations.
The Run/Debug configurations window is displayed.
b. Click the plus (+) symbol and select Remote JVM Debug.
c. Enter a Name and Port Number.
d. Click OK.
Note: To permit debugging without restarting the process, startup, liveness, and readiness probes are disabled when debug is enabled.
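Behind the scenes, Remote JVM Debug attaches over JDWP; with debug enabled, the worker JVM listens on port 5005 with an agent argument along the lines of the following (shown for reference only; the Helm chart controls the actual flags):
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005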
IntelliJ users may also be interested in the Cloud Code plugin, which enables you to interact with Kubernetes without leaving your IDE.
Use the debug.yaml file in the root of the corda-runtime-os repository when installing the Helm chart:
Bash:
helm upgrade --install corda -n corda \
charts/corda \
--values values.yaml \
--values debug.yaml \
--wait
PowerShell:
helm upgrade --install corda -n corda `
charts/corda `
--values values.yaml `
--values debug.yaml `
--wait
Note: The verification impacts performance and can be turned off, while still using the content of debug.yaml, by setting the flow.verifyInstrumentation property to false or removing it entirely.
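For example, a hypothetical override (check debug.yaml itself for the actual location of the property, which may differ):
workers:
  flow:
    verifyInstrumentation: false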
To connect to the cluster DB from tooling on your local environment, do the following:
- Port forward the PostgreSQL pod. For example:
Bash:
kubectl port-forward -n corda statefulset/prereqs-postgresql 5434:5432 &
PowerShell:
Start-Job -ScriptBlock {kubectl port-forward -n corda statefulset/prereqs-postgresql 5434:5432}
- Fetch the superuser’s password from the Kubernetes secret:
Bash:
kubectl get secret prereqs-postgresql -n corda \
-o go-template='{{ index .data "postgres-password" | base64decode }}'
PowerShell:
kubectl get secret prereqs-postgresql -n corda `
-o go-template='{{ index .data \"postgres-password\" | base64decode }}'
- Connect to the DB using your preferred database administration tool with the following properties:
  - Host — localhost
  - Port — 5434
  - Database — cordacluster
  - User — postgres
  - Password — as determined above
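For example, using the psql command-line client (assuming you have it installed locally; you are prompted for the password fetched above):
psql -h localhost -p 5434 -U postgres -d cordacluster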
If using Telepresence, you do not require the port forwarding; simply connect using the hostname prereqs-postgresql.corda.
This example connects a Kafka client from outside the cluster to Kafka running under Kubernetes.
You cannot simply port forward from the Kafka pods to localhost, as the bootstrap servers would advertise the internal cluster addresses. We suggest one of the following options:
- Use the Telepresence tool to route traffic from your local machine to the Kubernetes cluster using the internal cluster addresses.
- Reconfigure Kafka to expose an external listener.
- Install Telepresence.
- Connect Telepresence and check that you are connected to the right cluster:
telepresence connect
telepresence status
Kafka can now be accessed using the Kubernetes service name prereqs-kafka.corda. For example, to create a test connection from a Kafka producer outside the cluster:
kafka-console-producer --bootstrap-server prereqs-kafka.corda:9092 --topic test
To stop the Telepresence connection:
telepresence quit
- Create an expose.yaml file containing the necessary configuration for the Kafka sub-chart to expose an external listener:
kafka:
  externalAccess:
    enabled: true
    service:
      type: LoadBalancer
      ports:
        external: 9094
    autoDiscovery:
      enabled: true
  serviceAccount:
    create: true
  rbac:
    create: true
- If using minikube with the Docker driver, run the following command in another terminal to allow resolution of the IP address for the load balancer:
minikube tunnel
- Install/upgrade the prereqs Helm release with the new overrides, from your corda/corda-dev-helm directory:
Bash:
helm upgrade --install prereqs -n corda \
charts/corda-dev \
--values expose.yaml \
--render-subchart-notes \
--timeout 10m \
--wait
PowerShell:
helm upgrade --install prereqs -n corda `
charts/corda-dev `
--values expose.yaml `
--render-subchart-notes `
--timeout 10m `
--wait
For minikube and Kubernetes under Docker Desktop, you should now be able to access Kafka via the bootstrap server localhost:9094. For example, to create a test connection from a Kafka producer outside the cluster, run this command:
kafka-console-producer --bootstrap-server localhost:9094 --topic test
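To verify that messages round-trip, you can attach a consumer to the same topic from a second terminal:
kafka-console-consumer --bootstrap-server localhost:9094 --topic test --from-beginning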
Kafdrop provides an (insecure) web UI for browsing the contents of a Kafka cluster. To deploy Kafdrop, clone the Kafdrop repository, change into that directory, and make corda your default namespace before running the command to deploy the container:
git clone https://github.com/obsidiandynamics/kafdrop && cd kafdrop
kubectl config set-context --current --namespace=corda
helm upgrade --install kafdrop chart \
--set kafka.brokerConnect=prereqs-kafka:9092 \
--set kafka.properties="$(echo -e "security.protocol=SSL\nssl.truststore.type=PEM" | base64)" \
--set kafka.truststore="$(kubectl get secret prereqs-kafka-0-tls -o go-template='{{ index .data "ca.crt" }}')" \
-n corda
Now port forward the Kafdrop service so that you can connect to it on localhost. If you are using Telepresence, you do not need this step:
kubectl port-forward -n corda svc/kafdrop 9000:9000 &
You should now be able to connect to Kafdrop on http://localhost:9000/.
The quickest route to clean up is to delete the entire Kubernetes namespace:
kubectl delete ns corda
Alternatively, you can clean up the Helm releases, pre-install jobs, and the persistent volumes created by the pre-requisites as follows:
helm delete corda -n corda
helm delete prereqs -n corda
kubectl delete job --all -n corda
kubectl delete pvc --all -n corda
Usually the above delete pvc command also deletes the persistent volumes, but not always. You can check with:
kubectl get pv
You may have to delete some volumes explicitly. Assuming that this is the only K8S cluster you are running, you can delete all persistent volumes with this command. Only run this command if you are sure you want to delete all volumes.
kubectl delete pv --all
- Cloud Code plugin for Kubernetes in IntelliJ
- stern for following logs in multiple containers
- kubectx and kubens for switching Kubernetes context and namespace
- Lens for a shiny UI for interacting with your cluster
- k9s for a shiny CLI for interacting with your cluster