From 26ecd4f05a7ac4801bf0a498b31051b0626951e6 Mon Sep 17 00:00:00 2001 From: Ahmed AbouZaid <6760103+aabouzaid@users.noreply.github.com> Date: Mon, 8 Apr 2024 19:13:27 +0200 Subject: [PATCH] docs(self-managed): Upgrade instructions for Helm chart 10.0.0 - Camunda 8.5 (#3530) * tidy up * Update upgrade.md Signed-off-by: Ahmed AbouZaid <6760103+aabouzaid@users.noreply.github.com> --------- Signed-off-by: Ahmed AbouZaid <6760103+aabouzaid@users.noreply.github.com> Co-authored-by: christinaausley <84338309+christinaausley@users.noreply.github.com> --- .../troubleshooting/troubleshooting.md | 4 +- .../update-guide/840-to-850.md | 2 +- .../deploy/amazon/amazon-eks/dual-region.md | 36 +- .../deploy/amazon/amazon-eks/eks-helm.md | 32 +- .../setup/deploy/amazon/amazon-eks/irsa.md | 33 +- .../setup/deploy/amazon/aws-marketplace.md | 63 +- .../setup/deploy/azure/microsoft-aks.md | 2 +- .../deploy/local/local-kubernetes-cluster.md | 14 +- .../self-managed/setup/deploy/local/manual.md | 18 +- .../deploy/openshift/redhat-openshift.md | 120 ++-- .../self-managed/setup/deploy/other/docker.md | 8 +- .../setup/guides/air-gapped-installation.md | 48 +- .../setup/guides/ingress-setup.md | 32 +- .../guides/multi-namespace-deployment.md | 24 +- docs/self-managed/setup/guides/upgrade.md | 625 ------------------ .../setup/guides/using-existing-keycloak.md | 5 +- docs/self-managed/setup/install.md | 26 +- docs/self-managed/setup/upgrade.md | 278 ++++---- .../platforms/microsoft-aks.md | 2 +- .../platform-deployment/troubleshooting.md | 4 +- .../platforms/microsoft-aks.md | 2 +- .../platform-deployment/troubleshooting.md | 4 +- .../platforms/microsoft-aks.md | 2 +- .../platform-deployment/troubleshooting.md | 4 +- .../platforms/microsoft-aks.md | 2 +- .../platform-deployment/troubleshooting.md | 4 +- 26 files changed, 367 insertions(+), 1027 deletions(-) delete mode 100644 docs/self-managed/setup/guides/upgrade.md diff --git a/docs/self-managed/operational-guides/troubleshooting/troubleshooting.md b/docs/self-managed/operational-guides/troubleshooting/troubleshooting.md index 68e8cb6e530..627d4a872c5 100644 --- a/docs/self-managed/operational-guides/troubleshooting/troubleshooting.md +++ b/docs/self-managed/operational-guides/troubleshooting/troubleshooting.md @@ -34,10 +34,10 @@ global: ## Zeebe Ingress (gRPC) -Zeebe requires an Ingress controller that supports `gRPC` which is built on top of `HTTP/2` transport layer. Therefore, to expose Zeebe-Gateway externally, you need the following: +Zeebe requires an Ingress controller that supports `gRPC` which is built on top of `HTTP/2` transport layer. Therefore, to expose Zeebe Gateway externally, you need the following: 1. An Ingress controller that supports `gRPC` ([ingress-nginx controller](https://github.com/kubernetes/ingress-nginx) supports it out of the box). -2. TLS (HTTPS) via [Application-Layer Protocol Negotiation (ALPN)](https://www.rfc-editor.org/rfc/rfc7301.html) enabled in the Zeebe-Gateway Ingress object. +2. TLS (HTTPS) via [Application-Layer Protocol Negotiation (ALPN)](https://www.rfc-editor.org/rfc/rfc7301.html) enabled in the Zeebe Gateway Ingress object. 
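For illustration, a minimal sketch of how these two requirements can be expressed through the chart's Helm values, assuming the ingress-nginx controller is already installed and that `zeebe.example.com` and `zeebe-c8-tls` are placeholders for your own hostname and TLS secret:

```shell
# Sketch only: enable a gRPC-capable Ingress for the Zeebe Gateway.
# The hostname and TLS secret name below are placeholders, not chart defaults.
helm upgrade --install camunda camunda/camunda-platform \
  --namespace camunda \
  --set zeebeGateway.ingress.enabled=true \
  --set zeebeGateway.ingress.className=nginx \
  --set zeebeGateway.ingress.host="zeebe.example.com" \
  --set zeebeGateway.ingress.tls.enabled=true \
  --set zeebeGateway.ingress.tls.secretName=zeebe-c8-tls
```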
However, according to the official Kubernetes documentation about [Ingress TLS](https://kubernetes.io/docs/concepts/services-networking/ingress/#tls): diff --git a/docs/self-managed/operational-guides/update-guide/840-to-850.md b/docs/self-managed/operational-guides/update-guide/840-to-850.md index 7d996891129..1149ea472b7 100644 --- a/docs/self-managed/operational-guides/update-guide/840-to-850.md +++ b/docs/self-managed/operational-guides/update-guide/840-to-850.md @@ -9,7 +9,7 @@ The following sections explain which adjustments must be made to migrate from Ca ## Helm chart :::caution Breaking changes -The Camunda Helm chart v10.0.0 that comes with Camunda 8.5 has major changes in the values file structure. Update the values keys before starting the chart upgrade. +The Camunda Helm chart v10.0.0 has major changes in the values file structure. Follow the upgrade steps for each component before starting the chart upgrade. ::: Carefully follow the [upgrade instructions](/self-managed/setup/upgrade.md#v1000) to upgrade from Camunda Helm chart v9.x.x to Camunda Helm chart v10.x.x. diff --git a/docs/self-managed/setup/deploy/amazon/amazon-eks/dual-region.md b/docs/self-managed/setup/deploy/amazon/amazon-eks/dual-region.md index 992403bf135..b7c56c34950 100644 --- a/docs/self-managed/setup/deploy/amazon/amazon-eks/dual-region.md +++ b/docs/self-managed/setup/deploy/amazon/amazon-eks/dual-region.md @@ -55,7 +55,7 @@ Additionally, it is recommended to manifest those changes for future interaction 1. Git clone or fork the repository [c8-multi-region](https://github.com/camunda/c8-multi-region): -```bash +```shell git clone https://github.com/camunda/c8-multi-region.git ``` @@ -75,13 +75,13 @@ In addition to namespaces for Camunda installations, create the namespaces for f 4. Execute the script via the following command: -```bash +```shell . ./export_environment_prerequisites.sh ``` The dot is required to export those variables to your shell and not a spawned subshell. -```bash reference +```shell https://github.com/camunda/c8-multi-region/blob/main/aws/dual-region/scripts/export_environment_prerequisites.sh ``` @@ -201,7 +201,7 @@ To ease working with two clusters, create or update your local `kubeconfig` to c Update or create your kubeconfig via the [AWS CLI](https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html): -```bash +```shell # the alias allows for easier context switching in kubectl aws eks --region $REGION_0 update-kubeconfig --name $CLUSTER_0 --alias $CLUSTER_0 aws eks --region $REGION_1 update-kubeconfig --name $CLUSTER_1 --alias $CLUSTER_1 @@ -221,14 +221,14 @@ You are configuring the CoreDNS from the cluster in **Region 0** to resolve cert 1. Expose `kube-dns`, the in-cluster DNS resolver via an internal load-balancer in each cluster: -```bash +```shell kubectl --context $CLUSTER_0 apply -f https://raw.githubusercontent.com/camunda/c8-multi-region/main/aws/dual-region/kubernetes/internal-dns-lb.yml kubectl --context $CLUSTER_1 apply -f https://raw.githubusercontent.com/camunda/c8-multi-region/main/aws/dual-region/kubernetes/internal-dns-lb.yml ``` 2. Execute the script [generate_core_dns_entry.sh](https://github.com/camunda/c8-multi-region/blob/main/aws/dual-region/scripts/generate_core_dns_entry.sh) in the folder `aws/dual-region/scripts/` of the repository to help you generate the CoreDNS config. Make sure that you have previously exported the [environment prerequisites](#environment-prerequisites) since the script builds on top of it. 
-```bash +```shell ./generate_core_dns_entry.sh ``` @@ -243,7 +243,7 @@ kubectl --context $CLUSTER_1 apply -f https://raw.githubusercontent.com/camunda/ For illustration purposes only. These values will not work in your environment. ::: -```bash +```shell ./generate_core_dns_entry.sh Please copy the following between ### Cluster 0 - Start ### and ### Cluster 0 - End ### @@ -354,7 +354,7 @@ data: 5. Check that CoreDNS has reloaded for the changes to take effect before continuing. Make sure it contains `Reloading complete`: -```bash +```shell kubectl --context $CLUSTER_0 logs -f deployment/coredns -n kube-system kubectl --context $CLUSTER_1 logs -f deployment/coredns -n kube-system ``` @@ -365,7 +365,7 @@ The script [test_dns_chaining.sh](https://github.com/camunda/c8-multi-region/blo 1. Execute the [test_dns_chaining.sh](https://github.com/camunda/c8-multi-region/blob/main/aws/dual-region/scripts/test_dns_chaining.sh). Make sure you have previously exported the [environment prerequisites](#environment-prerequisites) as the script builds on top of it. -```bash +```shell ./test_dns_chaining.sh ``` @@ -381,20 +381,20 @@ You can pull the data from Terraform since you exposed those via `output.tf`. 1. From the Terraform code location `aws/dual-region/terraform`, execute the following to export the access keys to environment variables. This will allow an easier creation of the Kubernetes secret via the command line: -```bash +```shell export AWS_ACCESS_KEY_ES=$(terraform output -raw s3_aws_access_key) export AWS_SECRET_ACCESS_KEY_ES=$(terraform output -raw s3_aws_secret_access_key) ``` 2. From the folder `aws/dual-region/scripts` of the repository, execute the script [create_elasticsearch_secrets.sh](https://github.com/camunda/c8-multi-region/blob/main/aws/dual-region/scripts/create_elasticsearch_secrets.sh). This will use the exported environment variables from **Step 1** to create the required secret within the Camunda namespaces. Those have previously been defined and exported via the [environment prerequisites](#environment-prerequisites). -```bash +```shell ./create_elasticsearch_secrets.sh ``` 3. Unset environment variables to reduce the risk of potential exposure. The script is spawned in a subshell and can't modify the environment variables without extra workarounds: -```bash +```shell unset AWS_ACCESS_KEY_ES unset AWS_SECRET_ACCESS_KEY_ES ``` @@ -462,7 +462,7 @@ The base `camunda-values.yml` in `aws/dual-region/kubernetes` requires adjustmen 1. The bash script [generate_zeebe_helm_values.sh](https://github.com/camunda/c8-multi-region/blob/main/aws/dual-region/scripts/generate_zeebe_helm_values.sh) in the repository folder `aws/dual-region/scripts/` helps generate those values. You only have to copy and replace them within the base `camunda-values.yml`. It will use the exported environment variables of the [environment prerequisites](#environment-prerequisites) for namespaces and regions. -```bash +```shell ./generate_zeebe_helm_values.sh # It will ask you to provide the following values @@ -478,7 +478,7 @@ The base `camunda-values.yml` in `aws/dual-region/kubernetes` requires adjustmen For illustration purposes only. These values will not work in your environment. 
::: -```bash +```shell ./generate_zeebe_helm_values.sh Enter Zeebe cluster size (total number of Zeebe brokers in both Kubernetes clusters): 8 @@ -507,7 +507,7 @@ Use the following to set the environment variable ZEEBE_BROKER_EXPORTERS_ELASTIC From the terminal context of `aws/dual-region/kubernetes`, execute the following: -```bash +```shell helm install $HELM_RELEASE_NAME camunda/camunda-platform \ --version $HELM_CHART_VERSION \ --kube-context $CLUSTER_0 \ @@ -527,13 +527,13 @@ helm install $HELM_RELEASE_NAME camunda/camunda-platform \ 1. Open a terminal and port-forward the Zeebe Gateway via `kubectl` from one of your clusters. Zeebe is stretching over both clusters and is `active-active`, meaning it doesn't matter which Zeebe Gateway to use to interact with your Zeebe cluster. -```bash +```shell kubectl --context "$CLUSTER_0" -n $CAMUNDA_NAMESPACE_0 port-forward services/$HELM_RELEASE_NAME-zeebe-gateway 26500:26500 ``` 2. Open another terminal and use [zbctl](../../../../../apis-tools/cli-client/cli-get-started.md) to print the Zeebe cluster status: -```bash +```shell zbctl status --insecure --address localhost:26500 ``` @@ -543,7 +543,7 @@ zbctl status --insecure --address localhost:26500 Example output -```bash +```shell Cluster size: 8 Partitions count: 8 Replication factor: 4 diff --git a/docs/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md b/docs/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md index 3e893894f22..06f51fca684 100644 --- a/docs/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md +++ b/docs/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md @@ -183,7 +183,7 @@ The following makes use of the [combined ingress setup](/self-managed/setup/guid :::warning -Publicly exposing the Zeebe Gateway without authorization enabled can lead to severe security risks. Consider disabling the ingress for the Zeebe Gateway by setting the `zeebe-gateway.ingress.enabled` to `false`. +Publicly exposing the Zeebe Gateway without authorization enabled can lead to severe security risks. Consider disabling the ingress for the Zeebe Gateway by setting the `zeebeGateway.ingress.enabled` to `false`. By default, authorization is enabled to ensure secure access to Zeebe. Typically, only internal components need direct access, making it unnecessary to expose Zeebe externally. 
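If the gateway only needs to be reachable from inside the cluster, a minimal sketch of that hardened alternative is to leave the gRPC Ingress switched off while keeping the rest of the install command below unchanged:

```shell
# Sketch only: keep the Zeebe Gateway Ingress disabled so gRPC traffic stays cluster-internal.
helm upgrade --install camunda camunda/camunda-platform \
  --version $CAMUNDA_HELM_CHART_VERSION \
  --namespace camunda \
  --set zeebeGateway.ingress.enabled=false
```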
@@ -196,11 +196,11 @@ helm upgrade --install \ --version $CAMUNDA_HELM_CHART_VERSION \ --namespace camunda \ --create-namespace \ - --set identity.keycloak.postgresql.enabled=false \ - --set identity.keycloak.externalDatabase.host=$DB_HOST \ - --set identity.keycloak.externalDatabase.user=$PG_USERNAME \ - --set identity.keycloak.externalDatabase.password=$PG_PASSWORD \ - --set identity.keycloak.externalDatabase.database=$DEFAULT_DB_NAME \ + --set identityKeycloak.postgresql.enabled=false \ + --set identityKeycloak.externalDatabase.host=$DB_HOST \ + --set identityKeycloak.externalDatabase.user=$PG_USERNAME \ + --set identityKeycloak.externalDatabase.password=$PG_PASSWORD \ + --set identityKeycloak.externalDatabase.database=$DEFAULT_DB_NAME \ --set global.ingress.enabled=true \ --set global.ingress.host=$DOMAIN_NAME \ --set global.ingress.tls.enabled=true \ @@ -215,11 +215,11 @@ helm upgrade --install \ --set operate.contextPath="/operate" \ --set tasklist.contextPath="/tasklist" \ --set optimize.contextPath="/optimize" \ - --set zeebe-gateway.ingress.enabled=true \ - --set zeebe-gateway.ingress.host="zeebe.$DOMAIN_NAME" \ - --set zeebe-gateway.ingress.tls.enabled=true \ - --set zeebe-gateway.ingress.tls.secretName=zeebe-c8-tls \ - --set-string 'zeebe-gateway.ingress.annotations.kubernetes\.io\/tls-acme=true' + --set zeebeGateway.ingress.enabled=true \ + --set zeebeGateway.ingress.host="zeebe.$DOMAIN_NAME" \ + --set zeebeGateway.ingress.tls.enabled=true \ + --set zeebeGateway.ingress.tls.secretName=zeebe-c8-tls \ + --set-string 'zeebeGateway.ingress.annotations.kubernetes\.io\/tls-acme=true' ``` The annotation `kubernetes.io/tls-acme=true` is [interpreted by cert-manager](https://cert-manager.io/docs/usage/ingress/) and automatically results in the creation of the required certificate request, easing the setup. @@ -234,11 +234,11 @@ helm upgrade --install \ --version $CAMUNDA_HELM_CHART_VERSION \ --namespace camunda \ --create-namespace \ - --set identity.keycloak.postgresql.enabled=false \ - --set identity.keycloak.externalDatabase.host=$DB_HOST \ - --set identity.keycloak.externalDatabase.user=$PG_USERNAME \ - --set identity.keycloak.externalDatabase.password=$PG_PASSWORD \ - --set identity.keycloak.externalDatabase.database=$DEFAULT_DB_NAME + --set identityKeycloak.postgresql.enabled=false \ + --set identityKeycloak.externalDatabase.host=$DB_HOST \ + --set identityKeycloak.externalDatabase.user=$PG_USERNAME \ + --set identityKeycloak.externalDatabase.password=$PG_PASSWORD \ + --set identityKeycloak.externalDatabase.database=$DEFAULT_DB_NAME ``` diff --git a/docs/self-managed/setup/deploy/amazon/amazon-eks/irsa.md b/docs/self-managed/setup/deploy/amazon/amazon-eks/irsa.md index deee27abfe3..b1ed6ffc000 100644 --- a/docs/self-managed/setup/deploy/amazon/amazon-eks/irsa.md +++ b/docs/self-managed/setup/deploy/amazon/amazon-eks/irsa.md @@ -159,23 +159,22 @@ Don't forget to set the `serviceAccountName` of the deployment/statefulset to th For a Helm-based deployment, you can directly configure these settings using Helm values. 
Below is an example of how you can incorporate these settings into your Helm chart deployment: ```yaml -identity: - keycloak: - postgresql: - enabled: false - image: docker.io/camunda/keycloak:23 # use a supported and updated version listed at https://hub.docker.com/r/camunda/keycloak/tags - extraEnvVars: - - name: KEYCLOAK_EXTRA_ARGS - value: "--db-driver=software.amazon.jdbc.Driver --transaction-xa-enabled=false --log-level=INFO,software.amazon.jdbc:INFO" - - name: KEYCLOAK_JDBC_PARAMS - value: "wrapperPlugins=iam" - - name: KEYCLOAK_JDBC_DRIVER - value: "aws-wrapper:postgresql" - externalDatabase: - host: "aurora.rds.your.domain" - port: 5432 - user: keycloak - database: keycloak +identityKeycloak: + postgresql: + enabled: false + image: docker.io/camunda/keycloak:23 # use a supported and updated version listed at https://hub.docker.com/r/camunda/keycloak/tags + extraEnvVars: + - name: KEYCLOAK_EXTRA_ARGS + value: "--db-driver=software.amazon.jdbc.Driver --transaction-xa-enabled=false --log-level=INFO,software.amazon.jdbc:INFO" + - name: KEYCLOAK_JDBC_PARAMS + value: "wrapperPlugins=iam" + - name: KEYCLOAK_JDBC_DRIVER + value: "aws-wrapper:postgresql" + externalDatabase: + host: "aurora.rds.your.domain" + port: 5432 + user: keycloak + database: keycloak ``` :::note diff --git a/docs/self-managed/setup/deploy/amazon/aws-marketplace.md b/docs/self-managed/setup/deploy/amazon/aws-marketplace.md index 536205b6845..ab94f3ef99b 100644 --- a/docs/self-managed/setup/deploy/amazon/aws-marketplace.md +++ b/docs/self-managed/setup/deploy/amazon/aws-marketplace.md @@ -30,7 +30,7 @@ eks:ListAddons Lets start with exporting some environment variables: -``` +```shell export REGION= export CLUSTER_NAME= export CLUSTER_VERSION= @@ -47,7 +47,7 @@ We will use these variables for the rest of this guide, so use the same terminal 1. Create an EKS cluster. Save the following template to a file named `cluster_template.yaml`. You may fill out your desired values in this template manually or follow along to prefill some of these values with the environment variables set above. -``` +```yaml apiVersion: eksctl.io/v1alpha5 kind: ClusterConfig @@ -68,23 +68,23 @@ managedNodeGroups: attachPolicy: Version: "2012-10-17" Statement: - - Effect: Allow - Action: - - 'license-manager:CheckoutLicense' - Resource: '*' + - Effect: Allow + Action: + - "license-manager:CheckoutLicense" + Resource: "*" -availabilityZones: ['us-east-1a', 'us-east-1b'] +availabilityZones: ["us-east-1a", "us-east-1b"] ``` 2. The `availabilityZones` section needs to be manually replaced with your availability zones. Replace the variables marked with `$` or use the following command to replace the variables for you: -``` +```shell envsubst < cluster_template.yaml > cluster.yaml ``` This file is then run with the following command: -``` +```shell eksctl create cluster -f cluster.yaml ``` @@ -94,7 +94,7 @@ Expect this command to take around 20 minutes. The following `storageclass` is recommended for increased stability and write-speeds with Camunda. 
Save the following to a file named `ssd-storage-class-aws.yaml`: -``` +```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: @@ -106,7 +106,7 @@ volumeBindingMode: WaitForFirstConsumer Then, run the following: -``` +```shell kubectl apply -f ssd-storage-class-aws.yaml ``` @@ -114,21 +114,21 @@ The next command will set the `ssd storageclass` as the default storage class fo To set the default storage class to the `ssd storageclass`: -``` +```shell kubectl patch storageclass ssd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' ``` Next, we want to build an EBS CSI trust policy so the EKS cluster has the permissions to create `PersistentVolumes` with the new storage class: -``` +```shell export AWS_ACCOUNT_ID=$(aws sts get-caller-identity | grep Account | cut -d ':' -f 2 | tr -d ',' | grep -o "[0-9]*") export AWS_OIDC_ID=$(aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5) ``` Save this file as `ebs-csi-driver-trust-policy-template.json`: -``` +```json { "Version": "2012-10-17", "Statement": [ @@ -151,13 +151,13 @@ Save this file as `ebs-csi-driver-trust-policy-template.json`: Run the following to replace your OIDC ID and your AWS account ID with the environment variables: -``` +```shell envsubst < ebs-csi-driver-trust-policy-template.json > ebs-csi-driver-trust-policy.json ``` This command will create a role that permits your cluster to create persistent volumes: -``` +```shell aws iam create-role \ --role-name AmazonEKS_EBS_CSI_DriverRole_Cluster_$CLUSTER_NAME \ --assume-role-policy-document file://"ebs-csi-driver-trust-policy.json"; @@ -165,13 +165,13 @@ aws iam create-role \ Wait for 20 seconds: -``` +```shell sleep 20 ``` Now, attach a policy with those permissions to the role you just created: -``` +```shell aws iam attach-role-policy \ --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \ --role-name AmazonEKS_EBS_CSI_DriverRole_Cluster_$CLUSTER_NAME @@ -179,7 +179,7 @@ aws iam attach-role-policy \ Create the AWS add-on for the EBS Driver and add it to the cluster: -``` +```shell aws eks create-addon --cluster-name $CLUSTER_NAME --addon-name aws-ebs-csi-driver \ --service-account-role-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/AmazonEKS_EBS_CSI_DriverRole_Cluster_${CLUSTER_NAME} ``` @@ -193,19 +193,19 @@ kubectl annotate serviceaccount ebs-csi-controller-sa \ Restart the EBS CSI Controller so it refreshes the `serviceaccount`. -``` +```shell kubectl rollout restart deployment ebs-csi-controller -n kube-system ``` By default, the IAM OIDC Provider is not enabled. The following command will enable it. This allows the CSI driver to create volumes. See [this eksctl documentation](https://eksctl.io/usage/iamserviceaccounts/) for more information. -``` +```shell eksctl utils associate-iam-oidc-provider --cluster $CLUSTER_NAME --approve --region $REGION ``` ## Install ingress-nginx controller -``` +```shell helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx helm install ingress-nginx ingress-nginx/ingress-nginx ``` @@ -214,7 +214,7 @@ helm install ingress-nginx ingress-nginx/ingress-nginx Save the following as `values_template.yaml`: -``` +```yaml # Chart values for the Camunda 8 Helm chart. # This file deliberately contains only the values that differ from the defaults. 
# For changes and documentation, use your favorite diff tool to compare it with: @@ -284,13 +284,13 @@ zeebeGateway: Then, run the following command to replace the template with the environment variables specified: -``` +```shell envsubst < values_template.yaml > values.yaml ``` Save this file as `values-aws.yaml`. This will ensure all images reference the ones hosted in AWS and do not require any extra credentials to access. -``` +```yaml global: image: registry: 709825985650.dkr.ecr.us-east-1.amazonaws.com @@ -324,13 +324,12 @@ identity: image: repository: camunda/camunda8/identity - keycloak: + identityKeycloak: postgresql: image: registry: 709825985650.dkr.ecr.us-east-1.amazonaws.com repository: camunda/camunda8/postgresql tag: 15.5.0 - image: registry: 709825985650.dkr.ecr.us-east-1.amazonaws.com repository: camunda/camunda8/keycloak @@ -367,7 +366,7 @@ elasticsearch: Create a namespace to put this deployment into, and set the current context into that namespace -``` +```shell kubectl create namespace camunda kubectl config set-context --current --namespace=camunda ``` @@ -376,7 +375,7 @@ kubectl config set-context --current --namespace=camunda Log into the AWS ECR: -``` +```shell aws ecr get-login-password \ --region us-east-1 | helm registry login \ --username AWS \ @@ -387,7 +386,7 @@ aws ecr get-login-password \ Now would be a good time to create a trusted TLS certificate and upload it into the Kubernetes cluster. If you have a certificate ready, you can create a secret named `tls-secret` from it with the following command: -``` +```shell kubectl create secret tls tls-secret --cert= --key= ``` @@ -397,7 +396,7 @@ The `values.yaml` in the previous steps are configured to use a secret named `tl Pull the Helm chart: -``` +```shell mkdir awsmp-chart && cd awsmp-chart helm pull oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/camunda/camunda8/camunda-platform @@ -406,7 +405,7 @@ tar xf $(pwd)/* && find $(pwd) -maxdepth 1 -type f -delete Run the Helm chart: -``` +```shell helm install camunda \ --namespace camunda \ -f ../values.yaml \ diff --git a/docs/self-managed/setup/deploy/azure/microsoft-aks.md b/docs/self-managed/setup/deploy/azure/microsoft-aks.md index f8652cd6b05..d020e960a7e 100644 --- a/docs/self-managed/setup/deploy/azure/microsoft-aks.md +++ b/docs/self-managed/setup/deploy/azure/microsoft-aks.md @@ -35,6 +35,6 @@ should use `Premium SSD` volumes of at least `256 GB` (P15). ### Zeebe Ingress -**Azure Application Gateway Ingress cannot be used as an Ingress for Zeebe/Zeebe-Gateway** because Zeebe requires an Ingress controller that supports `gRPC`. You should use any other Ingress controller that supports `gRPC`, like the [ingress-nginx controller](https://github.com/kubernetes/ingress-nginx). +**Azure Application Gateway Ingress cannot be used as an Ingress for Zeebe/Zeebe Gateway** because Zeebe requires an Ingress controller that supports `gRPC`. You should use any other Ingress controller that supports `gRPC`, like the [ingress-nginx controller](https://github.com/kubernetes/ingress-nginx). Currently, the Azure Application Gateway Ingress controller doesn't support `gRPC`. For more details, follow the upstream [GitHub issue about gRPC/HTTP2 support](https://github.com/Azure/application-gateway-kubernetes-ingress/issues/1015). 
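As a sketch of the recommended alternative, the ingress-nginx controller can be installed into the AKS cluster with Helm; the namespace shown here is illustrative:

```shell
# Sketch only: install the ingress-nginx controller, which supports gRPC, as the cluster's Ingress controller.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace
```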
diff --git a/docs/self-managed/setup/deploy/local/local-kubernetes-cluster.md b/docs/self-managed/setup/deploy/local/local-kubernetes-cluster.md index 3b3f1ed4b91..5e110871e66 100644 --- a/docs/self-managed/setup/deploy/local/local-kubernetes-cluster.md +++ b/docs/self-managed/setup/deploy/local/local-kubernetes-cluster.md @@ -176,18 +176,18 @@ Add the following values to `camunda-platform-core-kind-values.yaml` to allow Ca ```yaml global: ingress: - enabled: true - className: nginx - host: "camunda.local" + enabled: true + className: nginx + host: "camunda.local" operate: - contextPath: "/operate" + contextPath: "/operate" tasklist: - contextPath: "/tasklist" + contextPath: "/tasklist" -zeebe-gateway: -ingress: +zeebeGateway: + ingress: enabled: true className: nginx host: "zeebe.camunda.local" diff --git a/docs/self-managed/setup/deploy/local/manual.md b/docs/self-managed/setup/deploy/local/manual.md index 6782b34217e..77f232fbba2 100644 --- a/docs/self-managed/setup/deploy/local/manual.md +++ b/docs/self-managed/setup/deploy/local/manual.md @@ -34,7 +34,7 @@ Please ensure to check compatability of [supported environments](/reference/supp To run Elasticsearch, execute the following commands: -```bash +```shell cd elasticsearch-* bin/elasticearch ``` @@ -51,7 +51,7 @@ Once you've downloaded a Zeebe distribution, extract it into a folder of your ch To extract the Zeebe distribution and start the broker, **Linux users** can type the following: -```bash +```shell tar -xzf zeebe-distribution-X.Y.Z.tar.gz -C zeebe/ ./bin/broker ``` @@ -74,7 +74,7 @@ Once the Zeebe broker has started, it should produce the following output: To run Zeebe with the Elasticsearch Exporter that is needed for Operate, Tasklist and Optimize to work, execute the following commands: -```bash +```shell cd camunda-cloud-zeebe-* ZEEBE_BROKER_EXPORTERS_ELASTICSEARCH_CLASSNAME=io.camunda.zeebe.exporter.ElasticsearchExporter ./bin/broker ``` @@ -88,7 +88,7 @@ You’ll know Zeebe has started successfully when you see a message similar to t You can test the Zeebe Gateway by asking for the cluster topology with [zbtcl](/apis-tools/cli-client/index.md#usage): -```bash +```shell ./bin/zbctl --insecure status ``` @@ -109,7 +109,7 @@ Brokers: To run Operate, execute the following command: -```bash +```shell cd camunda-cloud-operate-* bin/operate ``` @@ -140,7 +140,7 @@ To update Operate versions, visit the [guide to update guide](/self-managed/oper To run Tasklist, execute the following commands: -```bash +```shell cd camunda-cloud-tasklist-* ./bin/tasklist ``` @@ -155,7 +155,7 @@ You’ll know Tasklist has started successfully when you see messages similar to The Tasklist web interface is available at [http://localhost:8080](http://localhost:8080). 
Note, that this is the same default port as Operate, so you might have to configure Tasklist (or Operate) to use another port: -```bash +```shell cd camunda-cloud-tasklist-* SERVER_PORT=8081 ./bin/tasklist ``` @@ -186,7 +186,7 @@ Consider the following file structure: To start Connectors bundle with all custom Connectors locally, run: -```bash +```shell java -cp "/home/user/bundle-with-connector/*" "io.camunda.connector.runtime.app.ConnectorRuntimeApplication" ``` @@ -210,7 +210,7 @@ Consider the following file structure: To start Connector runtime with all custom Connectors locally, run: -```bash +```shell java -cp "/home/user/runtime-only-with-connector/*" "io.camunda.connector.runtime.app.ConnectorRuntimeApplication" ``` diff --git a/docs/self-managed/setup/deploy/openshift/redhat-openshift.md b/docs/self-managed/setup/deploy/openshift/redhat-openshift.md index ce733ec309c..49f2f0e3f65 100644 --- a/docs/self-managed/setup/deploy/openshift/redhat-openshift.md +++ b/docs/self-managed/setup/deploy/openshift/redhat-openshift.md @@ -131,36 +131,34 @@ elasticsearch: runAsUser: null # omit this section if identity.enabled is false -identity: - # omit this section if identity.keycloak.enabled is false - keycloak: - containerSecurityContext: - runAsUser: null - podSecurityContext: - fsGroup: null - runAsUser: null - postgresql: - # omit this section if identity.keycloak.postgresql.primary.enabled is false - primary: - containerSecurityContext: - runAsUser: null - podSecurityContext: - fsGroup: null - runAsUser: null - # omit this section if identity.keycloak.postgresql.readReplicas.enabled is false - readReplicas: - containerSecurityContext: - runAsUser: null - podSecurityContext: - fsGroup: null - runAsUser: null - # omit this section if identity.keycloak.postgresql.metrics.enabled is false - metrics: - containerSecurityContext: - runAsUser: null - podSecurityContext: - fsGroup: null - runAsUser: null +identityKeycloak: + containerSecurityContext: + runAsUser: null + podSecurityContext: + fsGroup: null + runAsUser: null + postgresql: + # omit this section if identityKeycloak.postgresql.primary.enabled is false + primary: + containerSecurityContext: + runAsUser: null + podSecurityContext: + fsGroup: null + runAsUser: null + # omit this section if identityKeycloak.postgresql.readReplicas.enabled is false + readReplicas: + containerSecurityContext: + runAsUser: null + podSecurityContext: + fsGroup: null + runAsUser: null + # omit this section if identityKeycloak.postgresql.metrics.enabled is false + metrics: + containerSecurityContext: + runAsUser: null + podSecurityContext: + fsGroup: null + runAsUser: null ``` When installing the chart, run the following: @@ -179,7 +177,7 @@ If using a post-renderer, you **must** use the post-renderer whenever you are up While you can use your preferred `post-renderer`, we provide one (included in the chart archive) which requires only `bash` and `sed` to be available locally: -```bash +```shell #!/bin/bash -eu # Expected usage is as an Helm post renderer. 
# Example usage: @@ -221,37 +219,35 @@ elasticsearch: fsGroup: "@@null@@" runAsUser: "@@null@@" -# omit this section if identity.enabled is false -identity: - # omit this section if identity.keycloak.enabled is false - keycloak: - containerSecurityContext: - runAsUser: "@@null@@" - podSecurityContext: - fsGroup: "@@null@@" - runAsUser: "@@null@@" - postgresql: - # omit this section if identity.keycloak.postgresql.primary.enabled is false - primary: - containerSecurityContext: - runAsUser: "@@null@@" - podSecurityContext: - fsGroup: "@@null@@" - runAsUser: "@@null@@" - # omit this section if identity.keycloak.postgresql.readReplicas.enabled is false - readReplicas: - containerSecurityContext: - runAsUser: "@@null@@" - podSecurityContext: - fsGroup: "@@null@@" - runAsUser: "@@null@@" - # omit this section if identity.keycloak.postgresql.metrics.enabled is false - metrics: - containerSecurityContext: - runAsUser: "@@null@@" - podSecurityContext: - fsGroup: "@@null@@" - runAsUser: "@@null@@" + # omit this section if identityKeycloak.enabled is false +identityKeycloak: + containerSecurityContext: + runAsUser: "@@null@@" + podSecurityContext: + fsGroup: "@@null@@" + runAsUser: "@@null@@" + postgresql: + # omit this section if identityKeycloak.postgresql.primary.enabled is false + primary: + containerSecurityContext: + runAsUser: "@@null@@" + podSecurityContext: + fsGroup: "@@null@@" + runAsUser: "@@null@@" + # omit this section if identityKeycloak.postgresql.readReplicas.enabled is false + readReplicas: + containerSecurityContext: + runAsUser: "@@null@@" + podSecurityContext: + fsGroup: "@@null@@" + runAsUser: "@@null@@" + # omit this section if identityKeycloak.postgresql.metrics.enabled is false + metrics: + containerSecurityContext: + runAsUser: "@@null@@" + podSecurityContext: + fsGroup: "@@null@@" + runAsUser: "@@null@@" ``` Now, when installing the chart, you can do so by running the following: diff --git a/docs/self-managed/setup/deploy/other/docker.md b/docs/self-managed/setup/deploy/other/docker.md index d2489982731..1dfd95b1727 100644 --- a/docs/self-managed/setup/deploy/other/docker.md +++ b/docs/self-managed/setup/deploy/other/docker.md @@ -26,8 +26,8 @@ The provided Docker images are supported for production usage only on Linux syst Zeebe is the only component that is often run on its own as a standalone component. In this scenario, it does not need anything else, so a simple `docker run` is sufficient: -```bash -docker run --name zeebe -p 8080:8080 -p 26500-26502:26500-26502 camunda/zeebe:latest +```shell +docker run --name zeebe -p 26500-26502:26500-26502 camunda/zeebe:latest ``` This will give you a single broker node with the following ports exposed: @@ -68,7 +68,7 @@ Camunda's private Docker registry. To pull the images you first need to log in using the credentials you received from Camunda: -```bash +```shell $ docker login registry.camunda.cloud Username: your_username Password: ****** @@ -219,6 +219,6 @@ ADD https://repo1.maven.org/maven2/io/camunda/connector/connector-http-json/x.y. 
You can also add a Connector JAR using volumes: -```bash +```shell docker run --rm --name=connectors -d -v $PWD/connector.jar:/opt/app/ camunda/connectors:x.y.z ``` diff --git a/docs/self-managed/setup/guides/air-gapped-installation.md b/docs/self-managed/setup/guides/air-gapped-installation.md index 4952f25ca36..43bbabaf800 100644 --- a/docs/self-managed/setup/guides/air-gapped-installation.md +++ b/docs/self-managed/setup/guides/air-gapped-installation.md @@ -9,7 +9,7 @@ With the dependencies in third-party Docker images and Helm charts, additional s To find out the necessary Docker images for your Helm release, note that the required images depend on the values you specify for your deployment. You can get an overview of all required images by running the following command: -``` +```shell helm repo add camunda https://helm.camunda.io helm repo update helm template camunda/camunda-platform -f values.yaml | grep 'image:' @@ -43,7 +43,7 @@ Please note that all the required Docker images, available on DockerHub's Camund For example, the Docker image of Zeebe can be pulled via DockerHub or via the Camunda's Docker Registry: -```bash +```shell docker pull camunda/zeebe:latest docker pull registry.camunda.cloud/camunda/zeebe:latest ``` @@ -68,14 +68,16 @@ Identity utilizes Keycloak and allows you to manage users, roles, and permission camunda-platform |_ elasticsearch |_ identity - |_ keycloak - |_ postgresql + |_ identityKeycloak + |_ postgresql |_ zeebe + |_ zeebeGateway |_ optimize |_ operate |_ tasklist |_ connectors - |_ postgresql + |_ webModeler + |_ webModelerPostgresql ``` - Keycloak is a dependency for Camunda Identity and PostgreSQL is a dependency for Keycloak. @@ -89,12 +91,10 @@ The values for the dependencies Keycloak and PostgreSQL can be set in the same h ```yaml identity: [identity values] - keycloak: - [keycloak values] - postgresql: - [postgresql values] -postgresql: - [postgresql values] +identityKeycloak: + [keycloak values] + postgresql: + [postgresql values] ``` ## Push Docker images to your repository @@ -103,13 +103,13 @@ All the [required Docker images](#required-docker-images) need to be pushed to y 1. Tag your image using the following command (replace ``, ``, and `` with the corresponding values.) -``` +```shell docker tag example.jfrog.io/camunda/: ``` 2. Push your image using the following command: -``` +```shell docker push example.jfrog.io/camunda/: ``` @@ -122,7 +122,7 @@ For details about hosting options, visit the [chart repository guide](https://he You must add your Helm chart repositories to use the charts: -``` +```shell helm repo add camunda https://example.jfrog.io/artifactory/api/helm/camunda-platform helm repo add elastic https://example.jfrog.io/artifactory/api/helm/elastic helm repo add bitnami https://example.jfrog.io/artifactory/api/helm/bitnami @@ -139,7 +139,7 @@ zeebe: repository: example.jfrog.io/camunda/zeebe # e.g. work with the latest versions in development tag: latest -zeebe-gateway: +zeebeGateway: image: repository: example.jfrog.io/camunda/zeebe tag: latest @@ -150,14 +150,14 @@ identity: image: repository: example.jfrog.io/camunda/identity ... - keycloak: +identityKeycloak: + image: + repository: example.jfrog.io/bitnami/keycloak + ... + postgresql: image: - repository: example.jfrog.io/bitnami/keycloak + repository: example.jfrog.io/bitnami/postgres ... - postgresql: - image: - repository: example.jfrog.io/bitnami/postgres - ... 
operate: image: repository: example.jfrog.io/camunda/operate @@ -189,14 +189,14 @@ webModeler: image: repository: camunda/modeler-websockets ... -# only necessary if the PostgreSQL chart dependency is used for Web Modeler -postgresql: +webModelerPostgresql: image: repository: example.jfrog.io/bitnami/postgres + ... ``` Afterwards, you can deploy Camunda using Helm and the custom values file. -``` +```shell helm install my-camunda-platform camunda/camunda-platform -f values.yaml ``` diff --git a/docs/self-managed/setup/guides/ingress-setup.md b/docs/self-managed/setup/guides/ingress-setup.md index 1eba1aa6748..241a5e44211 100644 --- a/docs/self-managed/setup/guides/ingress-setup.md +++ b/docs/self-managed/setup/guides/ingress-setup.md @@ -100,10 +100,10 @@ helm install demo camunda/camunda-platform -f values-combined-ingress.yaml Once deployed, you can access the Camunda 8 components on: -- **Web applications:** `https://camunda.example.com/[identity|operate|optimize|tasklist|modeler|console|zeebe]` +- **Applications:** `https://camunda.example.com/[identity|operate|optimize|tasklist|modeler|console|zeebe]` - _Note_: Web Modeler also exposes a WebSocket endpoint on `https://camunda.example.com/modeler-ws`. This is only used by the application itself and not supposed to be accessed by users directly. - **Keycloak authentication:** `https://camunda.example.com/auth` -- **Zeebe Gateway:** `https://zeebe.camunda.example.com` +- **Zeebe Gateway:** `grpc://zeebe.camunda.example.com` ## Separated Ingress setup @@ -132,7 +132,7 @@ global: redirectUrl: "https://optimize.camunda.example.com" webModeler: redirectUrl: "https://modeler.camunda.example.com" - Console: + console: redirectUrl: "https://console.camunda.example.com" identity: @@ -142,11 +142,11 @@ identity: host: "identity.camunda.example.com" fullURL: "https://identity.camunda.example.com" - keycloak: - ingress: - enabled: true - ingressClassName: nginx - hostname: "keycloak.camunda.example.com" +identityKeycloak: + ingress: + enabled: true + ingressClassName: nginx + hostname: "keycloak.camunda.example.com" operate: ingress: @@ -186,7 +186,7 @@ webModeler: websockets: host: "modeler-ws.camunda.example.com" -Console: +console: ingress: enabled: true className: nginx @@ -210,9 +210,9 @@ helm install demo camunda/camunda-platform -f values-separated-ingress.yaml Once deployed, you can access the Camunda 8 components on: -- **Web applications:** `https://[identity|operate|optimize|tasklist|modeler|console|zeebe].camunda.example.com` +- **Applications:** `https://[identity|operate|optimize|tasklist|modeler|console|zeebe].camunda.example.com` - **Keycloak authentication:** `https://keycloak.camunda.example.com` -- **Zeebe Gateway:** `https://zeebe-grpc.camunda.example.com` +- **Zeebe Gateway:** `grpc://zeebe-grpc.camunda.example.com` ## Ingress controllers @@ -240,11 +240,11 @@ To install this [ingress-nginx controller](https://github.com/kubernetes/ingress ```shell helm install -f ingress_nginx_values.yml \ -ingress-nginx ingress-nginx \ ---repo https://kubernetes.github.io/ingress-nginx \ ---version "4.9.0" \ ---namespace ingress-nginx \ ---create-namespace + ingress-nginx ingress-nginx \ + --repo https://kubernetes.github.io/ingress-nginx \ + --version "4.9.0" \ + --namespace ingress-nginx \ + --create-namespace ``` ## Troubleshooting diff --git a/docs/self-managed/setup/guides/multi-namespace-deployment.md b/docs/self-managed/setup/guides/multi-namespace-deployment.md index cb279603e28..5e0ec5f848c 100644 --- 
a/docs/self-managed/setup/guides/multi-namespace-deployment.md +++ b/docs/self-managed/setup/guides/multi-namespace-deployment.md @@ -22,6 +22,8 @@ global: host: camunda-main.example.com identity: auth: + console: + existingSecret: connectors: existingSecret: operate: @@ -37,7 +39,7 @@ global: existingSecret: zeebe: enabled: false -zeebe-gateway: +zeebeateway: enabled: false operate: enabled: false @@ -53,7 +55,7 @@ elasticsearch: Install Camunda Management cluster with Helm: -```bash +```shell helm install camunda camunda/camunda-platform \ -n camunda-main \ -f camunda-main.yaml @@ -92,13 +94,13 @@ identity: enabled: false webModeler: enabled: false -postgresql: +webModelerPostgresql: enabled: false ``` Then, install as usual: -```bash +```shell helm template camunda camunda/camunda-platform \ -n camunda-team01 \ -f camunda-team01.yaml @@ -137,13 +139,13 @@ identity: enabled: false webModeler: enabled: false -postgresql: +webModelerPostgresql: enabled: false ``` Then, install as usual: -```bash +```shell helm install camunda camunda/camunda-platform \ -n camunda-team02 \ -f camunda-team02.yaml @@ -157,7 +159,7 @@ Update Management deployment to deploy Console Self-Managed. For more details, v Assuming Camunda clusters have been deployed using the above examples, run the following script to get the release information for all deployments. -```bash +```shell DEPLOYMENTS="camunda-main camunda-team01 camunda-team02" for DEPLOYMENT in ${DEPLOYMENTS}; do @@ -206,8 +208,8 @@ console: metrics: http://camunda-tasklist.camunda-team01:80/actuator/prometheus - name: Zeebe Gateway url: - grpc: http://camunda-zeebe-gateway-grpc.camunda-team01:80 - http: http://camunda-zeebe-gateway.camunda-team01:80 + grpc: grpc://zeebe.camunda-team01.example.com + http: http://camunda-team01.example.com/zeebe readiness: http://camunda-zeebe-gateway.camunda-team01:9600/actuator/health/readiness metrics: http://camunda-zeebe-gateway.camunda-team01:9600/actuator/prometheus - name: Zeebe @@ -230,8 +232,8 @@ console: metrics: http://camunda-tasklist.camunda-team02:80/actuator/prometheus - name: Zeebe Gateway url: - grpc: http://camunda-zeebe-gateway.camunda-team02:80 - http: http://camunda-team02.example.com:80 + grpc: grpc://zeebe.camunda-team02.example.com + http: http://camunda-team02.example.com/zeebe readiness: http://camunda-zeebe-gateway.camunda-team02:9600/actuator/health/readiness metrics: http://camunda-zeebe-gateway.camunda-team02:9600/actuator/prometheus - name: Zeebe diff --git a/docs/self-managed/setup/guides/upgrade.md b/docs/self-managed/setup/guides/upgrade.md deleted file mode 100644 index f26e07574a3..00000000000 --- a/docs/self-managed/setup/guides/upgrade.md +++ /dev/null @@ -1,625 +0,0 @@ ---- -id: upgrade -title: "Upgrading Camunda 8 Helm deployment" -sidebar_label: "Upgrade" -description: "To upgrade to a more recent version of the Camunda Helm charts, there are certain things you need to keep in mind." ---- - -To upgrade to a more recent version of the Camunda Helm charts, there are certain things you need to keep in mind. - -:::caution - -Ensure to review the [instructions for a specific version](#version-update-instructions) before staring the actual upgrade. - -::: - -### Upgrading where Identity disabled - -Normally for a Helm upgrade, you run the [Helm upgrade](https://helm.sh/docs/helm/helm_upgrade/) command. 
If you have disabled Camunda Identity and the related authentication mechanism, you should be able to do an upgrade as follows: - -```shell -helm upgrade camunda -``` - -However, if Camunda Identity is enabled (which is the default), the upgrade path is a bit more complex than just running `helm upgrade`. Read the next section to familiarize yourself with the upgrade process. - -### Upgrading where Identity enabled - -If you have installed the Camunda 8 Helm charts before with default values, this means Identity and the related authentication mechanism are enabled. For authentication, the Helm charts generate the secrets randomly if not specified on installation for each web application. If you run `helm upgrade` to upgrade to a newer chart version, you likely will see the following return: - -```shell -helm upgrade camunda-platform-test camunda/camunda-platform -``` - -You likely will see the following error: - -```shell -Error: UPGRADE FAILED: execution error at (camunda-platform/charts/identity/templates/tasklist-secret.yaml:10:22): -PASSWORDS ERROR: You must provide your current passwords when upgrading the release. - Note that even after reinstallation, old credentials may be needed as they may be kept in persistent volume claims. - Further information can be obtained at https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#credential-errors-while-upgrading-chart-releases - - 'global.identity.auth.tasklist.existingSecret' must not be empty, please add '--set global.identity.auth.tasklist.existingSecret=$TASKLIST_SECRET' to the command. To get the current value: - - export TASKLIST_SECRET=$(kubectl get secret --namespace "camunda" "camunda-platform-test-tasklist-identity-secret" -o jsonpath="{.data.tasklist-secret}" | base64 --decode) -``` - -As mentioned, this output returns because secrets are randomly generated with the first Helm installation by default if not further specified. We use a library chart [provided by Bitnami](https://github.com/bitnami/charts/tree/master/bitnami/common) for this. The generated secrets persist on persistent volume claims (PVCs), which are not maintained by Helm. - -If you remove the Helm chart release or do an upgrade, PVCs are not removed nor recreated. On an upgrade, secrets can be recreated by Helm, and could lead to the regeneration of the secret values. This would mean that newly-generated secrets would no longer match with the persisted secrets. To avoid such an issue, Bitnami blocks the upgrade path and prints the help message as shown above. - -In the error message, Bitnami links to their [troubleshooting guide](https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#credential-errors-while-upgrading-chart-releases). However, to avoid confusion, we will step through the troubleshooting process in this guide as well. - -### Secrets extraction - -For a successful upgrade, you first need to extract all secrets which were previously generated. - -:::note -You also need to extract all secrets which were generated for Keycloak, since Keycloak is a dependency of Identity. -::: - -To extract the secrets, use the following code snippet. Make sure to replace `camunda` with your actual Helm release name. 
- -```shell -export TASKLIST_SECRET=$(kubectl get secret "camunda-tasklist-identity-secret" -o jsonpath="{.data.tasklist-secret}" | base64 --decode) -export OPTIMIZE_SECRET=$(kubectl get secret "camunda-optimize-identity-secret" -o jsonpath="{.data.optimize-secret}" | base64 --decode) -export OPERATE_SECRET=$(kubectl get secret "camunda-operate-identity-secret" -o jsonpath="{.data.operate-secret}" | base64 --decode) -export CONNECTORS_SECRET=$(kubectl get secret "camunda-connectors-identity-secret" -o jsonpath="{.data.connectors-secret}" | base64 --decode) -export ZEEBE_SECRET=$(kubectl get secret "camunda-zeebe-identity-secret" -o jsonpath="{.data.zeebe-secret}" | base64 --decode) -export KEYCLOAK_ADMIN_SECRET=$(kubectl get secret "camunda-keycloak" -o jsonpath="{.data.admin-password}" | base64 --decode) -export KEYCLOAK_MANAGEMENT_SECRET=$(kubectl get secret "camunda-keycloak" -o jsonpath="{.data.management-password}" | base64 --decode) -export POSTGRESQL_SECRET=$(kubectl get secret "camunda-postgresql" -o jsonpath="{.data.postgres-password}" | base64 --decode) -``` - -After exporting all secrets into environment variables, run the following upgrade command: - -```shell -helm upgrade camunda camunda/camunda-platform\ - --set global.identity.auth.tasklist.existingSecret=$TASKLIST_SECRET \ - --set global.identity.auth.optimize.existingSecret=$OPTIMIZE_SECRET \ - --set global.identity.auth.operate.existingSecret=$OPERATE_SECRET \ - --set global.identity.auth.connectors.existingSecret=$CONNECTORS_SECRET \ - --set global.identity.auth.zeebe.existingSecret=$ZEEBE_SECRET \ - --set identity.keycloak.auth.adminPassword=$KEYCLOAK_ADMIN_SECRET \ - --set identity.keycloak.auth.managementPassword=$KEYCLOAK_MANAGEMENT_SECRET \ - --set identity.keycloak.postgresql.auth.password=$POSTGRESQL_SECRET -``` - -:::note -If you have specified on the first installation certain values, you have to specify them again on the upgrade either via `--set` or the values file and the `-f` flag. -::: - -For more details on the Keycloak upgrade path, you can also read the [Bitnami Keycloak upgrade guide](https://docs.bitnami.com/kubernetes/apps/keycloak/administration/upgrade/). - -## Version update instructions - -### v9.3.0 - -#### Enabling Console - -When enabling Console for the first time, you may see the following error: - -> Something went wrong -> We're sorry! The following errors were thrown in the backend. 401 jwt audience invalid. expected: console-api - -The default user does not automatically get access to the Console role. - -To add the Console role: - -1. Log in to Identity. -2. Click on the **Users** tab. -3. Select your user. -4. Click **Assigned roles**. -5. Select **Console** to grant full access to Console. -6. Click **Add**. - -You should now be able to log into Console. - -### v9.0.0 - -For full change log, view the Camunda Helm chart [v9.0.0 release notes](https://github.com/camunda/camunda-platform-helm/releases/tag/camunda-platform-9.0.0). - -#### Helm chart - -As of the 8.4 release cycle, the Camunda 8 **Helm chart** version is decoupled from the version of the application. The Helm chart release still follows the applications release cycle, but it has an independent version. (e.g., in the application release cycle 8.4, the chart version is 9.0.0). - -For more details about the applications version included in the Helm chart, review the [full version matrix](https://helm.camunda.io/camunda-platform/version-matrix/). 
- -#### Identity - -:::caution Potential breaking changes -By default this change isn't breaking change unless custom changes made outside Helm chart related to OIDC configuration. -::: - -Cross-components Keycloak-specific configurations has been replaced for a more generic OIDC configuration; Hence, components can use other OIDC-compliant OAuth 2.0 identity providers. - -Accordingly, some unused environment variables have been removed from Web Modeler because of the implementation of custom OIDC support. The naming has also been adjusted to use the newer scheme. - -For more details, check [Connect to an OpenID Connect provider](/self-managed/setup/guides/connect-to-an-oidc-provider.md). - -#### Keycloak - -The embedded Keycloak Helm chart has been upgraded from 16.1.7 to 17.3.6 (only the Keycloak Helm chart has been upgrade, the actual Keycloak version still on 22.0.5). - -#### Elasticsearch - -Elasticsearch image has been upgraded from 8.8.2 to 8.9.2. - -### v8.3.1 - -:::caution -The following steps are applied when upgrading from **any** previous version, including `8.3.0`. -::: - -To fix a critical issue, the following components had labels change: Operate, Optimize, Tasklist, Zeebe, and Zeebe Gateway. - -Therefore, before upgrading from any previous versions, delete the `Deployment/StatefulSet`. There will be a downtime between the resource deletion and the actual upgrade. - -```shell -kubectl -n camunda delete deployment camunda-operate -kubectl -n camunda delete deployment camunda-tasklist -kubectl -n camunda delete deployment camunda-optimize -kubectl -n camunda delete deployment camunda-zeebe-gateway -kubectl -n camunda delete statefulset camunda-zeebe -``` - -Then, follow the upgrade process as usual. - -#### Zeebe Gateway - -This change has no effect on the usual upgrade using Helm CLI. However, it could be relevant if you are using Helm post-rendering via other tools like Kustomize. - -The following resources have been renamed: - -- **ConfigMap:** From `camunda-zeebe-gateway-gateway` to `camunda-zeebe-gateway`. -- **ServiceAccount:** From `camunda-zeebe-gateway-gateway` to `camunda-zeebe-gateway`. - -### v8.3.0 (minor) - -:::caution -Updating Operate, Tasklist, and Optimize from 8.2.x to 8.3.0 will potentially take longer than expected, depending on the data to be migrated. -Additionally, we identified some bugs that could also prevent the migration from succeeding. These are being addressed and will be available in an upcoming 8.3.1 patch. We suggest not updating until the patch is released. -::: - -For full change log, view the Camunda Helm chart [v8.3.0 release notes](https://github.com/camunda/camunda-platform-helm/releases/tag/camunda-platform-8.3.0). - -:::caution Breaking changes - -- Elasticsearch upgraded from v7.x to v8.x. -- Keycloak upgraded from v19.x to v22.x. -- Zeebe runs as a non-root user by default. - -::: - -#### Elasticsearch - -Elasticsearch upgraded from v7.x to v8.x. Follow the Elasticsearch official [upgrade guide](https://www.elastic.co/guide/en/elasticsearch/reference/8.10/setup-upgrade.html) to ensure you are not using any deprecated values when upgrading. - -##### Elasticsearch - values file - -The syntax of the chart values file has been changed due to the upgrade. There are two cases based on if you use the default values or custom values. - -**Case One:** Default values.yaml - -If you are using our default `values.yaml`, no change is required. Follow the upgrade steps as usual with the updated default `values.yaml`. 
- -**Case Two:** Custom values.yaml - -If you have a custom `values.yaml`, change the image repository and tag: - -```yaml -image: - repository: bitnami/elasticsearch - tag: 8.6.2 -``` - -Setting the persistent volume size of the master nodes can't be done using the `volumeClaimTemplate` anymore. It must be done using the master values: - -```yaml -master: - masterOnly: false - heapSize: 1024m - persistence: - size: 64Gi -``` - -Setting a `retentionPolicy` for Elasticsearch values can't be done anymore. The `retentionPolicy` should be used in the respective components instead. For example, here is an Elasticsearch `retentionPolicy` for the Tasklist component: - -```yaml -retention: - enabled: false - minimumAge: 30d -``` - -In the global section, the host to show to release-name should be changed as well: - -```yaml -host: "{{ .Release.Name }}-elasticsearch" -``` - -##### Elasticsearch - Data retention - -The Elasticsearch 8 chart is using different PVC names. Therefore, it's required to migrate the old PVCs to the new names, which could be done in two ways: automatic (requires certain K8s version and CSI driver), or manual (works with any Kubernetes setup). - -:::caution - -In call cases, the following steps must be executed **before** the upgrade. - -::: - -**Option One:** CSI volume cloning - -This method will take advantage of the CSI volume cloning functionality from the CSI driver. - -Prerequisites: - -1. The Kubernetes cluster should be at least v1.20. -2. The CSI driver must be present on your cluster. - -Clones are provisioned like any other PVC with a reference to an existing PVC in the same namespace. - -Before applying this manifest, ensure to scale the Elasticsearch replicas to 0. Also, ensure the `dataSource.name` matches the PVC that you would like to clone. - -Here is an example YAML file for cloning the Elasticsearch PVC: - -First, stop Elasticsearch pods: - -```shell -kubectl scale statefulset elasticsearch-master --replicas=0 -``` - -Then, clone the PVC (this example is for one PVC, usually you have two PVCs): - -```yaml -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - labels: - app.kubernetes.io/component: master - app.kubernetes.io/instance: integration - app.kubernetes.io/name: elasticsearch - name: data-integration-elasticsearch-master-0 -spec: - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 64Gi - dataSource: - name: elasticsearch-master-elasticsearch-master-0 - kind: PersistentVolumeClaim -``` - -For reference, visit [Kubernetes - CSI Volume Cloning](https://kubernetes.io/docs/concepts/storage/volume-pvc-datasource/). - -**Option Two**: Update PV manually - -This approach works with any Kubernetes cluster. - -1. Get the name of PV for both Elasticsearch master PVs. -2. Change the reclaim policy of the Elasticsearch PVs to `Retain`. - -First, get the PV from PVC: - -```shell -ES_PV_NAME0="$(kubectl get pvc elasticsearch-master-elasticsearch-master-0 -o jsonpath='{.spec.volumeName}')" -``` - -Then, change the Reclaim Policy: - -```shell -kubectl patch pv "${ES_PV_NAME0}" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' -``` - -Finally, verify the Reclaim Policy has been changed: - -```shell -kubectl get pv "${ES_PV_NAME0}" | grep Retain || echo '[ERROR] Reclaim Policy is not Retain!' -``` - -Within both Elasticsearch master PVs, edit the `claimRef` to include the name of the new PVCs that will appear after the upgrade. 
For example: - -```yaml -claimRef: - apiVersion: v1 - kind: PersistentVolumeClaim - name: data-camunda-elasticsearch-master-0 - namespace: -``` - -After a successful upgrade, you can now delete the old PVCs that are in a `Lost` state. Then, proceed with the upgrade. - -#### Keycloak - -Keycloak upgraded from v19.x to v22.x, which is the latest version at the time of writing. Even though there is no breaking change found, the upgrade should be handled carefully because of the Keycloak major version upgrade. Ensure you back up the Keycloak database before the upgrade. - -:::note -The Keycloak PostgreSQL chart shows some warnings which are safe to ignore. That false positive issue has been reported, and it should be fixed in the next releases of the upstream PostgreSQL Helm chart. -::: - -``` -coalesce.go:289: warning: destination for keycloak.postgresql.networkPolicy.egressRules.customRules is a table. Ignoring non-table value ([]) -coalesce.go:289: warning: destination for keycloak.postgresql.networkPolicy.ingressRules.readReplicasAccessOnlyFrom.customRules is a table. Ignoring non-table value ([]) -coalesce.go:289: warning: destination for keycloak.postgresql.networkPolicy.ingressRules.primaryAccessOnlyFrom.customRules is a table. Ignoring non-table value ([]) -false -``` - -#### Zeebe - -Using a non-root user by default is a security principle introduced in this version. However, because there is persistent storage in Zeebe, earlier versions may run into problems with existing file permissions not matching up with the file permissions assigned to the running user. There are two ways to fix this: - -**Option One:** Use Zeebe user ID (recommended) - -Change `podSecurityContext.fsGroup` to point to the UID of the running user. The default user in Zeebe has the UID `1000`. That will modify the group permissions of all persistent volumes attached to that Pod. - -```yaml -zeebe: - podSecurityContext: - fsGroup: 1000 -``` - -If you already modify the current running user, then the `fsGroup` needs to be changed to match the UID. - -```yaml -zeebe: - containerSecurityContext: - runAsUser: 1008 - podSecurityContext: - fsGroup: 1008 -``` - -Some storage classes may not support the `fsGroup` option. In this case, a possibility is to run a debug Pod to chown the mounted volumes. - -**Option Two:** Use root user ID - -If the recommended solution does not help, you may change the running user back to root. - -```yaml -zeebe: - containerSecurityContext: - runAsUser: 0 -``` - -#### Web-Modeler - -The configuration format of external database has been changed in Web Modeler from `host`, `port`, `database` to `JDBC URL`. - -The old format: - -```yaml -webModeler: - restapi: - externalDatabase: - host: web-modeler-postgres-ext - port: 5432 - database: rest-api-db -``` - -The new format: - -```yaml -webModeler: - restapi: - externalDatabase: - url: "jdbc:postgresql://web-modeler-postgres-ext:5432/rest-api-db" -``` - -### v8.2.9 - -#### Optimize - -For Optimize 3.10.1, a new environment variable introduced redirection URL. However, the change is not compatible with Camunda Helm charts until it is fixed in 3.10.3 (and Helm chart 8.2.9). 
Therefore, those versions are coupled to certain Camunda Helm chart versions: - -| Optimize version | Camunda Helm chart version | -| --------------------------------- | -------------------------- | -| Optimize 3.10.1 & Optimize 3.10.2 | 8.2.0 - 8.2.8 | -| Optimize 3.10.3 | 8.2.9+ | - -No action is needed if you use Optimize 3.10.3 (shipped with this Helm chart version by default), but this Optimize version cannot be used out of the box with previous Helm chart versions. - -### v8.2.3 - -#### Zeebe Gateway - -:::caution Breaking change - -Zeebe Gateway authentication is now enabled by default. - -::: - -To authenticate: - -1. [Create a client credential](/guides/setup-client-connection-credentials.md). -2. [Assign permissions to the application](/self-managed/identity/user-guide/authorizations/managing-resource-authorizations.md). -3. Connect with: - -- [Desktop Modeler](/components/modeler/desktop-modeler/connect-to-camunda-8.md). -- [Zeebe client (zbctl)](/self-managed/zeebe-deployment/security/secure-client-communication.md#zbctl). - -### v8.2.0 (Minor) - -#### Connectors - -Camunda 8 Connectors component is one of our applications which performs the integration with an external system. - -Currently, in all cases, either you will use Connectors v8.2 or not, this step should be done. You need to create the Connectors secret object (more details about this in [camunda-platform-helm/656](https://github.com/camunda/camunda-platform-helm/issues/656)). - -First, generate the Connectors secret: - -```bash -helm template camunda camunda/camunda-platform --version 8.2 \ - --show-only charts/identity/templates/connectors-secret.yaml > - identity-connectors-secret.yaml -``` - -Then apply it: - -```bash -kubectl apply --namespace -f identity-connectors-secret.yaml -``` - -#### Keycloak - -Camunda v8.2 uses Keycloak v19 which depends on PostgreSQL v15. That is a major change for the dependencies. Currently there are two recommended options to upgrade from Camunda 8.1.x to 8.2.x: - -1. Use the previous version of PostgreSQL v14 in Camunda v8.2, this should be simple and it will work seamlessly. -2. Follow the official PostgreSQL upgrade guide: [Upgrading a PostgreSQL Cluster v15](https://www.postgresql.org/docs/15/upgrading.html). However, it requires some manual work and longer downtime to do the database schema upgrade. - -**Method 1: Use the previous version PostgreSQL v14** - -You can set the PostgreSQL image tag as follows: - -```yaml -identity: - keycloak: - postgresql: - image: - tag: 14.5.0 -``` - -Then follow the [typical upgrade steps](#upgrading-where-identity-enabled). - -**Method 2: Upgrade the database schema to work with PostgreSQL v15** - -The easiest way to upgrade major versions of postgresql is to start a port-forward, -and then run `pg_dump` or `pg_restore`. The postgresql client versions are fairly flexible -with different server versions, but for best results, we recommend using the newest -client version. - -1. In one terminal, start a `port-forward` against the postgresql service: - -```bash -kubectl port-forward svc/camunda-postgresql 5432 -``` - -Follow the rest of these steps in a different terminal. - -2. Get the 'postgres' users password from the postgresql service: - -```bash -kubectl exec -it statefulset/camunda-postgresql -- env | grep "POSTGRES_POSTGRES_PASSWORD=" -``` - -3. Scale identity down using the following command: - -```bash -kubectl scale --replicas=0 deployment camunda-identity -``` - -4. 
Perform the database dump: - -```bash -pg_dumpall -U postgres -h localhost -p 5432 | tee dump.psql -Password: -``` - -`pg_dumpall` may ask multiple times for the same password. The database will be dumped into `dump.psql`. - -5. Scale database down using the following command: - -```bash -kubectl scale --replicas=0 statefulset camunda-postgresql -``` - -6. Delete the PVC for the postgresql instance using the following command: - -```bash -kubectl delete pvc data-camunda-postgresql-0 -``` - -7. Update the postgresql version using the following command: - -```bash -kubectl set image statefulset/camunda-postgresql postgresql=docker.io/bitnami/postgresql:15.3.0 -``` - -8. Scale the services back up using the following command: - -```bash -kubectl scale --replicas=1 statefulset camunda-postgresql -``` - -9. Restore the database dump using the following command: - -```bash -psql -U postgres -h localhost -p 5432 -f dump.psql -``` - -10. Scale up identity using the following command: - -```bash -kubectl scale --replicas=1 deployment camunda-identity -``` - -Then follow the [typical upgrade steps](#upgrading-where-identity-enabled). - -### v8.0.13 - -If you installed Camunda 8 using Helm charts before `8.0.13`, you need to apply the following steps to handle the new Elasticsearch labels. - -As a prerequisite, make sure you have the Elasticsearch Helm repository added: - -```shell -helm repo add elastic https://helm.elastic.co -``` - -#### 1. Retain Elasticsearch Persistent Volume - -First get the name of Elasticsearch Persistent Volumes: - -```shell -ES_PV_NAME0=$(kubectl get pvc elasticsearch-master-elasticsearch-master-0 -o jsonpath="{.spec.volumeName}") - -ES_PV_NAME1=$(kubectl get pvc elasticsearch-master-elasticsearch-master-1 -o jsonpath="{.spec.volumeName}") -``` - -Make sure these are the correct Persistent Volumes: - -```shell -kubectl get persistentvolume $ES_PV_NAME0 $ES_PV_NAME1 -``` - -It should show something like the following (note the name of the claim, it's for Elasticsearch): - -``` -NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE -pvc-80bde37a-3c5b-40f4-87f3-8440e658be75 64Gi RWO Delete Bound camunda/elasticsearch-master-elasticsearch-master-0 standard 20d -pvc-3e9129bc-9415-46c3-a005-00ce3b9b3be9 64Gi RWO Delete Bound camunda/elasticsearch-master-elasticsearch-master-1 standard 20d -``` - -The final step here is to change Persistent Volumes reclaim policy: - -```shell -kubectl patch persistentvolume "${ES_PV_NAME0}" \ - -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' - -kubectl patch persistentvolume "${ES_PV_NAME1}" \ - -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' -``` - -#### 2. Update Elasticsearch PersistentVolumeClaim labels - -```shell -kubectl label persistentvolumeclaim elasticsearch-master-elasticsearch-master-0 \ - release=camunda chart=elasticsearch app=elasticsearch-master - -kubectl label persistentvolumeclaim elasticsearch-master-elasticsearch-master-1 \ - release=camunda chart=elasticsearch app=elasticsearch-master -``` - -#### 3. Delete Elasticsearch StatefulSet - -Note that there will be a **downtime** between this step and the next step. - -```shell -kubectl delete statefulset elasticsearch-master -``` - -#### 4. Apply Elasticsearch StatefulSet chart - -```shell -helm template camunda/camunda-platform camunda --version \ - --show-only charts/elasticsearch/templates/statefulset.yaml -``` - -The `CHART_VERSION` is the version you want to update to (`8.0.13` or later). 
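
One way to finish this step, sketched under the assumption that the release is named `camunda`, the namespace is `camunda`, and the target chart version is `8.0.13`, is to render the StatefulSet to a file and apply it:

```shell
# Render only the Elasticsearch StatefulSet for the target chart version, then apply it.
# Release name, namespace, and chart version are assumptions; adjust them to your environment.
helm template camunda camunda/camunda-platform --version 8.0.13 \
  --show-only charts/elasticsearch/templates/statefulset.yaml > elasticsearch-statefulset.yaml
kubectl apply --namespace camunda -f elasticsearch-statefulset.yaml
```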
diff --git a/docs/self-managed/setup/guides/using-existing-keycloak.md b/docs/self-managed/setup/guides/using-existing-keycloak.md index 2dd09c4b07d..416fb94e089 100644 --- a/docs/self-managed/setup/guides/using-existing-keycloak.md +++ b/docs/self-managed/setup/guides/using-existing-keycloak.md @@ -32,9 +32,8 @@ global: existingSecret: "stage-keycloak" existingSecretKey: "admin-password" -identity: - keycloak: - enabled: false +identityKeycloak: + enabled: false ``` Then, use the custom values file to [deploy Camunda 8](/self-managed/setup/install.md) as usual. diff --git a/docs/self-managed/setup/install.md b/docs/self-managed/setup/install.md index 1d805f71349..c6d89f56dc1 100644 --- a/docs/self-managed/setup/install.md +++ b/docs/self-managed/setup/install.md @@ -81,7 +81,7 @@ Before deploying Camunda using Helm, you need the following: You have to add the Camunda Helm chart repository to use the charts. Once this is done, Helm can fetch and install charts hosted at [https://helm.camunda.io](https://helm.camunda.io): -```bash +```shell helm repo add camunda https://helm.camunda.io helm repo update ``` @@ -92,7 +92,7 @@ Once this is completed, we will be ready to install the Helm chart hosted in the To install the available Camunda 8 components inside a Kubernetes cluster, you can simply run: -```bash +```shell helm install camunda camunda/camunda-platform ``` @@ -106,7 +106,7 @@ For air-gapped environments, refer to [installing in an air-gapped environment]( Review the progress of your deployment by checking if the Kubernetes pods are up and running with the following: -```bash +```shell kubectl get pods ``` @@ -148,7 +148,7 @@ When you use the Camunda 8 Helm chart, it automatically selects the latest versi To ensure you're installing the most current version of both the chart and its applications/dependencies, use the following command: -```bash +```shell # This will install the latest Camunda Helm chart with the latest applications/dependencies. helm install camunda camunda/camunda-platform \ --values https://helm.camunda.io/camunda-platform/values/values-latest.yaml @@ -156,7 +156,7 @@ helm install camunda camunda/camunda-platform \ If you want to install a previous version of the Camunda componenets, follow this command structure: -```bash +```shell # This will install Camunda Helm chart v8.1.x with the latest applications/dependencies of v8.1.x. helm install camunda camunda/camunda-platform --version 8.1 \ --values https://helm.camunda.io/camunda-platform/values/values-v8.1.yaml @@ -178,7 +178,7 @@ Enterprise components such as Console and Web Modeler are published in Camunda's To enable Kubernetes to pull the images from this registry, first [create an image pull secret](https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod) using the credentials you received from Camunda: -```bash +```shell kubectl create secret docker-registry registry-camunda-cloud \ --namespace= --docker-server=registry.camunda.cloud \ @@ -233,8 +233,8 @@ To set up Web Modeler, you need to provide the following required configuration - Web Modeler requires an SMTP server to send notification emails to users. - Configure the database connection - Web Modeler requires a PostgreSQL database as persistent data storage (other database systems are currently not supported). - - _Option 1_: Set `postgresql.enabled: true`. 
This will install a new PostgreSQL instance as part of the Helm release (using the [PostgreSQL Helm chart](https://github.com/bitnami/charts/tree/main/bitnami/postgresql) by Bitnami as a dependency). - - _Option 2_: Set `postgresql.enabled: false` and configure a [connection to an external database](#optional-configure-external-database). + - _Option 1_: Set `webModelerPostgresql.enabled: true`. This will install a new PostgreSQL instance as part of the Helm release (using the [PostgreSQL Helm chart](https://github.com/bitnami/charts/tree/main/bitnami/postgresql) by Bitnami as a dependency). + - _Option 2_: Set `webModelerPostgresql.enabled: false` and configure a [connection to an external database](#optional-configure-external-database). We recommend specifying these values in a YAML file that you pass to the `helm install` command. A minimum configuration file would look as follows: @@ -253,11 +253,11 @@ webModeler: smtpPassword: secret # Email address to be displayed as sender of emails from Web Modeler fromAddress: no-reply@example.com -postgresql: +webModelerPostgresql: enabled: true ``` -If you don't want to install a new PostgreSQL instance with Helm, but connect Web Modeler to an existing external database, set `postgresql.enabled: false` and provide the values under `webModeler.restapi.externalDatabase`: +If you don't want to install a new PostgreSQL instance with Helm, but connect Web Modeler to an existing external database, set `webModelerPostgresql.enabled: false` and provide the values under `webModeler.restapi.externalDatabase`: ```yaml webModeler: @@ -266,7 +266,7 @@ webModeler: url: jdbc:postgresql://postgres.example.com:5432/modeler-db user: modeler-user password: secret -postgresql: +webModelerPostgresql: # disables the PostgreSQL chart dependency enabled: false ``` @@ -301,13 +301,13 @@ Console Self-Managed requires the Identity component to authenticate. Camunda He Check that each pod is running and ready. If one or more of your pods are still pending, it means it cannot be scheduled onto a node. Usually, this happens because there are insufficient resources that prevent it. Use the `kubectl describe ...` command to check on messages from the scheduler: -```bash +```shell kubectl describe pods ``` If the output of the `describe` command was not helpful, tail the logs of these pods by running the following: -```bash +```shell kubectl logs -f ``` diff --git a/docs/self-managed/setup/upgrade.md b/docs/self-managed/setup/upgrade.md index f577d5b1223..33cdb58af39 100644 --- a/docs/self-managed/setup/upgrade.md +++ b/docs/self-managed/setup/upgrade.md @@ -46,21 +46,23 @@ PASSWORDS ERROR: You must provide your current passwords when upgrading the rele As mentioned, this output returns because secrets are randomly generated with the first Helm installation by default if not further specified. We use a library chart [provided by Bitnami](https://github.com/bitnami/charts/tree/master/bitnami/common) for this. The generated secrets persist on persistent volume claims (PVCs), which are not maintained by Helm. -If you remove the Helm chart release or do an upgrade, PVCs are not removed nor recreated. On an upgrade, secrets can be recreated by Helm, and could lead to the regeneration of the secret values. This would mean that newly-generated secrets would no longer match with the persisted secrets. To avoid such an issue, Bitnami blocks the upgrade path and prints the help message as shown above. 
+If you remove the Helm chart release or do an upgrade, PVCs are not removed nor recreated. On an upgrade, secrets can be recreated by Helm and could lead to the regeneration of the secret values. This would mean that newly generated secrets would no longer match with the persistent secrets. To avoid such an issue, Bitnami blocks the upgrade path and prints the help message as shown above. In the error message, Bitnami links to their [troubleshooting guide](https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#credential-errors-while-upgrading-chart-releases). However, to avoid confusion, we will step through the troubleshooting process in this guide as well. ### Secrets extraction -For a successful upgrade, you first need to extract all secrets which were previously generated. +For a successful upgrade, you first need to extract all secrets that were previously generated. :::note -You also need to extract all secrets which were generated for Keycloak, since Keycloak is a dependency of Identity. +You also need to extract all secrets that were generated for Keycloak, since Keycloak is a dependency of Identity. ::: To extract the secrets, use the following code snippet. Make sure to replace `camunda` with your actual Helm release name. ```shell +# Uncomment if Console is enabled. +# export CONSOLE_SECRET=$(kubectl get secret "camunda-console-identity-secret" -o jsonpath="{.data.console-secret}" | base64 --decode) export TASKLIST_SECRET=$(kubectl get secret "camunda-tasklist-identity-secret" -o jsonpath="{.data.tasklist-secret}" | base64 --decode) export OPTIMIZE_SECRET=$(kubectl get secret "camunda-optimize-identity-secret" -o jsonpath="{.data.optimize-secret}" | base64 --decode) export OPERATE_SECRET=$(kubectl get secret "camunda-operate-identity-secret" -o jsonpath="{.data.operate-secret}" | base64 --decode) @@ -74,7 +76,9 @@ export POSTGRESQL_SECRET=$(kubectl get secret "camunda-postgresql" -o jsonpath=" After exporting all secrets into environment variables, run the following upgrade command: ```shell -helm upgrade camunda camunda/camunda-platform\ +helm upgrade camunda camunda/camunda-platform \ + # Uncomment if Console is enabled. + # --set global.identity.auth.console.existingSecret=$CONSOLE_SECRET \ --set global.identity.auth.tasklist.existingSecret=$TASKLIST_SECRET \ --set global.identity.auth.optimize.existingSecret=$OPTIMIZE_SECRET \ --set global.identity.auth.operate.existingSecret=$OPERATE_SECRET \ @@ -93,162 +97,132 @@ For more details on the Keycloak upgrade path, you can also read the [Bitnami Ke ## Version update instructions -### v10.0.0 +As of the 8.4 release, the Camunda 8 **Helm chart** version is decoupled from the version of the application. The Helm chart release still follows the applications release cycle, but it has an independent version. (e.g., in the application release cycle 8.4, the chart version is 9.0.0). -Camunda Release Cycle: 8.5 - -:::caution Breaking changes - -- The Camunda Helm chart v10.0.0 has major changes in the values file structure. Follow the upgrade steps for each component before starting the chart upgrade. -- The Elasticsearch configuration has changed to support external Elasticsearch. - -::: - -#### Identity - -The Camunda Identity component was formerly a sub-chart of the Camunda Helm chart. Now, it is part of the parent Camunda Helm chart. +For more details about the applications version included in the Helm chart, review the [full version matrix](https://helm.camunda.io/camunda-platform/version-matrix/). 
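
To check which chart version (as opposed to application version) is currently deployed before upgrading, one option is to inspect the release itself. A minimal sketch, assuming the release is named `camunda` in the `camunda` namespace:

```shell
# Show the installed release together with its chart version (e.g., camunda-platform-9.x.x).
# The release name and namespace are assumptions; adjust them to your installation.
helm list --namespace camunda --filter '^camunda$'
```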
-There are no changes in the Identity keys, but since the `LabelSelector` `MatchLabels` of a Kubernetes resource are immutable, its deployment should be deleted as the label `app.kubernetes.io/name` has been changed from `identity` to `camunda-platform`. +You can also view all chart versions and application versions via Helm CLI as follows: ```shell -kubectl -n camunda delete -l app.kubernetes.io/name=identity deployment -``` - -#### Identity - Keycloak - -The Identity Keycloak values key has been changed from `identity.keycloak` to `identityKeycloak`. -To migrate, move the values under the new key in the values file. - -Old: - -```yaml -identity: - keycloak: +helm search repo camunda/camunda-platform --versions ``` -New: +## From Camunda 8.4 to 8.5 -```yaml -identityKeycloak: -``` - -#### Identity - PostgreSQL - -The Identity PostgreSQL values key has been changed from `identity.postgresql` to `identityPostgresql`. -To migrate, move the values under the new key in the values file. - -Old: +### Helm Chart 10.0.0 -```yaml -identity: - postgresql: -``` - -New: +:::caution Breaking changes +The Camunda Helm chart v10.0.0 has major changes in the values file structure. Follow the upgrade steps for each component before starting the chart upgrade. +::: -```yaml -identityPostgresql: -``` +#### Deprecation Notes + +The following keys in the values file have been changed in Camunda Helm chart v10.0.0. For compatibility, they are deprecated in the Camunda release cycle 8.5 and they will be removed in the Camunda 8.6 release (October 2024). + +We highly recommend updating the keys in your values file and don't wait till the 8.6 release. + +| Component | Old Key | New Key | +| ------------- | ---------------------------------- | ----------------------------------- | +| Identity | +| | `identity.keycloak` | `identityKeycloak` | +| | `identity.postgresql` | `identityPostgresql` | +| Web Modeler | +| | `postgresql` | `webModelerPostgresql` | +| Zeebe Gateway | +| | `global.zeebePort` | `zeebeGateway.service.grpcPort` | +| | `zeebe-gateway` | `zeebeGateway` | +| | `zeebeGateway.service.gatewayName` | `zeebeGateway.service.grpcName` | +| | `zeebeGateway.service.gatewayPort` | `zeebeGateway.service.grpcPort` | +| | `zeebeGateway.ingress` | `zeebeGateway.ingress.grpc` | +| | - | `zeebeGateway.ingress.rest` | +| Elasticsearch | +| | `global.elasticsearch.url` | Change from a string to a map | +| | `global.elasticsearch.protocol` | `global.elasticsearch.url.protocol` | +| | `global.elasticsearch.host` | `global.elasticsearch.url.host` | +| | `global.elasticsearch.port` | `global.elasticsearch.url.port` | -#### Web Modeler - PostgreSQL +#### Identity -The WebModler PostgreSQL values key has been changed from `postgresql` to `webModelerPostgresql`. -To migrate, move the values under the new key in the values file. +The Camunda Identity component was formerly a sub-chart of the Camunda Helm chart. Now, it is part of the parent Camunda Helm chart. -Old: +There are no changes in the Identity keys, but since the `LabelSelector` and `MatchLabels` of a Kubernetes resource are immutable, its deployment should be deleted as the label `app.kubernetes.io/name` has been changed from `identity` to `camunda-platform`. -```yaml -postgresql: -``` +:::caution Downtime -New: +- This step will lead to temporary downtime in Camunda 8 till the actual upgrade happens. +- This step doesn't affect any stored data and the deployment will be placed again in the upgrade. 
+ ::: -```yaml -webModelerPostgresql: +```shell +kubectl -n camunda delete -l app.kubernetes.io/name=identity deployment ``` -#### Zeebe Gateway - -The Zeebe Gateway values key has been changed from `zeebe-gateway` to `zeebeGateway`. -To migrate, move the values under the new key in the values file. +#### Identity - Keycloak -Old: +In Camunda Helm chart v10.0.0, the Identity Keycloak Helm chart has been upgraded from [v17.3.6](https://artifacthub.io/packages/helm/bitnami/keycloak/17.3.6) to [v19.4.1](https://artifacthub.io/packages/helm/bitnami/keycloak/19.4.1). Which has different defaults. -```yaml -zeebe-gateway: -``` +If, **and only if**, you make a full copy of the Camunda Helm chart values file instead of just overwriting the default value, you need to update your values files and use the new default values. -New: +Namely, the following volumes should be removed from the values since they are now part of the upstream chart: ```yaml -zeebeGateway: +# Note: Since v10.0.0 the Keycloak "identity.keycloak" has been renamed to "identityKeycloak". +# Check the keys deprecation notes above. +identity: + keycloak: + extraVolumes: + - name: config + emptyDir: {} + - name: quarkus + emptyDir: {} + - name: tmp + emptyDir: {} + volumeMounts: + - mountPath: /opt/bitnami/keycloak/conf/ + name: config + - mountPath: /opt/bitnami/keycloak/lib/quarkus + name: quarkus + - mountPath: /tmp + name: tmp ``` -Additionally, with the introduction of the REST API, there are now two ingresses. -Previously, there was only the old gRPC ingress at `zeebe-gateway.ingress`, which is now: +#### Elasticsearch -Old: +In Camunda Helm chart v10.0.0, the Elasticsearch Helm chart has been upgraded from [v19.19.4](https://artifacthub.io/packages/helm/bitnami/elasticsearch/19.19.4) to [v20.0.0](https://artifacthub.io/packages/helm/bitnami/elasticsearch/20.0.0). Which has different defaults. -```yaml -zeebe-gateway: - ingress: - enabled: false - # more properties -``` +If, **and only if**, you make a full copy of the Camunda Helm chart values file instead of just overwriting the default value, you need to update your values files and use the new default values. -New: +Namely, the following volumes should be removed from the values since they are now part of the upstream chart: ```yaml -zeebeGateway: - ingress: - # Define and enable gRPC ingress; keep in mind it does not support context paths - grpc: - enabled: true - # more properties - # Define and enable the REST ingress; this one does support the zeebeGateway.contextPath - # parameter out of the box - rest: - enabled: true - # more properties +elasticsearch: + extraVolumes: + - name: tmp + emptyDir: {} + - name: logs + emptyDir: {} + - name: config-dir + emptyDir: {} + extraVolumeMounts: + - mountPath: /tmp + name: tmp + - mountPath: /usr/share/elasticsearch/logs + name: logs + - mountPath: /usr/share/elasticsearch/config + name: config-dir ``` -:::note -The new `zeebeGateway.contextPath` is added to the deployment path, both for -management (for example, port `9600`) and REST (for example, `8080`), _even if the ingress it not enabled_. -::: - #### Enabling external Elasticsearch -It is possible to use external Elasticsearch. For more information on how to set up external Elasticsearch, refer to [using existing Elasticsearch](./guides/using-existing-elasticsearch.md). - -##### Elasticsearch - values file - -The `global.elasticsearch.disableExporter` field has been deprecated in favor of `global.elasticsearch.enabled`. 
When `global.elasticsearch.enabled` is set to false, all configurations for Elasticsearch in all components are removed. - -The `global.elasticsearch.url` field has changed. If you are using the default `values.yaml` and have not configured the URL, no change is required. However, if the URL value is used, then instead of specifying a single URL, you must now explicitly specify the protocol, host, and port separately like so: - -```yaml -global: - elasticsearch: - url: - protocol: https - host: example.elasticsearch.com - port: 443 -``` - -Because of this change to the `global.elasticsearch.url` value, the following values have been removed: - -1. `global.elasticsearch.protocol` -2. `global.elasticsearch.host` -3. `global.elasticsearch.port` +In v10.0.0, it is possible to use external Elasticsearch. For more information on how to set up external Elasticsearch, refer to [using existing Elasticsearch](./guides/using-existing-elasticsearch.md). -#### Enabling external AWS managed OpenSearch +#### Enabling external OpenSearch -It is possible to use external AWS managed OpenSearch. For more information on how to set up external AWS managed OpenSearch, refer to [using AWS managed OpenSearch](./guides/using-existing-opensearch.md). +In v10.0.0, it is possible to use external OpenSearch. For more information on how to set up external OpenSearch, refer to [using external OpenSearch](./guides/using-existing-opensearch.md). -### v9.3.0 +## From Camunda 8.3 to 8.4 -Camunda Release Cycle: 8.4 +### Helm Chart 9.3.0 #### Enabling Console @@ -270,9 +244,7 @@ To add the Console role: You should now be able to log into Console. -### v9.0.0 - -Camunda Release Cycle: 8.4 +### Helm Chart 9.0.0 For full change log, view the Camunda Helm chart [v9.0.0 release notes](https://github.com/camunda/camunda-platform-helm/releases/tag/camunda-platform-9.0.0). @@ -302,9 +274,9 @@ The embedded Keycloak Helm chart has been upgraded from 16.1.7 to 17.3.6 (only t Elasticsearch image has been upgraded from 8.8.2 to 8.9.2. -### v8.3.1 +## From Camunda 8.2 to 8.3 -Camunda Release Cycle: 8.3 +### Helm Chart 8.3.1 :::caution The following steps are applied when upgrading from **any** previous version, including `8.3.0`. @@ -333,9 +305,7 @@ The following resources have been renamed: - **ConfigMap:** From `camunda-zeebe-gateway-gateway` to `camunda-zeebe-gateway`. - **ServiceAccount:** From `camunda-zeebe-gateway-gateway` to `camunda-zeebe-gateway`. -### v8.3.0 (minor) - -Camunda Release Cycle: 8.3 +### Helm Chart 8.3.0 (minor) :::caution Updating Operate, Tasklist, and Optimize from 8.2.x to 8.3.0 will potentially take longer than expected, depending on the data to be migrated. @@ -565,9 +535,9 @@ webModeler: url: "jdbc:postgresql://web-modeler-postgres-ext:5432/rest-api-db" ``` -### v8.2.9 +## From Camunda 8.1 to 8.2 -Camunda Release Cycle: 8.2 +### Helm Chart 8.2.9 #### Optimize @@ -580,9 +550,7 @@ For Optimize 3.10.1, a new environment variable introduced redirection URL. Howe No action is needed if you use Optimize 3.10.3 (shipped with this Helm chart version by default), but this Optimize version cannot be used out of the box with previous Helm chart versions. -### v8.2.3 - -Camunda Release Cycle: 8.2 +### Helm Chart 8.2.3 #### Zeebe Gateway @@ -601,9 +569,7 @@ To authenticate: - [Desktop Modeler](/components/modeler/desktop-modeler/connect-to-camunda-8.md). - [Zeebe client (zbctl)](/self-managed/zeebe-deployment/security/secure-client-communication.md#zbctl). 
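
As an illustration of the last step with `zbctl`, here is a minimal sketch; the gateway address, token endpoint, and client credentials are placeholders for the values created in the previous steps:

```shell
# Request the cluster topology through the authenticated Zeebe Gateway.
# Address, client ID, client secret, and authorization server URL are example values only.
zbctl status \
  --address zeebe.example.com:26500 \
  --clientId my-zeebe-client \
  --clientSecret "$ZEEBE_CLIENT_SECRET" \
  --authzUrl https://keycloak.example.com/auth/realms/camunda-platform/protocol/openid-connect/token
```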
-### v8.2.0 (Minor) - -Camunda Release Cycle: 8.2 +### Helm Chart 8.2.0 (Minor) #### Connectors @@ -613,7 +579,7 @@ Currently, in all cases, either you will use Connectors v8.2 or not, this step s First, generate the Connectors secret: -```bash +```shell helm template camunda camunda/camunda-platform --version 8.2 \ --show-only charts/identity/templates/connectors-secret.yaml > identity-connectors-secret.yaml @@ -621,7 +587,7 @@ helm template camunda camunda/camunda-platform --version 8.2 \ Then apply it: -```bash +```shell kubectl apply --namespace -f identity-connectors-secret.yaml ``` @@ -655,7 +621,7 @@ client version. 1. In one terminal, start a `port-forward` against the postgresql service: -```bash +```shell kubectl port-forward svc/camunda-postgresql 5432 ``` @@ -663,19 +629,19 @@ Follow the rest of these steps in a different terminal. 2. Get the 'postgres' users password from the postgresql service: -```bash +```shell kubectl exec -it statefulset/camunda-postgresql -- env | grep "POSTGRES_POSTGRES_PASSWORD=" ``` 3. Scale identity down using the following command: -```bash +```shell kubectl scale --replicas=0 deployment camunda-identity ``` 4. Perform the database dump: -```bash +```shell pg_dumpall -U postgres -h localhost -p 5432 | tee dump.psql Password: ``` @@ -684,45 +650,49 @@ Password: