docs(self-managed): Upgrade instructions for Helm chart 10.0.0 - Camunda 8.5 #3530

Merged 2 commits on Apr 8, 2024
@@ -34,10 +34,10 @@ global:

## Zeebe Ingress (gRPC)

Zeebe requires an Ingress controller that supports `gRPC` which is built on top of `HTTP/2` transport layer. Therefore, to expose Zeebe-Gateway externally, you need the following:
Zeebe requires an Ingress controller that supports `gRPC`, which is built on top of the `HTTP/2` transport layer. Therefore, to expose the Zeebe Gateway externally, you need the following:

1. An Ingress controller that supports `gRPC` ([ingress-nginx controller](https://github.com/kubernetes/ingress-nginx) supports it out of the box).
2. TLS (HTTPS) via [Application-Layer Protocol Negotiation (ALPN)](https://www.rfc-editor.org/rfc/rfc7301.html) enabled in the Zeebe-Gateway Ingress object.
2. TLS (HTTPS) via [Application-Layer Protocol Negotiation (ALPN)](https://www.rfc-editor.org/rfc/rfc7301.html) enabled in the Zeebe Gateway Ingress object.

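As an illustration only, a gRPC-capable Ingress for the Zeebe Gateway with ingress-nginx might look roughly like the following sketch. The hostname, secret name, and Service name are placeholders, not values from this guide:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: zeebe-gateway # placeholder name
  annotations:
    # ingress-nginx: speak gRPC (HTTP/2) to the backend service
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - zeebe.example.com # placeholder host
      secretName: zeebe-tls # TLS secret; HTTPS with ALPN is served from this certificate
  rules:
    - host: zeebe.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: camunda-zeebe-gateway # placeholder Service name
                port:
                  number: 26500 # default Zeebe Gateway gRPC port
```
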
However, according to the official Kubernetes documentation about [Ingress TLS](https://kubernetes.io/docs/concepts/services-networking/ingress/#tls):

@@ -9,7 +9,7 @@ The following sections explain which adjustments must be made to migrate from Ca
## Helm chart

:::caution Breaking changes
The Camunda Helm chart v10.0.0 that comes with Camunda 8.5 has major changes in the values file structure. Update the values keys before starting the chart upgrade.
The Camunda Helm chart v10.0.0 has major changes in the values file structure. Follow the upgrade steps for each component before starting the chart upgrade.
:::

Carefully follow the [upgrade instructions](/self-managed/setup/upgrade.md#v1000) to upgrade from Camunda Helm chart v9.x.x to Camunda Helm chart v10.x.x.
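
For orientation, a minimal sketch of the new v10.x values structure for two of the renamed keys that appear in the diffs below (illustrative only, not a complete values file):

```yaml
identityKeycloak: # was identity.keycloak in chart v9.x
  postgresql:
    enabled: false
zeebeGateway: # was zeebe-gateway in chart v9.x
  ingress:
    enabled: false
```
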
36 changes: 18 additions & 18 deletions docs/self-managed/setup/deploy/amazon/amazon-eks/dual-region.md
@@ -55,7 +55,7 @@ Additionally, it is recommended to manifest those changes for future interaction

1. Git clone or fork the repository [c8-multi-region](https://github.com/camunda/c8-multi-region):

```bash
```shell
git clone https://github.com/camunda/c8-multi-region.git
```

@@ -75,13 +75,13 @@ In addition to namespaces for Camunda installations, create the namespaces for f

4. Execute the script via the following command:

```bash
```shell
. ./export_environment_prerequisites.sh
```

The leading dot sources the script, so the variables are exported into your current shell rather than into a spawned subshell.

```bash reference
```shell reference
https://github.com/camunda/c8-multi-region/blob/main/aws/dual-region/scripts/export_environment_prerequisites.sh
```

@@ -201,7 +201,7 @@ To ease working with two clusters, create or update your local `kubeconfig` to c

Update or create your kubeconfig via the [AWS CLI](https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html):

```bash
```shell
# the alias allows for easier context switching in kubectl
aws eks --region $REGION_0 update-kubeconfig --name $CLUSTER_0 --alias $CLUSTER_0
aws eks --region $REGION_1 update-kubeconfig --name $CLUSTER_1 --alias $CLUSTER_1
@@ -221,14 +221,14 @@ You are configuring the CoreDNS from the cluster in **Region 0** to resolve cert

1. Expose `kube-dns`, the in-cluster DNS resolver via an internal load-balancer in each cluster:

```bash
```shell
kubectl --context $CLUSTER_0 apply -f https://raw.githubusercontent.com/camunda/c8-multi-region/main/aws/dual-region/kubernetes/internal-dns-lb.yml
kubectl --context $CLUSTER_1 apply -f https://raw.githubusercontent.com/camunda/c8-multi-region/main/aws/dual-region/kubernetes/internal-dns-lb.yml
```

2. Execute the script [generate_core_dns_entry.sh](https://github.com/camunda/c8-multi-region/blob/main/aws/dual-region/scripts/generate_core_dns_entry.sh) in the folder `aws/dual-region/scripts/` of the repository to help you generate the CoreDNS config. Make sure that you have previously exported the [environment prerequisites](#environment-prerequisites), since the script builds on top of them.

```bash
```shell
./generate_core_dns_entry.sh
```

@@ -243,7 +243,7 @@ kubectl --context $CLUSTER_1 apply -f https://raw.githubusercontent.com/camunda/
For illustration purposes only. These values will not work in your environment.
:::

```bash
```shell
./generate_core_dns_entry.sh
Please copy the following between
### Cluster 0 - Start ### and ### Cluster 0 - End ###
@@ -354,7 +354,7 @@ data:

5. Check that CoreDNS has reloaded for the changes to take effect before continuing. Make sure it contains `Reloading complete`:

```bash
```shell
kubectl --context $CLUSTER_0 logs -f deployment/coredns -n kube-system
kubectl --context $CLUSTER_1 logs -f deployment/coredns -n kube-system
```
@@ -365,7 +365,7 @@ The script [test_dns_chaining.sh](https://github.com/camunda/c8-multi-region/blo

1. Execute the [test_dns_chaining.sh](https://github.com/camunda/c8-multi-region/blob/main/aws/dual-region/scripts/test_dns_chaining.sh) script. Make sure you have previously exported the [environment prerequisites](#environment-prerequisites), as the script builds on top of them.

```bash
```shell
./test_dns_chaining.sh
```

@@ -381,20 +381,20 @@ You can pull the data from Terraform since you exposed those via `output.tf`.

1. From the Terraform code location `aws/dual-region/terraform`, execute the following to export the access keys as environment variables. This allows easier creation of the Kubernetes secret via the command line:

```bash
```shell
export AWS_ACCESS_KEY_ES=$(terraform output -raw s3_aws_access_key)
export AWS_SECRET_ACCESS_KEY_ES=$(terraform output -raw s3_aws_secret_access_key)
```

2. From the folder `aws/dual-region/scripts` of the repository, execute the script [create_elasticsearch_secrets.sh](https://github.com/camunda/c8-multi-region/blob/main/aws/dual-region/scripts/create_elasticsearch_secrets.sh). This will use the exported environment variables from **Step 1** to create the required secret within the Camunda namespaces. Those have previously been defined and exported via the [environment prerequisites](#environment-prerequisites).

```bash
```shell
./create_elasticsearch_secrets.sh
```

3. Unset environment variables to reduce the risk of potential exposure. The script is spawned in a subshell and can't modify the environment variables without extra workarounds:

```bash
```shell
unset AWS_ACCESS_KEY_ES
unset AWS_SECRET_ACCESS_KEY_ES
```
@@ -462,7 +462,7 @@ The base `camunda-values.yml` in `aws/dual-region/kubernetes` requires adjustmen

1. The bash script [generate_zeebe_helm_values.sh](https://github.com/camunda/c8-multi-region/blob/main/aws/dual-region/scripts/generate_zeebe_helm_values.sh) in the repository folder `aws/dual-region/scripts/` helps generate those values. You only have to copy and replace them within the base `camunda-values.yml`. It will use the exported environment variables of the [environment prerequisites](#environment-prerequisites) for namespaces and regions.

```bash
```shell
./generate_zeebe_helm_values.sh

# It will ask you to provide the following values
@@ -478,7 +478,7 @@ The base `camunda-values.yml` in `aws/dual-region/kubernetes` requires adjustmen
For illustration purposes only. These values will not work in your environment.
:::

```bash
```shell
./generate_zeebe_helm_values.sh
Enter Zeebe cluster size (total number of Zeebe brokers in both Kubernetes clusters): 8

@@ -507,7 +507,7 @@ Use the following to set the environment variable ZEEBE_BROKER_EXPORTERS_ELASTIC

From the terminal context of `aws/dual-region/kubernetes`, execute the following:

```bash
```shell
helm install $HELM_RELEASE_NAME camunda/camunda-platform \
--version $HELM_CHART_VERSION \
--kube-context $CLUSTER_0 \
@@ -527,13 +527,13 @@

1. Open a terminal and port-forward the Zeebe Gateway via `kubectl` from one of your clusters. Zeebe stretches across both clusters and is `active-active`, meaning it doesn't matter which Zeebe Gateway you use to interact with your Zeebe cluster.

```bash
```shell
kubectl --context "$CLUSTER_0" -n $CAMUNDA_NAMESPACE_0 port-forward services/$HELM_RELEASE_NAME-zeebe-gateway 26500:26500
```

2. Open another terminal and use [zbctl](../../../../../apis-tools/cli-client/cli-get-started.md) to print the Zeebe cluster status:

```bash
```shell
zbctl status --insecure --address localhost:26500
```

@@ -543,7 +543,7 @@ zbctl status --insecure --address localhost:26500
<summary>Example output</summary>

```bash
```shell
Cluster size: 8
Partitions count: 8
Replication factor: 4
32 changes: 16 additions & 16 deletions docs/self-managed/setup/deploy/amazon/amazon-eks/eks-helm.md
@@ -183,7 +183,7 @@ The following makes use of the [combined ingress setup](/self-managed/setup/guid

:::warning

Publicly exposing the Zeebe Gateway without authorization enabled can lead to severe security risks. Consider disabling the ingress for the Zeebe Gateway by setting the `zeebe-gateway.ingress.enabled` to `false`.
Publicly exposing the Zeebe Gateway without authorization enabled can lead to severe security risks. Consider disabling the ingress for the Zeebe Gateway by setting `zeebeGateway.ingress.enabled` to `false`.

By default, authorization is enabled to ensure secure access to Zeebe. Typically, only internal components need direct access, making it unnecessary to expose Zeebe externally.

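If you follow that recommendation, a minimal values sketch of the lockdown looks like this (key name as referenced in the warning above):

```yaml
zeebeGateway:
  ingress:
    enabled: false # keep the gRPC endpoint internal; in-cluster clients still reach the gateway Service
```
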
@@ -196,11 +196,11 @@ helm upgrade --install \
--version $CAMUNDA_HELM_CHART_VERSION \
--namespace camunda \
--create-namespace \
--set identity.keycloak.postgresql.enabled=false \
--set identity.keycloak.externalDatabase.host=$DB_HOST \
--set identity.keycloak.externalDatabase.user=$PG_USERNAME \
--set identity.keycloak.externalDatabase.password=$PG_PASSWORD \
--set identity.keycloak.externalDatabase.database=$DEFAULT_DB_NAME \
--set identityKeycloak.postgresql.enabled=false \
--set identityKeycloak.externalDatabase.host=$DB_HOST \
--set identityKeycloak.externalDatabase.user=$PG_USERNAME \
--set identityKeycloak.externalDatabase.password=$PG_PASSWORD \
--set identityKeycloak.externalDatabase.database=$DEFAULT_DB_NAME \
--set global.ingress.enabled=true \
--set global.ingress.host=$DOMAIN_NAME \
--set global.ingress.tls.enabled=true \
@@ -215,11 +215,11 @@ helm upgrade --install \
--set operate.contextPath="/operate" \
--set tasklist.contextPath="/tasklist" \
--set optimize.contextPath="/optimize" \
--set zeebe-gateway.ingress.enabled=true \
--set zeebe-gateway.ingress.host="zeebe.$DOMAIN_NAME" \
--set zeebe-gateway.ingress.tls.enabled=true \
--set zeebe-gateway.ingress.tls.secretName=zeebe-c8-tls \
--set-string 'zeebe-gateway.ingress.annotations.kubernetes\.io\/tls-acme=true'
--set zeebeGateway.ingress.enabled=true \
--set zeebeGateway.ingress.host="zeebe.$DOMAIN_NAME" \
--set zeebeGateway.ingress.tls.enabled=true \
--set zeebeGateway.ingress.tls.secretName=zeebe-c8-tls \
--set-string 'zeebeGateway.ingress.annotations.kubernetes\.io\/tls-acme=true'
```

The annotation `kubernetes.io/tls-acme=true` is [interpreted by cert-manager](https://cert-manager.io/docs/usage/ingress/) and automatically results in the creation of the required certificate request, easing the setup.
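
Expressed as a values file instead of `--set` flags, the same Zeebe Gateway Ingress configuration looks roughly like the following sketch (the host assumes `DOMAIN_NAME` is `camunda.example.com`):

```yaml
zeebeGateway:
  ingress:
    enabled: true
    host: "zeebe.camunda.example.com" # zeebe.$DOMAIN_NAME
    tls:
      enabled: true
      secretName: zeebe-c8-tls
    annotations:
      kubernetes.io/tls-acme: "true" # picked up by cert-manager to request the certificate
```
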
@@ -234,11 +234,11 @@ helm upgrade --install \
--version $CAMUNDA_HELM_CHART_VERSION \
--namespace camunda \
--create-namespace \
--set identity.keycloak.postgresql.enabled=false \
--set identity.keycloak.externalDatabase.host=$DB_HOST \
--set identity.keycloak.externalDatabase.user=$PG_USERNAME \
--set identity.keycloak.externalDatabase.password=$PG_PASSWORD \
--set identity.keycloak.externalDatabase.database=$DEFAULT_DB_NAME
--set identityKeycloak.postgresql.enabled=false \
--set identityKeycloak.externalDatabase.host=$DB_HOST \
--set identityKeycloak.externalDatabase.user=$PG_USERNAME \
--set identityKeycloak.externalDatabase.password=$PG_PASSWORD \
--set identityKeycloak.externalDatabase.database=$DEFAULT_DB_NAME
```

</TabItem>
33 changes: 16 additions & 17 deletions docs/self-managed/setup/deploy/amazon/amazon-eks/irsa.md
@@ -159,23 +159,22 @@ Don't forget to set the `serviceAccountName` of the deployment/statefulset to the
For a Helm-based deployment, you can directly configure these settings using Helm values. Below is an example of how you can incorporate these settings into your Helm chart deployment:

```yaml
identity:
keycloak:
postgresql:
enabled: false
image: docker.io/camunda/keycloak:23 # use a supported and updated version listed at https://hub.docker.com/r/camunda/keycloak/tags
extraEnvVars:
- name: KEYCLOAK_EXTRA_ARGS
value: "--db-driver=software.amazon.jdbc.Driver --transaction-xa-enabled=false --log-level=INFO,software.amazon.jdbc:INFO"
- name: KEYCLOAK_JDBC_PARAMS
value: "wrapperPlugins=iam"
- name: KEYCLOAK_JDBC_DRIVER
value: "aws-wrapper:postgresql"
externalDatabase:
host: "aurora.rds.your.domain"
port: 5432
user: keycloak
database: keycloak
identityKeycloak:
postgresql:
enabled: false
image: docker.io/camunda/keycloak:23 # use a supported and updated version listed at https://hub.docker.com/r/camunda/keycloak/tags
extraEnvVars:
- name: KEYCLOAK_EXTRA_ARGS
value: "--db-driver=software.amazon.jdbc.Driver --transaction-xa-enabled=false --log-level=INFO,software.amazon.jdbc:INFO"
- name: KEYCLOAK_JDBC_PARAMS
value: "wrapperPlugins=iam"
- name: KEYCLOAK_JDBC_DRIVER
value: "aws-wrapper:postgresql"
externalDatabase:
host: "aurora.rds.your.domain"
port: 5432
user: keycloak
database: keycloak
```

:::note