From 09c350739c70a623dde1405ce178b8eadae2104c Mon Sep 17 00:00:00 2001 From: HamzaMasood1 Date: Thu, 28 Sep 2023 10:34:35 +0100 Subject: [PATCH 01/33] adding es8 changes --- .../helm-kubernetes/upgrade.md | 45 +++++++++++++++++++ 1 file changed, 45 insertions(+) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index 99e5380f9a..1248f11843 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -95,6 +95,7 @@ For more details on the Keycloak upgrade path, you can also read the [Bitnami Ke ### v8.3 +#### Zeebe :::caution Breaking change Zeebe now runs as a non-root user by default. @@ -131,6 +132,50 @@ zeebe: runAsUser: 0 ``` +#### Elasticsearch 8 + +Firstly, make sure to follow the official [upgrade guide](https://www.elastic.co/guide/en/elasticsearch/reference/8.10/setup-upgrade.html) for Elasticsearch. This is also a good time to make sure you are not using any deprecated values when upgrading. + +##### Default values.yaml + +If you are using our default values.yaml, then no change is required from your side. You can follow the upgrade steps as normal with the updated default values.yaml. + +##### Custom values.yaml + +If you have a custom values.yaml, please take note of the following: + +Change the image repository and tag: + +```yaml +image: + repository: bitnami/elasticsearch + tag: 8.10.2 +``` + +Setting the persistent volume size of the master nodes can’t be done using the volumeClaimTemplate anymore. It must be done using the master values: + +```yaml +master: + masterOnly: false + heapSize: 1024m + persistence: + size: 64Gi +``` + +Setting a retentionPolicy for elasticsearch values can't be done anymore. You must set the retentionPolicy in the respective components instead. For example, here is an elasticsearch retentionPolicy for the Tasklist component: + +```yaml +retention: + enabled: false + minimumAge: 30d + +``` +In the global section, you can modify the host to show to release-name as well: + +```yaml +host: "{{ .Release.Name }}-elasticsearch" +``` + ### v8.2.9 #### Optimize From cbdd5a030064dca28c82542d2e39def84b41c0b2 Mon Sep 17 00:00:00 2001 From: HamzaMasood1 Date: Thu, 28 Sep 2023 10:54:45 +0100 Subject: [PATCH 02/33] running prettier --- .../helm-kubernetes/upgrade.md | 27 ++++++++++--------- 1 file changed, 14 insertions(+), 13 deletions(-) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index 1248f11843..217e28e2e9 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -96,6 +96,7 @@ For more details on the Keycloak upgrade path, you can also read the [Bitnami Ke ### v8.3 #### Zeebe + :::caution Breaking change Zeebe now runs as a non-root user by default. @@ -134,11 +135,11 @@ zeebe: #### Elasticsearch 8 -Firstly, make sure to follow the official [upgrade guide](https://www.elastic.co/guide/en/elasticsearch/reference/8.10/setup-upgrade.html) for Elasticsearch. This is also a good time to make sure you are not using any deprecated values when upgrading. +Firstly, make sure to follow the official [upgrade guide](https://www.elastic.co/guide/en/elasticsearch/reference/8.10/setup-upgrade.html) for Elasticsearch. 
This is also a good time to make sure you are not using any deprecated values when upgrading. ##### Default values.yaml -If you are using our default values.yaml, then no change is required from your side. You can follow the upgrade steps as normal with the updated default values.yaml. +If you are using our default values.yaml, then no change is required from your side. You can follow the upgrade steps as normal with the updated default values.yaml. ##### Custom values.yaml @@ -148,33 +149,33 @@ Change the image repository and tag: ```yaml image: - repository: bitnami/elasticsearch - tag: 8.10.2 + repository: bitnami/elasticsearch + tag: 8.10.2 ``` Setting the persistent volume size of the master nodes can’t be done using the volumeClaimTemplate anymore. It must be done using the master values: ```yaml master: - masterOnly: false - heapSize: 1024m - persistence: - size: 64Gi -``` + masterOnly: false + heapSize: 1024m + persistence: + size: 64Gi +``` Setting a retentionPolicy for elasticsearch values can't be done anymore. You must set the retentionPolicy in the respective components instead. For example, here is an elasticsearch retentionPolicy for the Tasklist component: ```yaml retention: - enabled: false - minimumAge: 30d + enabled: false + minimumAge: 30d +``` -``` In the global section, you can modify the host to show to release-name as well: ```yaml host: "{{ .Release.Name }}-elasticsearch" -``` +``` ### v8.2.9 From 697ad722435bc71de99b5127033511e75a546298 Mon Sep 17 00:00:00 2001 From: Christina Ausley Date: Mon, 9 Oct 2023 07:17:07 -0400 Subject: [PATCH 03/33] style(formatting): technical review --- .../platform-deployment/helm-kubernetes/upgrade.md | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index 217e28e2e9..bcce4caa92 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -135,17 +135,15 @@ zeebe: #### Elasticsearch 8 -Firstly, make sure to follow the official [upgrade guide](https://www.elastic.co/guide/en/elasticsearch/reference/8.10/setup-upgrade.html) for Elasticsearch. This is also a good time to make sure you are not using any deprecated values when upgrading. +Follow the official [upgrade guide](https://www.elastic.co/guide/en/elasticsearch/reference/8.10/setup-upgrade.html) for Elasticsearch and ensure you are not using any deprecated values when upgrading. ##### Default values.yaml -If you are using our default values.yaml, then no change is required from your side. You can follow the upgrade steps as normal with the updated default values.yaml. +If you are using our default `values.yaml`, no change is required. Follow the upgrade steps as usual with the updated default `values.yaml`. ##### Custom values.yaml -If you have a custom values.yaml, please take note of the following: - -Change the image repository and tag: +If you have a custom `values.yaml`, change the image repository and tag: ```yaml image: @@ -153,7 +151,7 @@ image: tag: 8.10.2 ``` -Setting the persistent volume size of the master nodes can’t be done using the volumeClaimTemplate anymore. It must be done using the master values: +Setting the persistent volume size of the master nodes can’t be done using the `volumeClaimTemplate` anymore. 
It must be done using the master values: ```yaml master: @@ -163,7 +161,7 @@ master: size: 64Gi ``` -Setting a retentionPolicy for elasticsearch values can't be done anymore. You must set the retentionPolicy in the respective components instead. For example, here is an elasticsearch retentionPolicy for the Tasklist component: +Setting a `retentionPolicy` for Elasticsearch values can't be done anymore. You must set the `retentionPolicy` in the respective components instead. For example, here is an Elasticsearch `retentionPolicy` for the Tasklist component: ```yaml retention: From ae3f62374c573b6d99e194b687a0c41c06c77f26 Mon Sep 17 00:00:00 2001 From: HamzaMasood1 Date: Mon, 9 Oct 2023 12:35:58 +0100 Subject: [PATCH 04/33] es8 data retention guide --- .../helm-kubernetes/upgrade.md | 67 ++++++++++++++++++- 1 file changed, 64 insertions(+), 3 deletions(-) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index bcce4caa92..13c78e2b8d 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -133,15 +133,15 @@ zeebe: runAsUser: 0 ``` -#### Elasticsearch 8 +#### Elasticsearch 8 Upgrade Guide Follow the official [upgrade guide](https://www.elastic.co/guide/en/elasticsearch/reference/8.10/setup-upgrade.html) for Elasticsearch and ensure you are not using any deprecated values when upgrading. -##### Default values.yaml +##### 1. Default values.yaml If you are using our default `values.yaml`, no change is required. Follow the upgrade steps as usual with the updated default `values.yaml`. -##### Custom values.yaml +##### 2. Custom values.yaml If you have a custom `values.yaml`, change the image repository and tag: @@ -175,6 +175,67 @@ In the global section, you can modify the host to show to release-name as well: host: "{{ .Release.Name }}-elasticsearch" ``` +#### Elasticsearch 8 Data Retention Strategy + +You may have noticed that new volumes have been created for Elasticsearch after upgrading to ES8. Your previous data still exists but is not currently being utilized. The following are various approaches you can use in order to utilize your previous data once again: + +##### First Option: CSI Volume Cloning +This method will take advantage of the CSI Volume Cloning functionality from the CSI driver. + +Prerequisites: +1. Your kubernetes version needs to be greater than 1.20 +2. The CSI driver must be present on your cluster + +Clones are provisioned like any other PVC with the exception of adding a dataSource that references an existing PVC in the same namespace. + +Here is an example yaml file for cloning the elasticsearch PVC: + +```yaml +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + labels: + app.kubernetes.io/component: master + app.kubernetes.io/instance: integration + app.kubernetes.io/name: elasticsearch + name: data-integration-elasticsearch-master-0 +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 64Gi + dataSource: + name: elasticsearch-master-elasticsearch-master-0 + kind: PersistentVolumeClaim +``` + +Before applying this manifest, please make sure to scale the elasticsearch replicas to 0. Also, +make sure that the `dataSource.name` matches the pvc that you would like to clone. 
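As a rough sketch, assuming the manifest above is saved as `elasticsearch-pvc-clone.yaml` (an illustrative file name) and the Elasticsearch StatefulSet uses the default `elasticsearch-master` name, the scale-down and clone could look like this:

```shell
# Stop Elasticsearch so the source volume is not written to while cloning
kubectl scale statefulset elasticsearch-master --replicas=0

# Provision the clone from the existing PVC
kubectl apply -f elasticsearch-pvc-clone.yaml
```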
+ +Reference: https://kubernetes.io/docs/concepts/storage/volume-pvc-datasource/ + +##### Second Option: Manual Approach + +With this approach, the following steps must be followed **before** the installation: + +1. Take note of the PV name and ID for both elasticsearch master PVs +2. Change the reclaim policy of the Elasticsearch PVs to `Retain`. +You can run the following command to do so: +``` +kubectl patch pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' +``` +3. Within both Elasticsearch master PVs, edit the `claimRef` to include the name of the new PVCs that will appear after the upgrade. For example: +``` +claimRef: + apiVersion: v1 + kind: PersistentVolumeClaim + name: data--elasticsearch-master-0 + namespace: +``` +4. With the above steps completed, you can upgrade as normal. The newly generated PVCs should bind with the existing PVs +5. After a successful upgrade, you can now delete the old PVCs that are in a `Lost` state. + ### v8.2.9 #### Optimize From e61368628e9e976dc8a23c0d54098e710a0b673e Mon Sep 17 00:00:00 2001 From: Hamza Masood <47217263+HamzaMasood1@users.noreply.github.com> Date: Mon, 9 Oct 2023 12:41:59 +0100 Subject: [PATCH 05/33] Update docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> --- docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index 13c78e2b8d..69a74c3dc9 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -180,6 +180,7 @@ host: "{{ .Release.Name }}-elasticsearch" You may have noticed that new volumes have been created for Elasticsearch after upgrading to ES8. Your previous data still exists but is not currently being utilized. The following are various approaches you can use in order to utilize your previous data once again: ##### First Option: CSI Volume Cloning + This method will take advantage of the CSI Volume Cloning functionality from the CSI driver. Prerequisites: From 469ae3acb237c51345a7cdbbdb9fbaeb9d7a80ce Mon Sep 17 00:00:00 2001 From: Hamza Masood <47217263+HamzaMasood1@users.noreply.github.com> Date: Mon, 9 Oct 2023 12:42:22 +0100 Subject: [PATCH 06/33] Update docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> --- docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index 69a74c3dc9..9ce22065c6 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -184,6 +184,7 @@ You may have noticed that new volumes have been created for Elasticsearch after This method will take advantage of the CSI Volume Cloning functionality from the CSI driver. Prerequisites: + 1. Your kubernetes version needs to be greater than 1.20 2. 
The CSI driver must be present on your cluster From ee03a13ccf2c19f0635df14700c47f1084f518d8 Mon Sep 17 00:00:00 2001 From: Hamza Masood <47217263+HamzaMasood1@users.noreply.github.com> Date: Mon, 9 Oct 2023 12:43:59 +0100 Subject: [PATCH 07/33] Update docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> --- .../platform-deployment/helm-kubernetes/upgrade.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index 9ce22065c6..3135e8d3c5 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -223,7 +223,8 @@ With this approach, the following steps must be followed **before** the installa 1. Take note of the PV name and ID for both elasticsearch master PVs 2. Change the reclaim policy of the Elasticsearch PVs to `Retain`. -You can run the following command to do so: + You can run the following command to do so: + ``` kubectl patch pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' ``` From 56f8f39367f2e9d83d102dcce982f8fad0af14b6 Mon Sep 17 00:00:00 2001 From: Hamza Masood <47217263+HamzaMasood1@users.noreply.github.com> Date: Mon, 9 Oct 2023 12:44:38 +0100 Subject: [PATCH 08/33] Update docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> --- docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index 3135e8d3c5..bec764efd3 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -228,6 +228,7 @@ With this approach, the following steps must be followed **before** the installa ``` kubectl patch pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' ``` + 3. Within both Elasticsearch master PVs, edit the `claimRef` to include the name of the new PVCs that will appear after the upgrade. For example: ``` claimRef: From 065814d99eaebaf3ea9815ea089701185358a1e5 Mon Sep 17 00:00:00 2001 From: Hamza Masood <47217263+HamzaMasood1@users.noreply.github.com> Date: Mon, 9 Oct 2023 12:47:41 +0100 Subject: [PATCH 09/33] Update docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> --- docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index bec764efd3..a286c4b48a 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -230,6 +230,7 @@ kubectl patch pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Re ``` 3. Within both Elasticsearch master PVs, edit the `claimRef` to include the name of the new PVCs that will appear after the upgrade. 
For example: + ``` claimRef: apiVersion: v1 From 27dae06ba9937cee4a478ce9110a5a7b11debbbd Mon Sep 17 00:00:00 2001 From: Hamza Masood <47217263+HamzaMasood1@users.noreply.github.com> Date: Mon, 9 Oct 2023 12:47:56 +0100 Subject: [PATCH 10/33] Update docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> --- docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index a286c4b48a..2f89ee9eea 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -238,6 +238,7 @@ claimRef: name: data--elasticsearch-master-0 namespace: ``` + 4. With the above steps completed, you can upgrade as normal. The newly generated PVCs should bind with the existing PVs 5. After a successful upgrade, you can now delete the old PVCs that are in a `Lost` state. From 078b6b296c4cd2a14be8bcad4708446e91efaedd Mon Sep 17 00:00:00 2001 From: HamzaMasood1 Date: Mon, 9 Oct 2023 13:13:52 +0100 Subject: [PATCH 11/33] init containers --- .../platform-deployment/helm-kubernetes/upgrade.md | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index 2f89ee9eea..16ac780069 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -242,6 +242,11 @@ claimRef: 4. With the above steps completed, you can upgrade as normal. The newly generated PVCs should bind with the existing PVs 5. After a successful upgrade, you can now delete the old PVCs that are in a `Lost` state. + +#### Init Containers + +Init Containers are now available for all components. The `extraInitContainers` value is now deprecated in favour of `initContainers`. + ### v8.2.9 #### Optimize From df5cfb6c08a3a1de03c39edeac70f0b170a3ba61 Mon Sep 17 00:00:00 2001 From: christinaausley <84338309+christinaausley@users.noreply.github.com> Date: Mon, 9 Oct 2023 06:18:10 -0600 Subject: [PATCH 12/33] Update docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> --- .../platform-deployment/helm-kubernetes/upgrade.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index 16ac780069..628c8f4733 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -239,7 +239,8 @@ claimRef: namespace: ``` -4. With the above steps completed, you can upgrade as normal. The newly generated PVCs should bind with the existing PVs + + 5. After a successful upgrade, you can now delete the old PVCs that are in a `Lost` state. 
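A minimal sketch of that cleanup, assuming two master replicas and the old default claim names (verify the actual names with `kubectl get pvc` first):

```shell
# Remove the superseded Elasticsearch 7 claims once the new PVCs are bound
kubectl delete pvc elasticsearch-master-elasticsearch-master-0 elasticsearch-master-elasticsearch-master-1
```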
From 95d73e20996e357099c2892993b300e3e19b1147 Mon Sep 17 00:00:00 2001 From: christinaausley <84338309+christinaausley@users.noreply.github.com> Date: Mon, 9 Oct 2023 06:18:18 -0600 Subject: [PATCH 13/33] Update docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> --- docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md | 1 - 1 file changed, 1 deletion(-) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index 628c8f4733..b262cbca99 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -243,7 +243,6 @@ claimRef: 5. After a successful upgrade, you can now delete the old PVCs that are in a `Lost` state. - #### Init Containers Init Containers are now available for all components. The `extraInitContainers` value is now deprecated in favour of `initContainers`. From 9484ef32e082d1347e78a54377fa96062726b9de Mon Sep 17 00:00:00 2001 From: christinaausley <84338309+christinaausley@users.noreply.github.com> Date: Mon, 9 Oct 2023 06:18:27 -0600 Subject: [PATCH 14/33] Update docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> --- .../self-managed/platform-deployment/helm-kubernetes/upgrade.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index b262cbca99..4fc61948e1 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -203,7 +203,7 @@ metadata: name: data-integration-elasticsearch-master-0 spec: accessModes: - - ReadWriteOnce + - ReadWriteOnce resources: requests: storage: 64Gi From 9d0539c9b385f00958d58cb287d1b2fe2adacea9 Mon Sep 17 00:00:00 2001 From: christinaausley <84338309+christinaausley@users.noreply.github.com> Date: Mon, 9 Oct 2023 06:18:57 -0600 Subject: [PATCH 15/33] Update docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> --- .../platform-deployment/helm-kubernetes/upgrade.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index 4fc61948e1..92cfbcffc1 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -185,7 +185,8 @@ This method will take advantage of the CSI Volume Cloning functionality from the Prerequisites: -1. Your kubernetes version needs to be greater than 1.20 + + 2. The CSI driver must be present on your cluster Clones are provisioned like any other PVC with the exception of adding a dataSource that references an existing PVC in the same namespace. 
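A quick, hedged way to check both prerequisites before attempting a clone (output formats vary by cluster and driver):

```shell
kubectl version          # the cluster should report v1.20 or newer
kubectl get csidrivers   # the CSI driver in use must be listed here
```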
From 7bbf2415daf0870e9d60d9c0f973fc6f824050f8 Mon Sep 17 00:00:00 2001 From: christinaausley <84338309+christinaausley@users.noreply.github.com> Date: Mon, 9 Oct 2023 06:19:06 -0600 Subject: [PATCH 16/33] Update docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> --- .../platform-deployment/helm-kubernetes/upgrade.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index 92cfbcffc1..ed81d5826e 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -232,7 +232,8 @@ kubectl patch pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Re 3. Within both Elasticsearch master PVs, edit the `claimRef` to include the name of the new PVCs that will appear after the upgrade. For example: -``` + + claimRef: apiVersion: v1 kind: PersistentVolumeClaim From 1a3d8a8c01a1d9f487450e6b0387f38eace72c65 Mon Sep 17 00:00:00 2001 From: Christina Ausley Date: Mon, 9 Oct 2023 08:20:18 -0400 Subject: [PATCH 17/33] prettier --- .../helm-kubernetes/upgrade.md | 17 +++++++---------- 1 file changed, 7 insertions(+), 10 deletions(-) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index ed81d5826e..dd2d4da89c 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -185,8 +185,6 @@ This method will take advantage of the CSI Volume Cloning functionality from the Prerequisites: - - 2. The CSI driver must be present on your cluster Clones are provisioned like any other PVC with the exception of adding a dataSource that references an existing PVC in the same namespace. @@ -232,14 +230,13 @@ kubectl patch pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Re 3. Within both Elasticsearch master PVs, edit the `claimRef` to include the name of the new PVCs that will appear after the upgrade. 
For example: - - claimRef: - apiVersion: v1 - kind: PersistentVolumeClaim - name: data--elasticsearch-master-0 - namespace: -``` +apiVersion: v1 +kind: PersistentVolumeClaim +name: data--elasticsearch-master-0 +namespace: + +```` @@ -295,7 +292,7 @@ First, generate the Connectors secret: helm template camunda/camunda-platform --version 8.2 \ --show-only charts/identity/templates/connectors-secret.yaml > identity-connectors-secret.yaml -``` +```` Then apply it: From 3358b91b8b984e10b0fc10cb7bd218d21ce7ba92 Mon Sep 17 00:00:00 2001 From: Ahmed AbouZaid <6760103+aabouzaid@users.noreply.github.com> Date: Mon, 9 Oct 2023 15:55:43 +0200 Subject: [PATCH 18/33] tidy up elasticsearch upgrade guide --- .../helm-kubernetes/upgrade.md | 143 ++++++++++-------- 1 file changed, 80 insertions(+), 63 deletions(-) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index dd2d4da89c..f8adcecedd 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -95,14 +95,20 @@ For more details on the Keycloak upgrade path, you can also read the [Bitnami Ke ### v8.3 -#### Zeebe - -:::caution Breaking change +:::caution Breaking Changes -Zeebe now runs as a non-root user by default. +- Elasticsearch upgraded from v7.x to v8.x. +- Keycloak upgraded from v19.x to v22.x. +- Zeebe now runs as a non-root user by default. ::: +#### Init Containers + +Init Containers are now available for all components. The `extraInitContainers` value is now deprecated in favor of `initContainers`. + +#### Zeebe + Using a non-root user by default is a security principle introduced in this version. However, because there is persistent storage in Zeebe, earlier versions may run into problems with existing file permissions not matching up with the file permissions assigned to the running user. There are two ways to fix this: 1. (Recommended) Change the `podSecurityContext fsGroup` to point to the UID of the running user. The default user in Zeebe has the UID 1000. `fsGroup` will modify the group permissions of all persistent volumes attached to that pod. @@ -133,63 +139,41 @@ zeebe: runAsUser: 0 ``` -#### Elasticsearch 8 Upgrade Guide - -Follow the official [upgrade guide](https://www.elastic.co/guide/en/elasticsearch/reference/8.10/setup-upgrade.html) for Elasticsearch and ensure you are not using any deprecated values when upgrading. +#### Elasticsearch -##### 1. Default values.yaml +##### Elasticsearch - Data retention -If you are using our default `values.yaml`, no change is required. Follow the upgrade steps as usual with the updated default `values.yaml`. - -##### 2. Custom values.yaml +The Elasticsearch 8 chart is using different PVC names, hence, it's required to migrate the old PVCs to the new names. Which could be done in two ways, automatic (requires certain K8s version and CSI driver), or manual (works with any Kubernetes setup). -If you have a custom `values.yaml`, change the image repository and tag: - -```yaml -image: - repository: bitnami/elasticsearch - tag: 8.10.2 -``` - -Setting the persistent volume size of the master nodes can’t be done using the `volumeClaimTemplate` anymore. It must be done using the master values: - -```yaml -master: - masterOnly: false - heapSize: 1024m - persistence: - size: 64Gi -``` +:::caution -Setting a `retentionPolicy` for Elasticsearch values can't be done anymore. 
You must set the `retentionPolicy` in the respective components instead. For example, here is an Elasticsearch `retentionPolicy` for the Tasklist component: +In call cases, the following steps must be executed **before** the installation. -```yaml -retention: - enabled: false - minimumAge: 30d -``` +::: -In the global section, you can modify the host to show to release-name as well: +###### Option One: CSI Volume Cloning -```yaml -host: "{{ .Release.Name }}-elasticsearch" -``` +This method will take advantage of the CSI Volume Cloning functionality from the CSI driver. -#### Elasticsearch 8 Data Retention Strategy +Prerequisites: -You may have noticed that new volumes have been created for Elasticsearch after upgrading to ES8. Your previous data still exists but is not currently being utilized. The following are various approaches you can use in order to utilize your previous data once again: +1. The Kubernetes cluster should be at least v1.20 +2. The CSI driver must be present on your cluster -##### First Option: CSI Volume Cloning +Clones are provisioned like any other PVC with a reference to an existing PVC in the same namespace. -This method will take advantage of the CSI Volume Cloning functionality from the CSI driver. +Before applying this manifest, ensure to scale the Elasticsearch replicas to 0. Also, +ensure that the `dataSource.name` matches the PVC that you would like to clone. -Prerequisites: +Here is an example YAML file for cloning the Elasticsearch PVC: -2. The CSI driver must be present on your cluster +First, stop Elasticsearch Pods: -Clones are provisioned like any other PVC with the exception of adding a dataSource that references an existing PVC in the same namespace. +```shell +kubectl scale statefulset elasticsearch-master --replicas=0 +``` -Here is an example yaml file for cloning the elasticsearch PVC: +Then, clone the PVC (this example for one PVC, usually you have two PVCs): ```yaml apiVersion: v1 @@ -211,40 +195,73 @@ spec: kind: PersistentVolumeClaim ``` -Before applying this manifest, please make sure to scale the elasticsearch replicas to 0. Also, -make sure that the `dataSource.name` matches the pvc that you would like to clone. - -Reference: https://kubernetes.io/docs/concepts/storage/volume-pvc-datasource/ +Reference: [Kubernetes - CSI Volume Cloning](https://kubernetes.io/docs/concepts/storage/volume-pvc-datasource/). -##### Second Option: Manual Approach +##### Option Two: Manual Approach -With this approach, the following steps must be followed **before** the installation: +This approach works with any Kubernetes cluster. -1. Take note of the PV name and ID for both elasticsearch master PVs +1. Take note of the PV name and ID for both Elasticsearch master PVs 2. Change the reclaim policy of the Elasticsearch PVs to `Retain`. You can run the following command to do so: -``` +```shell kubectl patch pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' ``` 3. Within both Elasticsearch master PVs, edit the `claimRef` to include the name of the new PVCs that will appear after the upgrade. For example: +```yaml claimRef: -apiVersion: v1 -kind: PersistentVolumeClaim -name: data--elasticsearch-master-0 -namespace: + apiVersion: v1 + kind: PersistentVolumeClaim + name: data--elasticsearch-master-0 + namespace: +``` -```` +5. After a successful upgrade, you can now delete the old PVCs that are in a `Lost` state. +##### Elasticsearch - Values File +Elasticsearch upgraded from v7.x to v8.x. 
Follow Elasticsearch official [upgrade guide](https://www.elastic.co/guide/en/elasticsearch/reference/8.10/setup-upgrade.html) to ensure you are not using any deprecated values when upgrading. -5. After a successful upgrade, you can now delete the old PVCs that are in a `Lost` state. +###### Case One: Default values.yaml -#### Init Containers +If you are using our default `values.yaml`, no change is required. Follow the upgrade steps as usual with the updated default `values.yaml`. + +###### Case Two: Custom values.yaml + +If you have a custom `values.yaml`, change the image repository and tag: + +```yaml +image: + repository: bitnami/elasticsearch + tag: 8.10.2 +``` + +Setting the persistent volume size of the master nodes can’t be done using the `volumeClaimTemplate` anymore. It must be done using the master values: + +```yaml +master: + masterOnly: false + heapSize: 1024m + persistence: + size: 64Gi +``` -Init Containers are now available for all components. The `extraInitContainers` value is now deprecated in favour of `initContainers`. +Setting a `retentionPolicy` for Elasticsearch values can't be done anymore. You must set the `retentionPolicy` in the respective components instead. For example, here is an Elasticsearch `retentionPolicy` for the Tasklist component: + +```yaml +retention: + enabled: false + minimumAge: 30d +``` + +In the global section, you can modify the host to show to release-name as well: + +```yaml +host: "{{ .Release.Name }}-elasticsearch" +``` ### v8.2.9 @@ -292,7 +309,7 @@ First, generate the Connectors secret: helm template camunda/camunda-platform --version 8.2 \ --show-only charts/identity/templates/connectors-secret.yaml > identity-connectors-secret.yaml -```` +``` Then apply it: From 1b968cd63ddc27b3aaddb89ceda12866e8efa95b Mon Sep 17 00:00:00 2001 From: Ahmed AbouZaid <6760103+aabouzaid@users.noreply.github.com> Date: Mon, 9 Oct 2023 18:32:18 +0200 Subject: [PATCH 19/33] tidy up es docs --- .../platform-deployment/helm-kubernetes/upgrade.md | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index f8adcecedd..c7a6102394 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -99,14 +99,10 @@ For more details on the Keycloak upgrade path, you can also read the [Bitnami Ke - Elasticsearch upgraded from v7.x to v8.x. - Keycloak upgraded from v19.x to v22.x. -- Zeebe now runs as a non-root user by default. +- Zeebe runs as a non-root user by default. ::: -#### Init Containers - -Init Containers are now available for all components. The `extraInitContainers` value is now deprecated in favor of `initContainers`. - #### Zeebe Using a non-root user by default is a security principle introduced in this version. However, because there is persistent storage in Zeebe, earlier versions may run into problems with existing file permissions not matching up with the file permissions assigned to the running user. There are two ways to fix this: @@ -147,7 +143,7 @@ The Elasticsearch 8 chart is using different PVC names, hence, it's required to :::caution -In call cases, the following steps must be executed **before** the installation. +In call cases, the following steps must be executed **before** the upgrade. 
::: From e264ddcf665a24b04959bb6776ea048be5fe2c0a Mon Sep 17 00:00:00 2001 From: Ahmed AbouZaid <6760103+aabouzaid@users.noreply.github.com> Date: Mon, 9 Oct 2023 18:33:50 +0200 Subject: [PATCH 20/33] tidy up es docs --- .../helm-kubernetes/upgrade.md | 66 +++++++++---------- 1 file changed, 33 insertions(+), 33 deletions(-) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index c7a6102394..07421daee5 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -93,7 +93,7 @@ For more details on the Keycloak upgrade path, you can also read the [Bitnami Ke ## Version update instructions -### v8.3 +### v8.3.0 :::caution Breaking Changes @@ -103,38 +103,6 @@ For more details on the Keycloak upgrade path, you can also read the [Bitnami Ke ::: -#### Zeebe - -Using a non-root user by default is a security principle introduced in this version. However, because there is persistent storage in Zeebe, earlier versions may run into problems with existing file permissions not matching up with the file permissions assigned to the running user. There are two ways to fix this: - -1. (Recommended) Change the `podSecurityContext fsGroup` to point to the UID of the running user. The default user in Zeebe has the UID 1000. `fsGroup` will modify the group permissions of all persistent volumes attached to that pod. - -```yaml -zeebe: - podSecurityContext: - fsGroup: 1000 -``` - -If you already modify the current running user, then the `fsGroup` needs to be changed to match the UID. - -```yaml -zeebe: - containerSecurityContext: - runAsUser: 1008 - podSecurityContext: - fsGroup: 1008 -``` - -Some storage classes may not support the `fsGroup` option. In this case, a possibility is to run a debug pod to chown the mounted volumes. - -2. If the recommended solution does not help, you may change the running user back to root. - -```yaml -zeebe: - containerSecurityContext: - runAsUser: 0 -``` - #### Elasticsearch ##### Elasticsearch - Data retention @@ -272,6 +240,38 @@ For Optimize 3.10.1, a new environment variable introduced redirection URL. Howe No action is needed if you use Optimize 3.10.3 (shipped with this Helm chart version by default), but this Optimize version cannot be used out of the box with previous Helm chart versions. +#### Zeebe + +Using a non-root user by default is a security principle introduced in this version. However, because there is persistent storage in Zeebe, earlier versions may run into problems with existing file permissions not matching up with the file permissions assigned to the running user. There are two ways to fix this: + +1. (Recommended) Change the `podSecurityContext fsGroup` to point to the UID of the running user. The default user in Zeebe has the UID 1000. `fsGroup` will modify the group permissions of all persistent volumes attached to that pod. + +```yaml +zeebe: + podSecurityContext: + fsGroup: 1000 +``` + +If you already modify the current running user, then the `fsGroup` needs to be changed to match the UID. + +```yaml +zeebe: + containerSecurityContext: + runAsUser: 1008 + podSecurityContext: + fsGroup: 1008 +``` + +Some storage classes may not support the `fsGroup` option. In this case, a possibility is to run a debug pod to chown the mounted volumes. + +2. If the recommended solution does not help, you may change the running user back to root. 
+ +```yaml +zeebe: + containerSecurityContext: + runAsUser: 0 +``` + ### v8.2.3 #### Zeebe Gateway From dce635843520d33a8fe53a4d6de2e3333b10fe3a Mon Sep 17 00:00:00 2001 From: Ahmed AbouZaid <6760103+aabouzaid@users.noreply.github.com> Date: Mon, 9 Oct 2023 18:38:25 +0200 Subject: [PATCH 21/33] add keycloak and web-modeler sections --- .../helm-kubernetes/upgrade.md | 34 +++++++++++-------- 1 file changed, 20 insertions(+), 14 deletions(-) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index 07421daee5..8f9b92a6a7 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -93,7 +93,7 @@ For more details on the Keycloak upgrade path, you can also read the [Bitnami Ke ## Version update instructions -### v8.3.0 +### v8.3.0 (Minor) :::caution Breaking Changes @@ -227,18 +227,7 @@ In the global section, you can modify the host to show to release-name as well: host: "{{ .Release.Name }}-elasticsearch" ``` -### v8.2.9 - -#### Optimize - -For Optimize 3.10.1, a new environment variable introduced redirection URL. However, the change is not compatible with Camunda Helm charts until it is fixed in 3.10.3 (and Helm chart 8.2.9). Therefore, those versions are coupled to certain Camunda Helm chart versions: - -| Optimize version | Camunda Helm chart version | -| --------------------------------- | -------------------------- | -| Optimize 3.10.1 & Optimize 3.10.2 | 8.2.0 - 8.2.8 | -| Optimize 3.10.3 | 8.2.9+ | - -No action is needed if you use Optimize 3.10.3 (shipped with this Helm chart version by default), but this Optimize version cannot be used out of the box with previous Helm chart versions. +#### Keycloak #### Zeebe @@ -272,6 +261,23 @@ zeebe: runAsUser: 0 ``` +#### Web-Modeler + +TBA + +### v8.2.9 + +#### Optimize + +For Optimize 3.10.1, a new environment variable introduced redirection URL. However, the change is not compatible with Camunda Helm charts until it is fixed in 3.10.3 (and Helm chart 8.2.9). Therefore, those versions are coupled to certain Camunda Helm chart versions: + +| Optimize version | Camunda Helm chart version | +| --------------------------------- | -------------------------- | +| Optimize 3.10.1 & Optimize 3.10.2 | 8.2.0 - 8.2.8 | +| Optimize 3.10.3 | 8.2.9+ | + +No action is needed if you use Optimize 3.10.3 (shipped with this Helm chart version by default), but this Optimize version cannot be used out of the box with previous Helm chart versions. + ### v8.2.3 #### Zeebe Gateway @@ -291,7 +297,7 @@ To authenticate: - [Desktop Modeler](/docs/components/modeler/desktop-modeler/connect-to-camunda-8.md). - [Zeebe client (zbctl)](/docs/self-managed/zeebe-deployment/security/secure-client-communication/#zbctl). 
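As an illustrative sketch for the `zbctl` option, the client can authenticate through its standard OAuth environment variables; the client ID, secret, and URLs below are placeholders to adapt to your own setup:

```shell
export ZEEBE_CLIENT_ID='zeebe'                             # placeholder; use the real client ID from Identity
export ZEEBE_CLIENT_SECRET='<client-secret-from-identity>' # placeholder secret
export ZEEBE_AUTHORIZATION_SERVER_URL='http://localhost:18080/auth/realms/camunda-platform/protocol/openid-connect/token'
zbctl status --address localhost:26500 --insecure          # e.g. against a port-forwarded gateway
```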
-### v8.2 +### v8.2.0 (Minor) #### Connectors From 00448704db7a274e60bd4feac41d008e4485273a Mon Sep 17 00:00:00 2001 From: Ahmed AbouZaid <6760103+aabouzaid@users.noreply.github.com> Date: Mon, 9 Oct 2023 18:46:30 +0200 Subject: [PATCH 22/33] add Web-Modeler changes --- .../helm-kubernetes/upgrade.md | 22 ++++++++++++++++++- 1 file changed, 21 insertions(+), 1 deletion(-) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index 8f9b92a6a7..4b8e1a2a8b 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -263,7 +263,27 @@ zeebe: #### Web-Modeler -TBA +The configuration format of external database has been changed in Web-Modeler from `host`, `port`, `database` to `JDBC URL`. + +The old format: + +```yaml +webModeler: + restapi: + externalDatabase: + host: web-modeler-postgres-ext + port: 5432 + database: rest-api-db +``` + +The new format: + +```yaml +webModeler: + restapi: + externalDatabase: + url: "jdbc:postgresql://web-modeler-postgres-ext:5432/rest-api-db" +``` ### v8.2.9 From 0bae396ed60b603e405bcdac1a5a88a0664817e8 Mon Sep 17 00:00:00 2001 From: Ahmed AbouZaid <6760103+aabouzaid@users.noreply.github.com> Date: Mon, 9 Oct 2023 18:51:36 +0200 Subject: [PATCH 23/33] add Optimize migration initContainer --- .../platform-deployment/helm-kubernetes/upgrade.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index 4b8e1a2a8b..0051810e91 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -285,6 +285,10 @@ webModeler: url: "jdbc:postgresql://web-modeler-postgres-ext:5432/rest-api-db" ``` +#### Optimize + +A predefined initContainer added for automatic migration. + ### v8.2.9 #### Optimize From c4e38d84e98981fb4014ae97084088ea1d5a038a Mon Sep 17 00:00:00 2001 From: Ahmed AbouZaid <6760103+aabouzaid@users.noreply.github.com> Date: Mon, 9 Oct 2023 19:00:03 +0200 Subject: [PATCH 24/33] add Keycloak section --- .../self-managed/platform-deployment/helm-kubernetes/upgrade.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index 0051810e91..3f4ffe528a 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -229,6 +229,8 @@ host: "{{ .Release.Name }}-elasticsearch" #### Keycloak +Keycloak upgraded from v19.x to v22.x which is the latest version at the time of writing. Even though there is no breaking change found, the upgrade should be handled carefully because the Keycloak major version upgrade. Ensure to back-up Keycloak database before the upgrade. + #### Zeebe Using a non-root user by default is a security principle introduced in this version. However, because there is persistent storage in Zeebe, earlier versions may run into problems with existing file permissions not matching up with the file permissions assigned to the running user. 
There are two ways to fix this: From 57c9e66732070c40b52f89c26d682e40dd4134dd Mon Sep 17 00:00:00 2001 From: Ahmed AbouZaid <6760103+aabouzaid@users.noreply.github.com> Date: Mon, 9 Oct 2023 19:07:22 +0200 Subject: [PATCH 25/33] tidy up --- .../helm-kubernetes/upgrade.md | 20 ++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index 3f4ffe528a..0a2e824197 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -105,6 +105,8 @@ For more details on the Keycloak upgrade path, you can also read the [Bitnami Ke #### Elasticsearch +Elasticsearch upgraded from v7.x to v8.x. Follow Elasticsearch official [upgrade guide](https://www.elastic.co/guide/en/elasticsearch/reference/8.10/setup-upgrade.html) to ensure you are not using any deprecated values when upgrading. + ##### Elasticsearch - Data retention The Elasticsearch 8 chart is using different PVC names, hence, it's required to migrate the old PVCs to the new names. Which could be done in two ways, automatic (requires certain K8s version and CSI driver), or manual (works with any Kubernetes setup). @@ -115,7 +117,7 @@ In call cases, the following steps must be executed **before** the upgrade. ::: -###### Option One: CSI Volume Cloning +**Option One:** CSI Volume Cloning This method will take advantage of the CSI Volume Cloning functionality from the CSI driver. @@ -161,7 +163,7 @@ spec: Reference: [Kubernetes - CSI Volume Cloning](https://kubernetes.io/docs/concepts/storage/volume-pvc-datasource/). -##### Option Two: Manual Approach +**Option Two**: Manual Approach This approach works with any Kubernetes cluster. @@ -187,23 +189,23 @@ claimRef: ##### Elasticsearch - Values File -Elasticsearch upgraded from v7.x to v8.x. Follow Elasticsearch official [upgrade guide](https://www.elastic.co/guide/en/elasticsearch/reference/8.10/setup-upgrade.html) to ensure you are not using any deprecated values when upgrading. +The syntax of the chart values file has been changed due to the upgrade. There are two cases based if you use the default values or custom values. -###### Case One: Default values.yaml +**Case One:** Default values.yaml If you are using our default `values.yaml`, no change is required. Follow the upgrade steps as usual with the updated default `values.yaml`. -###### Case Two: Custom values.yaml +**Case Two:** Custom values.yaml If you have a custom `values.yaml`, change the image repository and tag: ```yaml image: repository: bitnami/elasticsearch - tag: 8.10.2 + tag: 8.6.2 ``` -Setting the persistent volume size of the master nodes can’t be done using the `volumeClaimTemplate` anymore. It must be done using the master values: +Setting the persistent volume size of the master nodes can't be done using the `volumeClaimTemplate` anymore. It must be done using the master values: ```yaml master: @@ -213,7 +215,7 @@ master: size: 64Gi ``` -Setting a `retentionPolicy` for Elasticsearch values can't be done anymore. You must set the `retentionPolicy` in the respective components instead. For example, here is an Elasticsearch `retentionPolicy` for the Tasklist component: +Setting a `retentionPolicy` for Elasticsearch values can't be done anymore. The `retentionPolicy` should be used in the respective components instead. 
For example, here is an Elasticsearch `retentionPolicy` for the Tasklist component: ```yaml retention: @@ -221,7 +223,7 @@ retention: minimumAge: 30d ``` -In the global section, you can modify the host to show to release-name as well: +In the global section, the host to show to release-name should be changed as well: ```yaml host: "{{ .Release.Name }}-elasticsearch" From 13ca626bde61c5c567f3c2c8a9a02ccf54ed3634 Mon Sep 17 00:00:00 2001 From: Ahmed AbouZaid <6760103+aabouzaid@users.noreply.github.com> Date: Mon, 9 Oct 2023 19:12:11 +0200 Subject: [PATCH 26/33] rewrod --- .../platform-deployment/helm-kubernetes/upgrade.md | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index 0a2e824197..7835ea68bb 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -237,7 +237,9 @@ Keycloak upgraded from v19.x to v22.x which is the latest version at the time of Using a non-root user by default is a security principle introduced in this version. However, because there is persistent storage in Zeebe, earlier versions may run into problems with existing file permissions not matching up with the file permissions assigned to the running user. There are two ways to fix this: -1. (Recommended) Change the `podSecurityContext fsGroup` to point to the UID of the running user. The default user in Zeebe has the UID 1000. `fsGroup` will modify the group permissions of all persistent volumes attached to that pod. +**Option One:** Use Zeebe user ID (Recommended) + +Change `podSecurityContext.fsGroup` to point to the UID of the running user. The default user in Zeebe has the UID `1000`. That will modify the group permissions of all persistent volumes attached to that Pod. ```yaml zeebe: @@ -255,9 +257,11 @@ zeebe: fsGroup: 1008 ``` -Some storage classes may not support the `fsGroup` option. In this case, a possibility is to run a debug pod to chown the mounted volumes. +Some storage classes may not support the `fsGroup` option. In this case, a possibility is to run a debug Pod to chown the mounted volumes. + +**Option Two:** Use root user ID -2. If the recommended solution does not help, you may change the running user back to root. +If the recommended solution does not help, you may change the running user back to root. ```yaml zeebe: From 240fb0ea17cf0b018f696d7198b611df88173350 Mon Sep 17 00:00:00 2001 From: Ahmed AbouZaid <6760103+aabouzaid@users.noreply.github.com> Date: Mon, 9 Oct 2023 19:24:28 +0200 Subject: [PATCH 27/33] enhance es steps --- .../helm-kubernetes/upgrade.md | 107 ++++++++++-------- 1 file changed, 60 insertions(+), 47 deletions(-) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index 7835ea68bb..37afdc8a37 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -107,6 +107,48 @@ For more details on the Keycloak upgrade path, you can also read the [Bitnami Ke Elasticsearch upgraded from v7.x to v8.x. Follow Elasticsearch official [upgrade guide](https://www.elastic.co/guide/en/elasticsearch/reference/8.10/setup-upgrade.html) to ensure you are not using any deprecated values when upgrading. 
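One inexpensive sanity check, assuming the `camunda/camunda-platform` chart name used elsewhere in this guide and a release called `camunda` (both placeholders), is to render the upgrade without applying it:

```shell
# Render the new chart version against your values file; nothing is changed in the cluster
helm upgrade camunda camunda/camunda-platform --version 8.3 --values values.yaml --dry-run
```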

##### Elasticsearch - Values File

The syntax of the chart values file has changed with the upgrade. There are two cases, based on whether you use the default values or custom values.

**Case One:** Default values.yaml

If you are using our default `values.yaml`, no change is required. Follow the upgrade steps as usual with the updated default `values.yaml`.

**Case Two:** Custom values.yaml

If you have a custom `values.yaml`, change the image repository and tag:

```yaml
image:
  repository: bitnami/elasticsearch
  tag: 8.6.2
```

Setting the persistent volume size of the master nodes can't be done using the `volumeClaimTemplate` anymore. 
It must be done using the master values:

```yaml
master:
  masterOnly: false
  heapSize: 1024m
  persistence:
    size: 64Gi
```

Setting a `retentionPolicy` for Elasticsearch values can't be done anymore. The `retentionPolicy` should be used in the respective components instead. For example, here is an Elasticsearch `retentionPolicy` for the Tasklist component:

```yaml
retention:
  enabled: false
  minimumAge: 30d
```

In the global section, the host should also be changed to reference the release name:

```yaml
host: "{{ .Release.Name }}-elasticsearch"
```

##### Elasticsearch - Data retention

The Elasticsearch 8 chart uses different PVC names, so the old PVCs must be migrated to the new names. This can be done in two ways: automatically (requires a certain Kubernetes version and a CSI driver) or manually (works with any Kubernetes setup).

:::caution

In all cases, the following steps must be executed **before** the upgrade.

:::

**Option One:** CSI Volume Cloning

This method takes advantage of the CSI Volume Cloning functionality from the CSI driver.

Prerequisites:

1. The Kubernetes cluster should be at least v1.20
2. The CSI driver must be present on your cluster

Clones are provisioned like any other PVC with a reference to an existing PVC in the same namespace.

Before applying this manifest, ensure to scale the Elasticsearch replicas to 0. Also,
ensure that the `dataSource.name` matches the PVC that you would like to clone.

Here is an example YAML file for cloning the Elasticsearch PVC.

First, stop Elasticsearch Pods:

```shell
kubectl scale statefulset elasticsearch-master --replicas=0
```

Then, clone the PVC (this example covers one PVC; usually there are two):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/component: master
    app.kubernetes.io/instance: integration
    app.kubernetes.io/name: elasticsearch
  name: data-integration-elasticsearch-master-0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 64Gi
  dataSource:
    name: elasticsearch-master-elasticsearch-master-0
    kind: PersistentVolumeClaim
```

Reference: [Kubernetes - CSI Volume Cloning](https://kubernetes.io/docs/concepts/storage/volume-pvc-datasource/).

**Option Two:** Update PV Manually

This approach works with any Kubernetes cluster.

1. Get the name of the PV for both Elasticsearch master PVs.
2. Change the reclaim policy of the Elasticsearch PVs to `Retain`.

First, get the PV from the PVC:

```shell
ES_PV_NAME0="$(kubectl get pvc elasticsearch-master-elasticsearch-master-0 -o jsonpath='{.spec.volumeName}')"
```

Then, change the Reclaim Policy:

```shell
kubectl patch pv "${ES_PV_NAME0}" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```

Finally, verify that the Reclaim Policy has been changed:

```shell
kubectl get pv "${ES_PV_NAME0}" | grep Retain || echo '[ERROR] Reclaim Policy is not Retain!'
```

Within both Elasticsearch master PVs, edit the `claimRef` to include the name of the new PVCs that will appear after the upgrade. For example:

```yaml
claimRef:
  apiVersion: v1
  kind: PersistentVolumeClaim
  name: data-<release-name>-elasticsearch-master-0
  namespace: <namespace>
```

With the `claimRef` updated, proceed with the upgrade. After a successful upgrade, you can delete the old PVCs, which will be in a `Lost` state.

#### Keycloak

From d85cc43cd34f7d16295cdfeeb63b2d3535f740f1 Mon Sep 17 00:00:00 2001 From: Ahmed AbouZaid <6760103+aabouzaid@users.noreply.github.com> Date: Mon, 9 Oct 2023 20:14:02 +0200 Subject: [PATCH 28/33] add notes about Keycloak PostgreSQL --- .../platform-deployment/helm-kubernetes/upgrade.md | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md index 37afdc8a37..5e80a62645 100644 --- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md +++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md @@ -246,6 +246,15 @@

#### Keycloak

Keycloak upgraded from v19.x to v22.x, which is the latest version at the time of writing. Even though no breaking change was found, the upgrade should be handled carefully because of the Keycloak major version upgrade. Make sure to back up the Keycloak database before the upgrade.

It is worth mentioning that the Keycloak PostgreSQL chart shows some warnings which are irrelevant and safe to ignore. That false positive issue has been reported, and it should be fixed in the next releases of the upstream PostgreSQL Helm chart.

```
coalesce.go:289: warning: destination for keycloak.postgresql.networkPolicy.egressRules.customRules is a table. Ignoring non-table value ([])
coalesce.go:289: warning: destination for keycloak.postgresql.networkPolicy.ingressRules.readReplicasAccessOnlyFrom.customRules is a table. Ignoring non-table value ([])
coalesce.go:289: warning: destination for keycloak.postgresql.networkPolicy.ingressRules.primaryAccessOnlyFrom.customRules is a table. Ignoring non-table value ([])
false
```

#### Zeebe

Using a non-root user by default is a security principle introduced in this version. However, because there is persistent storage in Zeebe, earlier versions may run into problems with existing file permissions not matching up with the file permissions assigned to the running user. 
From c9c41996ace7bb8170242f5813584a171217b13d Mon Sep 17 00:00:00 2001
From: Ahmed AbouZaid <6760103+aabouzaid@users.noreply.github.com>
Date: Mon, 9 Oct 2023 20:19:50 +0200
Subject: [PATCH 29/33] update operational guides

---
 .../operational-guides/update-guide/820-to-830.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/docs/self-managed/operational-guides/update-guide/820-to-830.md b/docs/self-managed/operational-guides/update-guide/820-to-830.md
index 5a0934dc23..be16ed0d46 100644
--- a/docs/self-managed/operational-guides/update-guide/820-to-830.md
+++ b/docs/self-managed/operational-guides/update-guide/820-to-830.md
@@ -8,6 +8,10 @@ description: "Review which adjustments must be made to migrate from Camunda 8.2.

 The following sections explain which adjustments must be made to migrate from Camunda 8.2.x to 8.3.0 for each component.

+## Helm chart - Breaking Changes
+
+For more details about the breaking changes in the Helm chart, check the [upgrade page for v8.3.0](../../platform-deployment/helm-kubernetes/upgrade.md#v830-minor).
+
 ## Zeebe - Breaking Changes

 ### Zeebe Docker image now runs with unprivileged user by default
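Once the breaking changes linked above are reviewed, the upgrade these guides prepare for is typically a single Helm invocation. A sketch, with `camunda` standing in for your release and namespace and `values.yaml` for your own values file:

```shell
# Sketch: upgrade an existing release to chart v8.3.0 after addressing breaking changes.
# Assumes the Camunda repo was added: helm repo add camunda https://helm.camunda.io
# Release name, namespace, and values file are placeholders for your own.
helm repo update
helm upgrade camunda camunda/camunda-platform \
  --namespace camunda \
  --version 8.3.0 \
  --values values.yaml
```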
From 6c9ed4010c3e0686e9dd1c5c122d909e39498a9a Mon Sep 17 00:00:00 2001
From: Ahmed AbouZaid <6760103+aabouzaid@users.noreply.github.com>
Date: Mon, 9 Oct 2023 20:27:18 +0200
Subject: [PATCH 30/33] link to helm chart repo

---
 docs/reference/release-notes.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/reference/release-notes.md b/docs/reference/release-notes.md
index 787d689f60..c739a4dfd2 100644
--- a/docs/reference/release-notes.md
+++ b/docs/reference/release-notes.md
@@ -6,7 +6,7 @@ description: "Release notes for Camunda 8 and its components."

 Release notes for Camunda 8, including alphas, are available on [GitHub](https://github.com/camunda/camunda-platform/releases). This includes release assets and release notes for Zeebe, Operate, Tasklist, and Identity.

-The current release notes can be found [here](https://github.com/camunda/camunda-platform/releases/latest).
+The current release notes can be found on [Camunda repository](https://github.com/camunda/camunda-platform/releases/latest) and [Camunda Helm repository](https://github.com/camunda/camunda-platform-helm/releases/latest).

 [Update guides](/self-managed/operational-guides/update-guide/introduction.md) include links to both release notes and release blogs.

From 12c248a153577c6f9feec7805fba6ba8479dd5b6 Mon Sep 17 00:00:00 2001
From: Ahmed AbouZaid <6760103+aabouzaid@users.noreply.github.com>
Date: Mon, 9 Oct 2023 20:33:25 +0200
Subject: [PATCH 31/33] link to change log for more details

---
 .../platform-deployment/helm-kubernetes/upgrade.md | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md
index 5e80a62645..e6c06f8a74 100644
--- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md
+++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md
@@ -103,6 +103,8 @@ For more details on the Keycloak upgrade path, you can also read the [Bitnami Ke

 :::

+For full change log, view Camunda Helm chart [v8.3.0 release notes](https://github.com/camunda/camunda-platform-helm/releases/tag/camunda-platform-8.3.0).
+
 #### Elasticsearch

 Elasticsearch upgraded from v7.x to v8.x. Follow Elasticsearch official [upgrade guide](https://www.elastic.co/guide/en/elasticsearch/reference/8.10/setup-upgrade.html) to ensure you are not using any deprecated values when upgrading.
@@ -315,10 +317,6 @@ webModeler:
     url: "jdbc:postgresql://web-modeler-postgres-ext:5432/rest-api-db"
 ```

-#### Optimize
-
-A predefined initContainer added for automatic migration.
-
 ### v8.2.9

 #### Optimize

From 2845c634ade87fb6fec775f02412321a58859a1 Mon Sep 17 00:00:00 2001
From: Ahmed AbouZaid <6760103+aabouzaid@users.noreply.github.com>
Date: Mon, 9 Oct 2023 20:34:00 +0200
Subject: [PATCH 32/33] link to change log for more details

---
 .../platform-deployment/helm-kubernetes/upgrade.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md
index e6c06f8a74..1e5700a431 100644
--- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md
+++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md
@@ -95,6 +95,8 @@ For more details on the Keycloak upgrade path, you can also read the [Bitnami Ke

 ### v8.3.0 (Minor)

+For full change log, view Camunda Helm chart [v8.3.0 release notes](https://github.com/camunda/camunda-platform-helm/releases/tag/camunda-platform-8.3.0).
+
 :::caution Breaking Changes

 - Elasticsearch upgraded from v7.x to v8.x.
@@ -103,8 +105,6 @@ For more details on the Keycloak upgrade path, you can also read the [Bitnami Ke

 :::

-For full change log, view Camunda Helm chart [v8.3.0 release notes](https://github.com/camunda/camunda-platform-helm/releases/tag/camunda-platform-8.3.0).
-
 #### Elasticsearch

 Elasticsearch upgraded from v7.x to v8.x. Follow Elasticsearch official [upgrade guide](https://www.elastic.co/guide/en/elasticsearch/reference/8.10/setup-upgrade.html) to ensure you are not using any deprecated values when upgrading.

From a87783a6f9b4c51ca7c73dedf7761793514bed3e Mon Sep 17 00:00:00 2001
From: Christina Ausley <christina.ausley@camunda.com>
Date: Mon, 9 Oct 2023 14:55:12 -0400
Subject: [PATCH 33/33] style(formatting): technical review

---
 docs/reference/release-notes.md | 2 +-
 .../helm-kubernetes/upgrade.md | 45 ++++++++++---------
 2 files changed, 24 insertions(+), 23 deletions(-)

diff --git a/docs/reference/release-notes.md b/docs/reference/release-notes.md
index c739a4dfd2..2f30cce23d 100644
--- a/docs/reference/release-notes.md
+++ b/docs/reference/release-notes.md
@@ -6,7 +6,7 @@ description: "Release notes for Camunda 8 and its components."

 Release notes for Camunda 8, including alphas, are available on [GitHub](https://github.com/camunda/camunda-platform/releases). This includes release assets and release notes for Zeebe, Operate, Tasklist, and Identity.

-The current release notes can be found on [Camunda repository](https://github.com/camunda/camunda-platform/releases/latest) and [Camunda Helm repository](https://github.com/camunda/camunda-platform-helm/releases/latest).
+The current release notes can be found on the [Camunda repository](https://github.com/camunda/camunda-platform/releases/latest) and [Camunda Helm repository](https://github.com/camunda/camunda-platform-helm/releases/latest).

 [Update guides](/self-managed/operational-guides/update-guide/introduction.md) include links to both release notes and release blogs.
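The release pages referenced in this diff can also be cross-checked from the terminal before upgrading. A small sketch using the GitHub CLI, assuming `gh` is installed and authenticated:

```shell
# Sketch: list recent Camunda Helm chart releases and open the latest platform
# release notes in a browser. Requires the GitHub CLI (gh).
gh release list --repo camunda/camunda-platform-helm --limit 5
gh release view --repo camunda/camunda-platform --web
```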
diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md
index 1e5700a431..2d259ff4c7 100644
--- a/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md
+++ b/docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md
@@ -93,9 +93,9 @@ For more details on the Keycloak upgrade path, you can also read the [Bitnami Ke

 ## Version update instructions

-### v8.3.0 (Minor)
+### v8.3.0 (minor)

-For full change log, view Camunda Helm chart [v8.3.0 release notes](https://github.com/camunda/camunda-platform-helm/releases/tag/camunda-platform-8.3.0).
+For the full change log, view the Camunda Helm chart [v8.3.0 release notes](https://github.com/camunda/camunda-platform-helm/releases/tag/camunda-platform-8.3.0).

 :::caution Breaking Changes

@@ -107,11 +107,11 @@ For full change log, view Camunda Helm chart [v8.3.0 release notes](https://gith

 #### Elasticsearch

-Elasticsearch upgraded from v7.x to v8.x. Follow Elasticsearch official [upgrade guide](https://www.elastic.co/guide/en/elasticsearch/reference/8.10/setup-upgrade.html) to ensure you are not using any deprecated values when upgrading.
+Elasticsearch upgraded from v7.x to v8.x. Follow the Elasticsearch official [upgrade guide](https://www.elastic.co/guide/en/elasticsearch/reference/8.10/setup-upgrade.html) to ensure you are not using any deprecated values when upgrading.

-##### Elasticsearch - Values File
+##### Elasticsearch - values file

-The syntax of the chart values file has been changed due to the upgrade. There are two cases based if you use the default values or custom values.
+The syntax of the chart values file has been changed due to the upgrade. There are two cases, based on whether you use the default values or custom values.

 **Case One:** Default values.yaml

@@ -153,7 +153,7 @@ host: "{{ .Release.Name }}-elasticsearch"

 ##### Elasticsearch - Data retention

-The Elasticsearch 8 chart is using different PVC names, hence, it's required to migrate the old PVCs to the new names. Which could be done in two ways, automatic (requires certain K8s version and CSI driver), or manual (works with any Kubernetes setup).
+The Elasticsearch 8 chart uses different PVC names. Therefore, the old PVCs must be migrated to the new names, which can be done in two ways: automatically (requires a certain Kubernetes version and CSI driver) or manually (works with any Kubernetes setup).

 :::caution

@@ -161,29 +161,28 @@ In all cases, the following steps must be executed **before** the upgrade.

 :::

-**Option One:** CSI Volume Cloning
+**Option One:** CSI volume cloning

-This method will take advantage of the CSI Volume Cloning functionality from the CSI driver.
+This method will take advantage of the CSI volume cloning functionality from the CSI driver.

 Prerequisites:

-1. The Kubernetes cluster should be at least v1.20
-2. The CSI driver must be present on your cluster
+1. The Kubernetes cluster should be at least v1.20.
+2. The CSI driver must be present on your cluster.

 Clones are provisioned like any other PVC with a reference to an existing PVC in the same namespace.

-Before applying this manifest, ensure to scale the Elasticsearch replicas to 0. Also, ensure that the `dataSource.name` matches the PVC that you would like to clone.
+Before applying this manifest, ensure you scale the Elasticsearch replicas to 0. Also, ensure the `dataSource.name` matches the PVC that you would like to clone.
Here is an example YAML file for cloning the Elasticsearch PVC: -First, stop Elasticsearch Pods: +First, stop Elasticsearch pods: ```shell kubectl scale statefulset elasticsearch-master --replicas=0 ``` -Then, clone the PVC (this example for one PVC, usually you have two PVCs): +Then, clone the PVC (this example is for one PVC, usually you have two PVCs): ```yaml apiVersion: v1 @@ -205,13 +204,13 @@ spec: kind: PersistentVolumeClaim ``` -Reference: [Kubernetes - CSI Volume Cloning](https://kubernetes.io/docs/concepts/storage/volume-pvc-datasource/). +For reference, visit [Kubernetes - CSI Volume Cloning](https://kubernetes.io/docs/concepts/storage/volume-pvc-datasource/). -**Option Two**: Update PV Manually +**Option Two**: Update PV manually This approach works with any Kubernetes cluster. -1. Get the name of PV for both Elasticsearch master PVs +1. Get the name of PV for both Elasticsearch master PVs. 2. Change the reclaim policy of the Elasticsearch PVs to `Retain`. First, get the PV from PVC: @@ -226,7 +225,7 @@ Then, change the Reclaim Policy: kubectl patch pv "${ES_PV_NAME0}" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' ``` -Finally, verfify that the Reclaim Policy has been changed: +Finally, verify the Reclaim Policy has been changed: ```shell kubectl get pv "${ES_PV_NAME0}" | grep Retain || echo '[ERROR] Reclaim Policy is not Retain!' @@ -246,9 +245,11 @@ After a successful upgrade, you can now delete the old PVCs that are in a `Lost` #### Keycloak -Keycloak upgraded from v19.x to v22.x which is the latest version at the time of writing. Even though there is no breaking change found, the upgrade should be handled carefully because the Keycloak major version upgrade. Ensure to back-up Keycloak database before the upgrade. +Keycloak upgraded from v19.x to v22.x, which is the latest version at the time of writing. Even though there is no breaking change found, the upgrade should be handled carefully because of the Keycloak major version upgrade. Ensure you back up the Keycloak database before the upgrade. -It is worth mentioning that the Keycloak PostgreSQL chart shows some warnings which are irrelative and safe to ignore. That false positive issue has been reported, and it should be fixed in the next releases of the upstream PostgreSQL Helm chart. +:::note +The Keycloak PostgreSQL chart shows some warnings which are safe to ignore. That false positive issue has been reported, and it should be fixed in the next releases of the upstream PostgreSQL Helm chart. +::: ``` coalesce.go:289: warning: destination for keycloak.postgresql.networkPolicy.egressRules.customRules is a table. Ignoring non-table value ([]) @@ -261,7 +262,7 @@ false Using a non-root user by default is a security principle introduced in this version. However, because there is persistent storage in Zeebe, earlier versions may run into problems with existing file permissions not matching up with the file permissions assigned to the running user. There are two ways to fix this: -**Option One:** Use Zeebe user ID (Recommended) +**Option One:** Use Zeebe user ID (recommended) Change `podSecurityContext.fsGroup` to point to the UID of the running user. The default user in Zeebe has the UID `1000`. That will modify the group permissions of all persistent volumes attached to that Pod. @@ -295,7 +296,7 @@ zeebe: #### Web-Modeler -The configuration format of external database has been changed in Web-Modeler from `host`, `port`, `database` to `JDBC URL`. 
+The configuration format of external database has been changed in Web Modeler from `host`, `port`, `database` to `JDBC URL`. The old format: