es8 data retention guide

hamza-m-masood committed Oct 9, 2023
1 parent 697ad72 commit ae3f623
Showing 1 changed file with 64 additions and 3 deletions: docs/self-managed/platform-deployment/helm-kubernetes/upgrade.md
@@ -133,15 +133,15 @@ zeebe:
runAsUser: 0
```

#### Elasticsearch 8 Upgrade Guide

Follow the official [upgrade guide](https://www.elastic.co/guide/en/elasticsearch/reference/8.10/setup-upgrade.html) for Elasticsearch and ensure you are not using any deprecated values when upgrading.

##### 1. Default values.yaml

If you are using our default `values.yaml`, no change is required. Follow the upgrade steps as usual with the updated default `values.yaml`.

##### 2. Custom values.yaml

If you have a custom `values.yaml`, change the image repository and tag:

@@ -175,6 +175,67 @@ In the global section, you can also modify the host to reference the release name:
host: "{{ .Release.Name }}-elasticsearch"
```
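As an illustration of the repository and tag change mentioned above, a custom `values.yaml` might contain something like the following. The repository and tag shown here are assumptions for illustration only; use the exact image coordinates from the chart's updated default `values.yaml`:

```yaml
# Hypothetical excerpt of a custom values.yaml.
# Repository and tag are examples, not authoritative values --
# copy them from the chart's updated default values.yaml.
elasticsearch:
  image:
    repository: bitnami/elasticsearch
    tag: 8.8.2
```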

#### Elasticsearch 8 Data Retention Strategy

You may have noticed that new volumes were created for Elasticsearch after upgrading to ES8. Your previous data still exists, but it is no longer in use. The following approaches let you bring that data back into use:

##### First Option: CSI Volume Cloning

This method takes advantage of the volume cloning functionality provided by the CSI driver.

Prerequisites:
1. Your Kubernetes version must be 1.20 or later.
2. A CSI driver that supports volume cloning must be present on your cluster.
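These prerequisites can be checked with standard kubectl commands; the output format varies by cluster and kubectl version:

```shell
# Verify the cluster meets the prerequisites above.
kubectl version          # the server version should report v1.20 or later
kubectl get csidrivers   # the relevant CSI driver should appear in this list
```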

Clones are provisioned like any other PVC, with the exception of adding a `dataSource` field that references an existing PVC in the same namespace.

Here is an example YAML file for cloning the Elasticsearch PVC:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/component: master
    app.kubernetes.io/instance: integration
    app.kubernetes.io/name: elasticsearch
  name: data-integration-elasticsearch-master-0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 64Gi
  dataSource:
    name: elasticsearch-master-elasticsearch-master-0
    kind: PersistentVolumeClaim
```

Before applying this manifest, make sure to scale the Elasticsearch replicas down to 0, and make sure that the `dataSource.name` matches the PVC that you would like to clone.

Reference: [CSI Volume Cloning](https://kubernetes.io/docs/concepts/storage/volume-pvc-datasource/)
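Assuming the release is named `integration` (matching the example manifest above) and Elasticsearch runs as a StatefulSet named `integration-elasticsearch-master`, the cloning workflow might be sketched as follows. All resource names here are assumptions; adjust them to your environment:

```shell
# Names below are assumptions based on the example manifest; adjust to your release.
kubectl scale statefulset integration-elasticsearch-master --replicas=0

# Apply the clone PVC manifest shown above (saved locally).
kubectl apply -f elasticsearch-pvc-clone.yaml

# Confirm the cloned PVC is Bound before scaling Elasticsearch back up.
kubectl get pvc data-integration-elasticsearch-master-0
kubectl scale statefulset integration-elasticsearch-master --replicas=1
```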

##### Second Option: Manual Approach

With this approach, the following steps must be completed **before** the upgrade:

1. Take note of the PV name and ID for both Elasticsearch master PVs.
2. Change the reclaim policy of the Elasticsearch PVs to `Retain`.
You can run the following command to do so:
```shell
kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```
3. Within both Elasticsearch master PVs, edit the `claimRef` to include the name of the new PVCs that will appear after the upgrade. For example:
```yaml
claimRef:
  apiVersion: v1
  kind: PersistentVolumeClaim
  name: data-<helm release name>-elasticsearch-master-0
  namespace: <namespace>
```
4. With the above steps completed, you can upgrade as normal. The newly generated PVCs should bind to the existing PVs.
5. After a successful upgrade, you can delete the old PVCs that are in a `Lost` state.
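As a sketch, the manual steps above might be carried out with commands like the following. Every name in angle brackets is a placeholder, and patching `claimRef` via `kubectl patch` is one option among others (`kubectl edit pv` works equally well):

```shell
# 1. Note the PV names backing the Elasticsearch master PVCs.
kubectl get pv

# 2. Set the reclaim policy to Retain so the PVs survive PVC deletion.
kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# 3. Point each PV's claimRef at the PVC name the upgrade will create.
kubectl patch pv <your-pv-name> --type merge \
  -p '{"spec":{"claimRef":{"apiVersion":"v1","kind":"PersistentVolumeClaim","name":"data-<helm release name>-elasticsearch-master-0","namespace":"<namespace>"}}}'

# 4. Run the Helm upgrade as usual; the new PVCs should bind to these PVs.

# 5. Once the old PVCs show a Lost status, delete them.
kubectl delete pvc <old-pvc-name>
```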

### v8.2.9

#### Optimize