From fad7dc2a9178eb5725c25e6458a26d4f1f5ff597 Mon Sep 17 00:00:00 2001
From: kkothapelly
Date: Mon, 12 Feb 2024 19:37:21 +0530
Subject: [PATCH 1/2] Publishing keydb/maria/rabbitmq backup

Signed-off-by: kkothapelly
---
 src/solution-workbooks/keydb-backup.md    | 143 +++++++++++
 src/solution-workbooks/mariadb-backup.md  | 277 ++++++++++++++++++++++
 src/solution-workbooks/rabbitmq-backup.md | 177 ++++++++++++++
 3 files changed, 597 insertions(+)
 create mode 100644 src/solution-workbooks/keydb-backup.md
 create mode 100644 src/solution-workbooks/mariadb-backup.md
 create mode 100644 src/solution-workbooks/rabbitmq-backup.md

diff --git a/src/solution-workbooks/keydb-backup.md b/src/solution-workbooks/keydb-backup.md
new file mode 100644
index 00000000..0983e6cf
--- /dev/null
+++ b/src/solution-workbooks/keydb-backup.md
@@ -0,0 +1,143 @@

# Backing Up and Restoring the KeyDB deployments on Tanzu Kubernetes Grid

KeyDB is a open source in-memory data store with a number of advanced features for high availability and data optimization. It is a high-performance fork of Redis with a focus on multi-threading, memory efficiency, and high throughput. KeyDB maintains full compatibility with the Redis protocol, modules, and scripts. All 16 default logical databases on each KeyDB instance can be used, and the standard KeyDB/Redis protocol is supported without any other limitations.

For this demonstration, we leveraged on Tanzu Kubernetes Grid 2.3.0 (Kubernetes 1.26.x) to create a well-configured and highly available infrastructure for our KeyDB deployment. The Tanzu Infrastructure played an important role in optimizing the deployment and management of KeyDB, adding further value to the backup and restore capabilities.

Once you have your KeyDB cluster deployed on Kubernetes, it is essential to put a data backup strategy in place to protect the data within KeyDB. This backup strategy is needed for many operational scenarios, including disaster recovery planning, off-site data analysis, application load testing, and so on.

This document explains the process to back up and restore a KeyDB service on Tanzu Kubernetes Grid clusters using Velero, an open-source Kubernetes backup/restore tool.

## Prerequisites
- Configure two separate TKG workload clusters:
  - A source cluster
  - A destination cluster
- Install the `kubectl` and `Helm v3` CLIs on the client machine.
- Install the Velero CLI on the client machine.
- Configure an S3-compatible storage for storing the backups.
- Install Velero on the source and destination clusters.
- For more information about Velero installation and best practices, see [Installing Velero in Tanzu Kubernetes Cluster](./velero-with-restic.md).

## Deploy KeyDB Using Helm

For this demonstration, we use Helm to deploy KeyDB on the source cluster, and then upload data to it.

1. Create a new namespace `keydb` on the source cluster for deploying the KeyDB service.

1. Deploy KeyDB using Helm by running the following commands:

    ```bash
    helm repo add enapter https://enapter.github.io/charts/
    helm install keydb enapter/keydb -n keydb
    ```

1. Deploy a Redis client pod that you can use to connect to the KeyDB database:

    ```bash
    kubectl run --namespace keydb keydb-cluster-client --rm --tty -i --restart='Never' --image docker.io/bitnami/redis-cluster:7.2.3-debian-11-r1 -- bash
    ```
1. Connect to KeyDB using the Redis CLI:

    ```bash
    redis-cli -c -h keydb
    ```
1. 
Upload data for testing purposes:

    ```bash
    set foo 100
    set bar 200
    set foo1 300
    set bar1 400
    ```
1. Validate that the data has been uploaded:

    ```bash
    get foo
    get bar
    get foo1
    get bar1
    ```

## Back Up the KeyDB Deployment on the Source Cluster

In this section, we'll use Velero to back up the KeyDB deployment including its namespace.

1. Before taking the backup, run the `BGSAVE` command. This command creates a snapshot of the in-memory dataset and writes it to disk.
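
   The interactive commands for this step are shown after the note below. If you prefer to script the save instead, the following minimal sketch triggers `BGSAVE` and polls `LASTSAVE` until the new snapshot is on disk. It assumes the same `keydb` service name and Bitnami client image used elsewhere in this guide; the helper pod name `keydb-save-check` is arbitrary.

    ```bash
    # Hypothetical helper pod: trigger BGSAVE and wait for the RDB snapshot to finish.
    # LASTSAVE returns the Unix timestamp of the last successfully completed save,
    # so a change in its value means the save triggered below has completed.
    kubectl run --namespace keydb keydb-save-check --rm --tty -i --restart='Never' \
      --image docker.io/bitnami/redis-cluster:7.2.3-debian-11-r1 -- bash -c '
        last=$(redis-cli -h keydb LASTSAVE)
        redis-cli -h keydb BGSAVE
        until [ "$(redis-cli -h keydb LASTSAVE)" != "$last" ]; do sleep 1; done
        echo "Snapshot written; it is now safe to start the Velero backup."'
    ```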

   > **Note** If you have specific requirements for data consistency or if you want to create a point-in-time snapshot, you can consider using the `BGSAVE` command in KeyDB to trigger a background save operation before initiating the Velero backup. However, in many cases, the combination of Velero and Restic is sufficient for routine backups without the need to stop the KeyDB database.

    ```bash
    kubectl run --namespace keydb keydb-cluster-client --rm --tty -i --restart='Never' --image docker.io/bitnami/redis-cluster:7.2.3-debian-11-r1 -- bash

    redis-cli -c -h keydb

    BGSAVE

    exit
    ```
1. Backup the KeyDB database using velero:

    ```bash
    # velero backup create keydb-backup-01a --include-namespaces keydb
    Backup request "keydb-backup-01a" submitted successfully.
    Run `velero backup describe keydb-backup-01a` or `velero backup logs keydb-backup-01a` for more details.
    ```
1. Validate the backup status by running the following command:

    ```bash
    # velero backup describe keydb-backup-01a --details
    ```

## Restore the KeyDB Deployment on the Destination Cluster

We'll now restore the KeyDB deployment on the destination cluster.

1. To restore the backup, run the following command:

    ```bash
    # velero restore create --from-backup keydb-backup-01a
    Restore request "keydb-backup-01a-20231221082628" submitted successfully.
    Run `velero restore describe keydb-backup-01a-20231221082628` or `velero restore logs keydb-backup-01a-20231221082628` for more details.
    ```
1. Ensure that the PVCs are recovered, and the status of the active PVCs is Bound:

    ```bash
    # kubectl get pvc -n keydb
    NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    keydb-data-keydb-0   Bound    pvc-ccffc288-9e83-4214-8c17-ee291db5d819   1Gi        RWO            default        89s
    keydb-data-keydb-1   Bound    pvc-d3618a90-d650-4346-ae40-8d54b800e03a   1Gi        RWO            default        89s
    keydb-data-keydb-2   Bound    pvc-da9107af-f184-4f4c-9e85-744b9634b9ef   1Gi        RWO            default        89s
    ```
1. Ensure that the pods are up and running:

    ```bash
    # kubectl get pods -n keydb
    NAME      READY   STATUS    RESTARTS   AGE
    keydb-0   1/1     Running   0          79s
    keydb-1   1/1     Running   0          79s
    keydb-2   1/1     Running   0          79s
    ```
1. Connect to the database and ensure that the data is intact:

    ```bash
    # kubectl run --namespace keydb keydb-cluster-client --rm --tty -i --restart='Never' --image docker.io/bitnami/redis-cluster:7.2.3-debian-11-r1 -- bash
    If you don't see a command prompt, try pressing enter.

    I have no name!@keydb-cluster-client:/$ redis-cli -c -h keydb

    keydb:6379> get foo
    "100"
    keydb:6379> get bar
    "200"
    keydb:6379> get foo1
    "300"
    keydb:6379> get bar1
    "400"
    ```

## Conclusion

Regular backups of your KeyDB deployments are crucial for ensuring data safety and minimizing downtime. By using the procedures explained in this document, you can establish a reliable backup routine and test restoration practices to guarantee a swift and successful recovery when needed.
\ No newline at end of file
diff --git a/src/solution-workbooks/mariadb-backup.md b/src/solution-workbooks/mariadb-backup.md
new file mode 100644
index 00000000..fac24d10
--- /dev/null
+++ b/src/solution-workbooks/mariadb-backup.md
@@ -0,0 +1,277 @@

# Backing Up and Restoring the MariaDB Deployments on Tanzu Kubernetes Grid

MariaDB Galera Cluster makes it easy to create a high-availability database cluster with synchronous replication while retaining all the familiar MariaDB clients and tools.
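
Once the chart is deployed (the deployment steps follow below), you can confirm that synchronous replication is actually in place by checking the Galera status variables: `wsrep_cluster_size` should match the number of replicas, and `wsrep_local_state_comment` should report `Synced` on a healthy node. A minimal sketch of such a check, assuming the release name `galera` and namespace `maria-db1` used later in this guide:

```bash
# Quick Galera health check against the first replica. The root password is read
# from the chart-generated secret, the same way the client commands in this guide do.
ROOT_PW=$(kubectl get secret --namespace maria-db1 galera-mariadb-galera \
  -o jsonpath="{.data.mariadb-root-password}" | base64 -d)
kubectl exec -n maria-db1 galera-mariadb-galera-0 -- \
  mysql -uroot -p"$ROOT_PW" -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'; SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';"
```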

For this demonstration, we leveraged Tanzu Kubernetes Grid 2.3.0 (Kubernetes 1.26.x) to create a well-configured and highly available infrastructure for our MariaDB deployment. The Tanzu infrastructure played an important role in optimizing the deployment and management of MariaDB, adding further value to the backup and restore capabilities.

You can deploy a scalable MariaDB cluster on a Tanzu Kubernetes Grid cluster using a [Helm chart](https://github.com/bitnami/charts/tree/main/bitnami/mariadb-galera). Once you have a MariaDB Galera Cluster deployed, you must put a data backup/restore strategy in place, along with ongoing maintenance and disaster recovery planning. The data backup/restore strategy is required for many operational scenarios, such as disaster recovery planning, off-site data analysis, and application load testing.

This document explains the process to back up and restore a MariaDB deployment on Tanzu Kubernetes Grid clusters using Velero, an open-source Kubernetes backup/restore tool.

## Prerequisites
- Configure two separate TKG workload clusters:
  - A source cluster
  - A destination cluster
- Install the `kubectl` and `Helm v3` CLIs on the client machine.
- Install the Velero CLI on the client machine.
- Configure an S3-compatible storage for storing the backups.
- Install Velero on the source and destination clusters.
- For more information about Velero installation and best practices, see [Installing Velero in Tanzu Kubernetes Cluster](./velero-with-restic.md).

## Deploy MariaDB Using Helm

For this demonstration purpose, we use Helm to deploy MariaDB using Helm on source cluster, and upload data to it.

1. Create a new namespace `maria-db1` on source cluster for deploying the MariaDB Galera cluster.
1. Deploy MariaDB Galera using Helm by running the following command:

    ```bash
    helm install galera oci://registry-1.docker.io/bitnamicharts/mariadb-galera \
      --namespace maria-db1 \
      --set rootUser.password=VMware1! \
      --set galera.mariabackup.password=VMware1!
    ```
1. Deploy a Galera client pod to connect to the database.
Then, create a new database and upload data:

    ```bash
    kubectl run galera-mariadb-galera-client --rm --tty -i --restart='Never' --namespace maria-db1 --image docker.io/bitnami/mariadb-galera:11.1.3-debian-11-r0 --command \
      -- mysql -h galera-mariadb-galera -P 3306 -uroot -p$(kubectl get secret --namespace maria-db1 galera-mariadb-galera -o jsonpath="{.data.mariadb-root-password}" | base64 -d) my_database

    CREATE DATABASE mydb;
    USE mydb;
    CREATE TABLE accounts (name VARCHAR(255) NOT NULL, total INT NOT NULL);
    INSERT INTO accounts VALUES ('user1', '1'), ('user2', '2'), ('user3', '3'), ('user4', '4'), ('user5', '5'), ('user6', '6'), ('user7', '7'), ('user8', '8'), ('user9', '9');
    exit
    ```
1. Validate the uploaded data by running the following command:

    ```bash
    > SELECT * FROM mydb.accounts;
    +-------+-------+
    | name  | total |
    +-------+-------+
    | user1 |     1 |
    | user2 |     2 |
    | user3 |     3 |
    | user4 |     4 |
    | user5 |     5 |
    | user6 |     6 |
    | user7 |     7 |
    | user8 |     8 |
    | user9 |     9 |
    +-------+-------+
    9 rows in set (0.001 sec)
    ```

## Back Up the MariaDB Deployment on the Source Cluster

In this section, we'll use Velero to back up the MariaDB deployment including its namespace. This approach requires scaling the cluster down to a single node to perform the backup.

1. Scale down the cluster to a single node:

    ```bash
    # kubectl scale statefulset --replicas=1 galera-mariadb-galera -n maria-db1
    statefulset.apps/galera-mariadb-galera scaled

    ## Obtain the name of the running pod. Make a note of the node number which is suffixed to the name. For example, if the running pod is galera-mariadb-galera-0, the node number is 0.

    # kubectl get pods -n maria-db1
    NAME                      READY   STATUS    RESTARTS   AGE
    galera-mariadb-galera-0   1/1     Running   0          9m29s
    ```

1. Before backing up the data, acquire a global read lock on all tables in the database. This lock prevents any further write (insert, update, delete) operations on the tables. However, it allows read operations to continue.

    ```bash
    kubectl run galera-mariadb-galera-client --rm --tty -i --restart='Never' --namespace maria-db1 --image docker.io/bitnami/mariadb-galera:11.1.3-debian-11-r0 --command \
      -- mysql -h galera-mariadb-galera -P 3306 -uroot -p$(kubectl get secret --namespace maria-db1 galera-mariadb-galera -o jsonpath="{.data.mariadb-root-password}" | base64 -d) my_database

    ## Command to acquire the global read lock
    USE mydb;
    FLUSH TABLES WITH READ LOCK;
    ```

1. Now, create a backup on the source cluster using Velero:

    ```bash
    # velero backup create galera-backup-05a --include-namespaces maria-db1
    Backup request "galera-backup-05a" submitted successfully.
    Run `velero backup describe galera-backup-05a` or `velero backup logs galera-backup-05a` for more details.
    ```

1. 
To ensure that the backup of all data including the active PVC is successful, run the following command: + + ```bash + # velero backup describe galera-backup-05a + Name: galera-backup-05a + Namespace: velero + Labels: velero.io/storage-location=default + Annotations: velero.io/resource-timeout=10m0s + velero.io/source-cluster-k8s-gitversion=v1.21.2+vmware.1-fips.1 + velero.io/source-cluster-k8s-major-version=1 + velero.io/source-cluster-k8s-minor-version=21 + + Phase: Completed + + + Namespaces: + Included: maria-db1 + Excluded: + + Resources: + Included: * + Excluded: + Cluster-scoped: auto + + Label selector: + + Or label selector: + + Storage Location: default + + Velero-Native Snapshot PVs: auto + Snapshot Move Data: false + Data Mover: velero + + TTL: 720h0m0s + + CSISnapshotTimeout: 10m0s + ItemOperationTimeout: 4h0m0s + + Hooks: + + Backup Format Version: 1.1.0 + + Started: 2023-12-20 07:20:41 +0000 UTC + Completed: 2023-12-20 07:21:46 +0000 UTC + + Expiration: 2024-01-19 07:20:41 +0000 UTC + + Total items to be backed up: 92 + Items backed up: 92 + + Velero-Native Snapshots: + + kopia Backups (specify --details for more information): + Completed: 2 + ``` + +1. After backing up the data, you might release the global read lock on all tables in the database. You must also scale the cluster back to its initial state: + + ```bash + kubectl run galera-mariadb-galera-client --rm --tty -i --restart='Never' --namespace maria-db1 --image docker.io/bitnami/mariadb-galera:11.1.3-debian-11-r0 --command \ + -- mysql -h galera-mariadb-galera -P 3306 -uroot -p$(kubectl get secret --namespace maria-db1 galera-mariadb-galera -o jsonpath="{.data.mariadb-root-password}" | base64 -d) my_database + + ##Command to release global read lock + USE mydb; + UNLOCK TABLES; + + ##Scale up the cluster to only a initial state: + kubectl scale statefulset --replicas=3 galera-mariadb-galera -n maria-db1 + ``` + +## Restore the MariaDB Deployment on the Destination Cluster + +Now, we'll restore the MariaDB deployment on the destination cluster. + +1. To restore the backup, run the following command: + + ```bash + # velero restore create --from-backup galera-backup-05a + Restore request "galera-backup-05a-20231220075647" submitted successfully. + Run `velero restore describe galera-backup-05a-20231220075647` or `velero restore logs galera-backup-05a-20231220075647` for more details. + ``` +1. Ensure that the PVCs are recovered, and the status of the active PVC is bound: + + ```bash + # kubectl get pvc -n maria-db1 + NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE + data-galera-mariadb-galera-0 Bound pvc-a1fa9efa-ab11-40c5-b361-70aabbaebfc7 8Gi RWO default 114s + data-galera-mariadb-galera-1 Pending default 114s + data-galera-mariadb-galera-2 Pending default 114s + ``` +1. Ensure that the pod is up and running: + + ```bash + # kubectl get pods -n maria-db1 + NAME READY STATUS RESTARTS AGE + galera-mariadb-galera-0 1/1 Running 0 88s + ``` +1. 
To connect to the database and ensure that the data is intact, run the following commands:

    ```bash
    kubectl run galera-mariadb-galera-client --rm --tty -i --restart='Never' --namespace maria-db1 --image docker.io/bitnami/mariadb-galera:11.1.3-debian-11-r0 --command \
      -- mysql -h galera-mariadb-galera -P 3306 -uroot -p$(kubectl get secret --namespace maria-db1 galera-mariadb-galera -o jsonpath="{.data.mariadb-root-password}" | base64 -d) my_database

    MariaDB [my_database]> SELECT * FROM mydb.accounts;
    +-------+-------+
    | name  | total |
    +-------+-------+
    | user1 |     1 |
    | user2 |     2 |
    | user3 |     3 |
    | user4 |     4 |
    | user5 |     5 |
    | user6 |     6 |
    | user7 |     7 |
    | user8 |     8 |
    | user9 |     9 |
    +-------+-------+
    9 rows in set (0.001 sec)
    ```
1. Delete the unbound PVCs, and scale up the cluster:

    ```bash
    ## Delete the unbound PVCs:
    $ kubectl delete pvc data-galera-mariadb-galera-1 data-galera-mariadb-galera-2 -n maria-db1
    persistentvolumeclaim "data-galera-mariadb-galera-1" deleted
    persistentvolumeclaim "data-galera-mariadb-galera-2" deleted

    ## Scale the MariaDB cluster back to its original size:
    # kubectl scale statefulset --replicas=3 galera-mariadb-galera -n maria-db1
    ```
1. Ensure that the PVCs are recreated, and the required pods are up and running:

    ```bash
    # kubectl get pods -n maria-db1
    NAME                      READY   STATUS    RESTARTS   AGE
    galera-mariadb-galera-0   1/1     Running   0          26m
    galera-mariadb-galera-1   1/1     Running   0          100s
    galera-mariadb-galera-2   1/1     Running   0          51s

    # kubectl get pvc -n maria-db1
    NAME                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    data-galera-mariadb-galera-0   Bound    pvc-23162ccf-c61b-41c7-8f49-c08c342222fc   8Gi        RWO            default        27m
    data-galera-mariadb-galera-1   Bound    pvc-fc18ac4d-f0b0-4da2-9ae2-f3759ed72cc1   8Gi        RWO            default        2m3s
    data-galera-mariadb-galera-2   Bound    pvc-deec1841-9c83-4b9f-89cf-d8a02797d239   8Gi        RWO            default        74s
    ```
1. Connect to MariaDB and verify data integrity:

    ```bash
    kubectl run galera-mariadb-galera-client --rm --tty -i --restart='Never' --namespace maria-db1 --image docker.io/bitnami/mariadb-galera:11.1.3-debian-11-r0 --command \
      -- mysql -h galera-mariadb-galera -P 3306 -uroot -p$(kubectl get secret --namespace maria-db1 galera-mariadb-galera -o jsonpath="{.data.mariadb-root-password}" | base64 -d) my_database

    MariaDB [my_database]> SELECT * FROM mydb.accounts;
    +-------+-------+
    | name  | total |
    +-------+-------+
    | user1 |     1 |
    | user2 |     2 |
    | user3 |     3 |
    | user4 |     4 |
    | user5 |     5 |
    | user6 |     6 |
    | user7 |     7 |
    | user8 |     8 |
    | user9 |     9 |
    +-------+-------+
    9 rows in set (0.001 sec)
    ```

## Conclusion

Regular backups of your MariaDB deployments are crucial for ensuring data safety and minimizing downtime. By using the procedures explained in this document, you can establish a reliable backup routine and test restoration practices to guarantee a swift and successful recovery when needed.
\ No newline at end of file
diff --git a/src/solution-workbooks/rabbitmq-backup.md b/src/solution-workbooks/rabbitmq-backup.md
new file mode 100644
index 00000000..4e8f51f6
--- /dev/null
+++ b/src/solution-workbooks/rabbitmq-backup.md
@@ -0,0 +1,177 @@

# Backing Up and Restoring the RabbitMQ Deployments on Tanzu Kubernetes Grid

[RabbitMQ](https://www.rabbitmq.com/) is a highly scalable and reliable open-source message broker. It supports a number of different messaging protocols, message queuing, and plug-ins for additional customization.

For this demonstration, we leveraged Tanzu Kubernetes Grid 2.3.0 (Kubernetes 1.26.x) to create a well-configured and highly available infrastructure for the RabbitMQ deployment. The Tanzu infrastructure played an important role in optimizing the deployment and management of RabbitMQ, adding further value to the backup and restore capabilities.

You can deploy a scalable RabbitMQ cluster on a Tanzu Kubernetes Grid cluster using a [Helm chart](https://github.com/bitnami/charts/tree/main/bitnami/rabbitmq). Once the RabbitMQ cluster is operational, backing up the data held within it becomes an important and ongoing administrative task. A data backup/restore strategy is required not only for data security and disaster recovery planning, but also for other tasks, such as off-site data analysis or application load testing.

This guide explains the process to back up and restore a RabbitMQ deployment on Tanzu Kubernetes Grid clusters using Velero, an open-source Kubernetes backup/restore tool.

## Prerequisites
- Configure two separate TKG workload clusters:
  - A source cluster
  - A destination cluster
- Install the `kubectl` and `Helm v3` CLIs on the client machine.
- Install the Velero CLI on the client machine.
- Configure an S3-compatible storage for storing the backups.
- Install Velero on the source and destination clusters.
- For more information about Velero installation and best practices, see [Installing Velero in Tanzu Kubernetes Cluster](./velero-with-restic.md).

## Deploy RabbitMQ Using Helm

For this demonstration, we use Helm to deploy RabbitMQ on the source cluster, and then upload some data to it:

1. Deploy RabbitMQ by running the following command:

    ```bash
    # helm install rabbitmq oci://registry-1.docker.io/bitnamicharts/rabbitmq \
      --namespace rabbitmq \
      --set auth.password=VMware1! \
      --set service.type=LoadBalancer \
      --set plugins=rabbitmq_management \
      --set replicaCount=3
    ```
1. Validate that the RabbitMQ deployment is successful:

    ```bash
    # kubectl get all -n rabbitmq
    NAME             READY   STATUS    RESTARTS   AGE
    pod/rabbitmq-0   1/1     Running   0          1d
    pod/rabbitmq-1   1/1     Running   0          1d
    pod/rabbitmq-2   1/1     Running   0          1d

    NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                                          AGE
    service/rabbitmq            LoadBalancer   100.69.160.209   172.16.48.139   5672:32580/TCP,4369:31225/TCP,25672:31930/TCP,15672:31567/TCP   1d
    service/rabbitmq-headless   ClusterIP      None             <none>          4369/TCP,5672/TCP,25672/TCP,15672/TCP                            1d

    NAME                        READY   AGE
    statefulset.apps/rabbitmq   3/3     1d
    ```
1. Download and install the `rabbitmqadmin` CLI on your local machine:

    ```bash
    # export SERVICE_IP=$(kubectl get svc --namespace rabbitmq rabbitmq --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")

    # wget http://$SERVICE_IP:15672/cli/rabbitmqadmin
    --2023-12-21 10:25:14--  http://172.16.48.139:15672/cli/rabbitmqadmin
    Connecting to 172.16.48.139:15672... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 42533 (42K) [application/octet-stream]
    Saving to: ‘rabbitmqadmin.2’

    rabbitmqadmin.2   100%[======================>]  41.54K  --.-KB/s    in 0.001s

    2023-12-21 10:25:14 (69.0 MB/s) - ‘rabbitmqadmin.2’ saved [42533/42533]

    # chmod +x rabbitmqadmin
    # cp ./rabbitmqadmin /usr/local/bin
    ```
1. 
Upload data by creating queues and messages:

    ```bash
    # rabbitmqadmin -H $SERVICE_IP -u user -p $Password declare queue name=my-queue durable=true
    queue declared

    # rabbitmqadmin -H $SERVICE_IP -u user -p $Password publish routing_key=my-queue payload="message 1" properties="{\"delivery_mode\":2}"
    Message published

    # rabbitmqadmin -H $SERVICE_IP -u user -p $Password publish routing_key=my-queue payload="message 2" properties="{\"delivery_mode\":2}"
    Message published

    # rabbitmqadmin -H $SERVICE_IP -u user -p $Password publish routing_key=my-queue payload="message 3" properties="{\"delivery_mode\":2}"
    Message published
    ```
1. Validate the queue and messages by accessing the management console over the load balancer IP and port.

## Back Up the RabbitMQ Deployment on the Source Cluster

In this section, we'll use Velero to back up the RabbitMQ deployment including its namespace.

> **Note** Before backing up the data, pause the producers and consumers of messages during the backup to ensure a consistent state. This can be achieved by stopping or pausing applications that produce or consume messages.

1. Create a backup of the `rabbitmq` namespace in the source cluster:

    ```bash
    # velero backup create rabbitmq-backup-01a --include-namespaces rabbitmq
    Backup request "rabbitmq-backup-01a" submitted successfully.
    Run `velero backup describe rabbitmq-backup-01a` or `velero backup logs rabbitmq-backup-01a` for more details.
    ```
1. To view the contents of the backup, and confirm that it contains all the required resources, run the following command:

    ```bash
    # velero backup describe rabbitmq-backup-01a --details

    v1/Namespace:
      - rabbitmq
    v1/PersistentVolume:
      - pvc-329333ce-caed-4a9a-9163-4d13f3606ea2
      - pvc-a3bd6434-a584-41b8-974f-17c5631252bf
      - pvc-dd062669-b45b-41b6-8b0d-a42273744bad
    v1/PersistentVolumeClaim:
      - rabbitmq/data-rabbitmq-0
      - rabbitmq/data-rabbitmq-1
      - rabbitmq/data-rabbitmq-2
    v1/Pod:
      - rabbitmq/rabbitmq-0
      - rabbitmq/rabbitmq-1
      - rabbitmq/rabbitmq-2
    v1/Secret:
      - rabbitmq/rabbitmq
      - rabbitmq/rabbitmq-config
      - rabbitmq/sh.helm.release.v1.rabbitmq.v1
    v1/Service:
      - rabbitmq/rabbitmq
      - rabbitmq/rabbitmq-headless
    v1/ServiceAccount:
      - rabbitmq/default
      - rabbitmq/rabbitmq

    Velero-Native Snapshots:

    kopia Backups:
      Completed:
        rabbitmq/rabbitmq-0: data
        rabbitmq/rabbitmq-1: data
        rabbitmq/rabbitmq-2: data
    ```

## Restore the RabbitMQ Deployment on the Destination Cluster

We'll now restore the RabbitMQ backup on the destination cluster.

1. To restore the RabbitMQ deployment, run the following command:

    ```bash
    # velero restore create --from-backup rabbitmq-backup-01a
    Restore request "rabbitmq-backup-01a-20231221110344" submitted successfully.
    Run `velero restore describe rabbitmq-backup-01a-20231221110344` or `velero restore logs rabbitmq-backup-01a-20231221110344` for more details.
    ```
1. 
Validate that the Rabbitmq deployment restore is successful: + + ```bash + # kubectl get all -n rabbitmq + NAME READY STATUS RESTARTS AGE + pod/rabbitmq-0 1/1 Running 0 79s + pod/rabbitmq-1 1/1 Running 0 79s + pod/rabbitmq-2 1/1 Running 0 79s + + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + service/rabbitmq LoadBalancer 100.70.78.228 172.16.48.140 5672:31947/TCP,4369:32370/TCP,25672:32684/TCP,15672:31691/TCP 78s + service/rabbitmq-headless ClusterIP None 4369/TCP,5672/TCP,25672/TCP,15672/TCP 78s + + NAME READY AGE + statefulset.apps/rabbitmq 3/3 20s + root@photon [ ~/velero ]# kubectl get pvc -n rabbitmq + NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE + data-rabbitmq-0 Bound pvc-4a029420-2ecd-48b6-acaa-9d549dd0b3f6 8Gi RWO default 88s + data-rabbitmq-1 Bound pvc-5aa78194-1422-43fe-ad72-853c797f44ff 8Gi RWO default 88s + data-rabbitmq-2 Bound pvc-ecb59129-675e-4797-b7c7-d73152fcff5e 8Gi RWO default 88s + ``` +1. Confirm the data integrity by validating the queue and messages by accessing the management console over the loadbalancer IP&Port. + +## Conclusion + +Regular backups of your RabbitMQ deployments are crucial for ensuring data safety and minimizing downtime. By using the procedures explained in this document, you can establish a reliable backup routine and test restoration practices to guarantee a swift and successful recovery when needed. \ No newline at end of file From 9c5bb8c76d254c6e52c3fec078a30bdb2874c348 Mon Sep 17 00:00:00 2001 From: kkothapelly Date: Tue, 13 Feb 2024 12:20:12 +0530 Subject: [PATCH 2/2] updated with IX feedback Signed-off-by: kkothapelly --- src/solution-workbooks/keydb-backup.md | 4 ++-- src/solution-workbooks/mariadb-backup.md | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/src/solution-workbooks/keydb-backup.md b/src/solution-workbooks/keydb-backup.md index 0983e6cf..fe5cec09 100644 --- a/src/solution-workbooks/keydb-backup.md +++ b/src/solution-workbooks/keydb-backup.md @@ -1,6 +1,6 @@ # Backing Up and Restoring the KeyDB deployments on Tanzu Kubernetes Grid -KeyDB is a open source in-memory data store with a number of advanced features for high availability and data optimization. Its a high performance fork of Redis with a focus on multi-threading, memory efficiency, and high throughput. KeyDB maintains full compatibility with the Redis protocol, modules, and scripts. All 16 default logical databases on each KeyDB instance can be used and standard KeyDB/Redis protocol is supported without any other limitations. +KeyDB is an open source in-memory data store with a number of advanced features for high availability and data optimization. Its a high performance fork of Redis with a focus on multi-threading, memory efficiency, and high throughput. KeyDB maintains full compatibility with the Redis protocol, modules, and scripts. All 16 default logical databases on each KeyDB instance can be used and standard KeyDB/Redis protocol is supported without any other limitations. For this demonstration, we leveraged on Tanzu Kubernetes Grid 2.3.0 (Kubernetes 1.26.x) to create a well-configured and highly available infrastructure for our KeyDB deployment. The Tanzu Infrastructure played an important role in optimizing the deployment and management of KeyDB, adding further value to the backup and restore capabilities. @@ -77,7 +77,7 @@ In this section, we'll use Velero to back up the KeyDB deployment including name exit ``` -1. Backup the KeyDB database using velero: +1. 
Back up the KeyDB database using velero: ```bash # velero backup create keydb-backup-01a --include-namespaces keydb diff --git a/src/solution-workbooks/mariadb-backup.md b/src/solution-workbooks/mariadb-backup.md index fac24d10..808b30ad 100644 --- a/src/solution-workbooks/mariadb-backup.md +++ b/src/solution-workbooks/mariadb-backup.md @@ -22,7 +22,7 @@ This document explains the process to back up and restore a MariaDB deployment o ## Deploy MariaDB Using Helm -For this demonstration purpose, we use Helm to deploy MariaDB using Helm on source cluster, and upload data to it. +For this demonstration purpose, we'll use Helm to deploy MariaDB using Helm on source cluster, and upload data to it. 1. Create a new namespace `maria-db1` on source cluster for deploying the MariaDB Galera cluster. 1. Deploy MariaDB Galera using Helm by running the following command: