install velero with restic
Signed-off-by: kkothapelly <[email protected]>
kkothapelly committed Jan 31, 2024
1 parent e2fb96e commit 2838685
Showing 2 changed files with 292 additions and 0 deletions.
`src/solution-workbooks/resources/velero-with-restic/minio.yml` (191 additions):
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: minio
  namespace: "minio"
  labels:
    app.kubernetes.io/name: minio
    helm.sh/chart: minio-11.9.2
    app.kubernetes.io/instance: minio
    app.kubernetes.io/managed-by: Helm
automountServiceAccountToken: true
secrets:
  - name: minio
---
# Source: minio/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: minio
  namespace: "minio"
  labels:
    app.kubernetes.io/name: minio
    helm.sh/chart: minio-11.9.2
    app.kubernetes.io/instance: minio
    app.kubernetes.io/managed-by: Helm
type: Opaque
data:
  root-user: "cm9vdA=="
  root-password: "Vk13YXJlMSE="
  key.json: ""
---
# Source: minio/templates/pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: minio
  namespace: "minio"
  labels:
    app.kubernetes.io/name: minio
    helm.sh/chart: minio-11.9.2
    app.kubernetes.io/instance: minio
    app.kubernetes.io/managed-by: Helm
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "100Gi"
---
# Source: minio/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: "minio"
  labels:
    app.kubernetes.io/name: minio
    helm.sh/chart: minio-11.9.2
    app.kubernetes.io/instance: minio
    app.kubernetes.io/managed-by: Helm
spec:
  type: LoadBalancer
  ports:
    - name: minio-api
      port: 9000
      targetPort: minio-api
      nodePort: null
    - name: minio-console
      port: 9001
      targetPort: minio-console
      nodePort: null
  selector:
    app.kubernetes.io/name: minio
    app.kubernetes.io/instance: minio
---
# Source: minio/templates/standalone/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
  namespace: "minio"
  labels:
    app.kubernetes.io/name: minio
    helm.sh/chart: minio-11.9.2
    app.kubernetes.io/instance: minio
    app.kubernetes.io/managed-by: Helm
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: minio
      app.kubernetes.io/instance: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: minio
        helm.sh/chart: minio-11.9.2
        app.kubernetes.io/instance: minio
        app.kubernetes.io/managed-by: Helm
      annotations:
        checksum/credentials-secret: 565feb7739f9759ef61f641a4105eca83e15eb794ecec7cb979f6461a657c23d
    spec:
      serviceAccountName: minio
      affinity:
        podAffinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: minio
                    app.kubernetes.io/instance: minio
                namespaces:
                  - "minio"
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
      securityContext:
        fsGroup: 1001
      containers:
        - name: minio
          image: docker.io/bitnami/minio:2022.8.22-debian-11-r0
          imagePullPolicy: "IfNotPresent"
          securityContext:
            runAsNonRoot: true
            runAsUser: 1001
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: MINIO_SCHEME
              value: "http"
            - name: MINIO_FORCE_NEW_KEYS
              value: "no"
            - name: MINIO_ROOT_USER
              valueFrom:
                secretKeyRef:
                  name: minio
                  key: root-user
            - name: MINIO_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: minio
                  key: root-password
            - name: MINIO_BROWSER
              value: "on"
            - name: MINIO_PROMETHEUS_AUTH_TYPE
              value: "public"
            - name: MINIO_CONSOLE_PORT_NUMBER
              value: "9001"
            - name: TEST
              value: "true"
          envFrom:
          ports:
            - name: minio-api
              containerPort: 9000
              protocol: TCP
            - name: minio-console
              containerPort: 9001
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /minio/health/live
              port: minio-api
              scheme: "HTTP"
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            tcpSocket:
              port: minio-api
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 5
          resources:
            limits: {}
            requests: {}
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: minio
```
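The `root-user` and `root-password` values in the Secret above are base64-encoded; decoding them shows the credentials that Velero's `--secret-file` will need later:

```shell
# Decode the MinIO root credentials stored in the Secret (base64-encoded).
echo "cm9vdA==" | base64 -d && echo        # prints: root
echo "Vk13YXJlMSE=" | base64 -d && echo    # prints: VMware1!
```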
`src/solution-workbooks/velero-with-restic.md` (101 additions):
# Install Velero with Restic in a Tanzu Kubernetes Cluster

[Velero](https://velero.io/docs) is an open-source, community-standard tool for backing up and restoring TKG standalone management cluster infrastructure and workloads.

A Tanzu Kubernetes Grid subscription includes support for VMware’s tested, compatible distribution of Velero available from the Tanzu Kubernetes Grid downloads page.

To back up and restore TKG clusters, you need:

- The Velero CLI running on your local client machine.
- A storage provider with locations to save the backups to.
- A Velero server running on the clusters that you are backing up.

## Install the Velero CLI

To install the Velero CLI on your client machine, do the following:

- Go to the [Tanzu Kubernetes Grid downloads page](https://customerconnect.vmware.com/en/web/vmware/downloads/info/slug/infrastructure_operations_management/vmware_tanzu_kubernetes_grid/2_x) and log in with your VMware Customer Connect credentials.
- Under **Product Downloads**, click **Go to Downloads**.
- Select the respective Tanzu Kubernetes Grid version, scroll down to the Velero entries, and download the Velero CLI `.gz` file for your client machine's OS.
- Extract the binary.
```bash
gunzip velero-linux-v1.11.1+vmware.1.gz
```
- Rename the CLI binary for your platform to `velero`, make sure that it is executable, and add it to your `PATH`. <br>
For Linux:
```bash
chmod +x velero-linux-v1.11.1+vmware.1
mv velero-linux-v1.11.1+vmware.1 /usr/local/bin/velero
```

## Set Up a Storage Provider
Velero supports a variety of [storage providers](https://velero.io/docs/main/supported-providers), which can be either:

- An online cloud storage provider.
- An on-premises object storage service such as MinIO, for proxied or air-gapped environments.

It is recommended to dedicate a unique storage bucket to each cluster.

For this demonstration, we deploy MinIO, which Velero accesses through its AWS (S3-compatible) plugin.
1. Deploy MinIO by applying the configuration file `minio.yml`.
```bash
kubectl apply -f minio.yml
```
1. Connect to the MinIO console and create an S3 bucket to store the backups.
1. Save the credentials to a local file (`minio.creds`) to pass to the `--secret-file` option of `velero install`, for example:
```bash
[default]
aws_access_key_id = root
aws_secret_access_key = VMware1!
```
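If you prefer to script this step, the credentials file can be written directly from the shell (a sketch; the values match the MinIO root credentials defined in `minio.yml`):

```shell
# Write the Velero --secret-file with the MinIO root credentials.
cat > minio.creds <<'EOF'
[default]
aws_access_key_id = root
aws_secret_access_key = VMware1!
EOF
```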

## Deploy Velero Server to Workload Clusters

To deploy the Velero Server to a workload cluster, you run the `velero install` command. This command creates a namespace called `velero` on the cluster, and places a deployment named `velero` in it.

To install Velero, run `velero install` with the following options:
- `--provider $PROVIDER`: the provider name, for example `aws`
- `--plugins $IMAGE`: the provider plugin image, for example `projects.registry.vmware.com/tkg/velero/velero-plugin-for-aws:v1.7.1_vmware.1`
- `--use-volume-snapshots=false`: disable volume snapshots so that Velero uses file-system (Restic) backups
- `--use-node-agent`: deploy the node agent DaemonSet that performs the file-system backups
- `--default-volumes-to-fs-backup`: back up all pod volumes with file-system backup by default
- `--secret-file $FILE`: the file containing the S3 credentials
- `--bucket $BUCKET`: the name of your S3 bucket
- `--backup-location-config region=$REGION,...`: the region the bucket is in and, for MinIO, the `s3Url` and `s3ForcePathStyle` settings


Run the following command to install Velero with Restic:
```bash
velero install --plugins projects.registry.vmware.com/tkg/velero/velero-plugin-for-aws:v1.7.1_vmware.1 \
    --provider aws \
    --bucket maria-db-02 \
    --use-volume-snapshots=false \
    --use-node-agent \
    --secret-file /root/minio/minio.creds \
    --default-volumes-to-fs-backup \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://172.30.40.47:9000
```
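After the install completes, you can confirm that the server components are healthy. These commands assume `kubectl` access to the workload cluster; with `--use-node-agent`, Velero creates a `node-agent` DaemonSet alongside the `velero` Deployment:

```shell
# Requires a kubeconfig pointing at the workload cluster.
kubectl -n velero get deployments,daemonsets,pods
velero backup-location get    # the location should report PHASE "Available"
```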

For air-gapped environments, ensure that the required images are pulled from the locations below and pushed to your local repository:
```bash
docker pull projects.registry.vmware.com/tkg/velero/velero-plugin-for-aws:v1.7.1_vmware.1
docker pull projects.registry.vmware.com/tkg/velero/velero:v1.11.1_vmware.1
```

Run the following command to install Velero with Restic in an air-gapped environment:
```bash
velero install --image <local-image-repo-fqdn>/vmware-tanzu/velero:v1.11.1_vmware.1 \
    --plugins <local-image-repo-fqdn>/vmware-tanzu/velero-plugin-for-aws:v1.7.1_vmware.1 \
    --provider aws \
    --bucket maria-db-02 \
    --use-volume-snapshots=false \
    --use-node-agent \
    --secret-file /root/minio/minio.creds \
    --default-volumes-to-fs-backup \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://172.30.40.47:9000
```


## Conclusion

This document covered the steps to install Velero with Restic on Tanzu Kubernetes Grid clusters. Once Velero is installed, you can use it to back up and restore your Kubernetes workloads.
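As a quick usage sketch, a first backup and restore might look like the following (the namespace `demo` is a placeholder; both commands assume the cluster and backup location configured above):

```shell
# Back up one namespace, check progress, then restore from the backup.
velero backup create demo-backup --include-namespaces demo
velero backup get                # wait for STATUS "Completed"
velero restore create --from-backup demo-backup
```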
