As in Docker, when a Pod is deleted, all the data it generated is deleted with it. To overcome this, Kubernetes has the concept of Volumes. We can attach a Volume to a Pod so that the data it generates persists even after the Pod is deleted.
Volumes in Kubernetes are resources provided by the Kube API Server, and we can attach a Volume by declaring it in a Pod definition manifest.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
    - image: nginx
      name: nginx
      volumeMounts:
        - mountPath: /opt # The volume is mounted at /opt inside the container
          name: nginx-volume # Must match the volume name defined below
  volumes: # The Volume is defined in the spec field
    - name: nginx-volume
      hostPath:
        path: /data # The path on the host node
        type: Directory # This type of volume is fine for a single-node cluster,
                        # but not recommended for a multi-node cluster.
                        # For such cases, it is best to use other storage providers.
For third-party storage solutions, instead of the hostPath block we can use a block such as:
      awsElasticBlockStore:
        volumeID: "<volume id>"
        fsType: ext4
A PersistentVolume is a chunk of storage provisioned inside the cluster, either by an administrator or dynamically through a StorageClass. It is a cluster-scoped resource.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity: # A PV has a specific storage capacity
    storage: 5Gi
  volumeMode: Filesystem # PersistentVolumes support Filesystem and Block
  accessModes: # Four modes: ReadWriteOnce, ReadWriteMany, ReadOnlyMany,
    - ReadWriteOnce # and ReadWriteOncePod
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
Reclaim Policies:
- Retain -- manual reclamation
- Recycle -- basic scrub (rm -rf /thevolume/*)
- Delete -- the associated storage asset, such as an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume, is deleted
Provisioning Volumes:
There are two ways PVs may be provisioned: statically or dynamically.
Static Provisioning:
A cluster administrator creates a number of PVs. They carry the details of the real storage, which is available for use by cluster users.
Dynamic Provisioning:
When none of the static PVs the administrator created match a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC. This provisioning is based on StorageClasses: the PVC must request a Storage Class and the administrator must have created and configured that class for dynamic provisioning to occur.
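As a sketch of dynamic provisioning: a StorageClass named slow (matching the storageClassName used earlier) paired with a PVC that requests it. The provisioner and its parameters here are assumptions; they depend on which storage driver your cluster runs.

```yaml
# Hypothetical StorageClass; the provisioner depends on your cluster's storage driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs # in-tree AWS EBS provisioner (assumed for illustration)
parameters:
  type: gp2
reclaimPolicy: Delete
---
# A PVC requesting the class above; if no static PV matches,
# a new PV is provisioned on demand and bound to this claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: slow
  resources:
    requests:
      storage: 5Gi
```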
To declare a binding to the PersistentVolume created earlier, we need to create a claim for that PV. This is achieved using a resource known as a PersistentVolumeClaim. If the PersistentVolume exists and has not reserved a PersistentVolumeClaim through its claimRef field, then the PersistentVolume and PersistentVolumeClaim will be bound.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: foo-pv
spec:
  storageClassName: ""
  claimRef:
    name: foo-pvc
    namespace: foo
The claimRef does not by itself guarantee binding privileges. If other PersistentVolumeClaims could bind to the PV you specify, you first need to reserve that storage volume by naming the relevant PersistentVolumeClaim in the claimRef field of the PV, so that no other PVC can bind to it.
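The claim side of this pre-binding can point back at the PV by name via spec.volumeName, so the reservation holds in both directions:

```yaml
# PVC that binds only to the PV named foo-pv
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-pvc
  namespace: foo
spec:
  storageClassName: "" # empty string, matching the PV's storageClassName
  volumeName: foo-pv # refers back to the PV by name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi # must not exceed the PV's capacity
```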
PersistentVolume types can also be implemented as plugins. Check the Kubernetes documentation for the full list of plugins supported by Kubernetes.
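To close the loop, a Pod consumes a bound claim through the persistentVolumeClaim volume source, referring to the PVC by name (foo-pvc from the claimRef example above; the Pod and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod # illustrative name
  namespace: foo # must be in the same namespace as the PVC
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - mountPath: /usr/share/nginx/html # illustrative mount path
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: foo-pvc # the PVC, not the PV, is referenced here
```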