This document describes how to upgrade the provisioner version or update its configuration in a Kubernetes cluster.
First, review the change logs between your current version and the version you are upgrading to. If there are any ACTION REQUIRED items, follow them as part of the upgrade.
Then, you can update the image tag and deploy the provisioner again.
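For example, assuming the provisioner runs as a DaemonSet named `local-volume-provisioner` with a container named `provisioner` in the `kube-system` namespace (adjust these names and the image reference for your deployment), the image could be updated like this:

```sh
# Update the provisioner image in place; the DaemonSet controller rolls the pods.
# DaemonSet name, container name, namespace, and image are examples only.
kubectl --namespace kube-system set image daemonset/local-volume-provisioner \
  provisioner=registry.k8s.io/sig-storage/local-volume-provisioner:<new-version>

# Watch the rollout until all pods run the new image.
kubectl --namespace kube-system rollout status daemonset/local-volume-provisioner
```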
WARNING: Not all configuration fields can be changed once PVs have already been provisioned in production. Please fully understand all fields in the provisioner configuration and the limitations described here. We are working to remove these limitations as much as possible.
`useNodeNameOnly` changes the provisioner name that is set in the PV annotation `pv.kubernetes.io/provisioned-by`. This key is used to select the PVs the provisioner manages, so if you change it, you must update the annotation on existing PVs to the new provisioner name.
The provisioner name is:

- `local-volume-provisioner-<node-name>` if `useNodeNameOnly` is `true`
- `local-volume-provisioner-<node-name>-<node-uid>` if `useNodeNameOnly` is `false`
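For instance, the annotation on an existing PV could be rewritten with `kubectl annotate`; the PV name and node name below are placeholders, and you would repeat this for every PV the provisioner owns on each node:

```sh
# Example only: point an existing PV at the new provisioner name after
# switching useNodeNameOnly to true. Replace <pv-name> and <node-name>.
kubectl annotate pv <pv-name> --overwrite \
  pv.kubernetes.io/provisioned-by=local-volume-provisioner-<node-name>
```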
In an existing storage class configuration, only a few fields can be changed:

- `mountDir` can be changed if you changed the mount path in the provisioner pod spec.
- `blockCleanerCommand` is safe to change.

Adding a new storage class, however, is always safe.
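As an illustration, here is a sketch of a provisioner ConfigMap with one existing storage class and one newly added class; the class names, directories, cleaner script, and namespace are examples, not required values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config
  namespace: kube-system
data:
  storageClassMap: |
    fast-disks:                      # existing class: only mountDir and
      hostDir: /mnt/fast-disks       # blockCleanerCommand are safe to change
      mountDir: /mnt/fast-disks
      blockCleanerCommand:
        - "/scripts/quick_reset.sh"
    archive-disks:                   # adding a whole new class is safe
      hostDir: /mnt/archive-disks
      mountDir: /mnt/archive-disks
```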
Currently, the provisioner does not reload its configuration automatically. After you update the ConfigMap, you must delete the pods of the provisioner DaemonSet for the change to take effect.
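A typical way to do this, assuming the pods carry an `app=local-volume-provisioner` label (match the selector and namespace to your deployment), is:

```sh
# Delete the provisioner pods so the DaemonSet recreates them with the new ConfigMap.
# The label selector and namespace are examples; use the ones from your deployment.
kubectl --namespace kube-system delete pods -l app=local-volume-provisioner
```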
Note that if you add a new discovery directory in the provisioner configuration, you must update the provisioner pod template spec too. This is not necessary if you deploy the provisioner with our helm chart.
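If you edit the DaemonSet directly, the new discovery directory needs both a hostPath volume and a matching volumeMount in the pod template. A sketch of the relevant fragment, where the directory path and volume name are illustrative:

```yaml
# Fragment of the DaemonSet pod template spec: mount the new discovery
# directory so the provisioner container can see it. Names and paths are examples.
spec:
  containers:
    - name: provisioner
      volumeMounts:
        - name: archive-disks
          mountPath: /mnt/archive-disks
          mountPropagation: HostToContainer
  volumes:
    - name: archive-disks
      hostPath:
        path: /mnt/archive-disks
```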