Secrets Store CSI driver for Kubernetes secrets - Integrates secrets stores with Kubernetes via a Container Storage Interface (CSI) volume.
The Secrets Store CSI driver secrets-store.csi.k8s.io allows Kubernetes to mount multiple secrets, keys, and certs stored in enterprise-grade external secrets stores into pods as a volume. Once the volume is attached, the data in it is mounted into the container's file system.
Join us to help define the direction and implementation of this project!
- Join the #csi-secrets-store channel on Kubernetes Slack.
- Use GitHub Issues to file bugs, request features, or ask questions asynchronously.
- Mounts secrets/keys/certs to pods using a CSI volume
- Supports CSI Inline volume (Kubernetes version v1.15+)
- Supports mounting multiple secrets store objects as a single volume
- Supports pod identity to restrict access with specific identities (Azure provider only)
- Supports multiple secrets stores as providers. Multiple providers can run in the same cluster simultaneously.
- Supports pod portability with the SecretProviderClass CRD
- Supports Windows containers (Kubernetes version v1.18+)
- Supports sync with Kubernetes Secrets (Secrets Store CSI Driver v0.0.10+)
- How It Works
- Demo
- Usage
- Providers
- Azure Key Vault Provider - Supports Linux and Windows
- HashiCorp Vault Provider - Supports Linux
- Adding a New Provider via the Provider Interface
- Testing
- Troubleshooting
- Code of conduct
The diagram below illustrates how Secrets Store CSI Volume works.
Recommended Kubernetes version: v1.16.0+
NOTE: The CSI Inline Volume feature was introduced in Kubernetes v1.15.x. Version 1.15.x requires the CSIInlineVolume feature gate to be enabled in the cluster. Version 1.16+ does not require any feature gate.
For v1.15.x, update CSI Inline Volume feature gate
The CSI Inline Volume feature was introduced in Kubernetes v1.15.x. We need to make the following updates to enable the CSIInlineVolume feature gate:
- Update the API Server manifest to append the following feature gate:
--feature-gates=CSIInlineVolume=true
- Update the Kubelet manifest on each node to append the same feature gate:
--feature-gates=CSIInlineVolume=true
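For clusters created with kubeadm, the API server flag can be appended to the static pod manifest. The sketch below is illustrative only: the file path follows the kubeadm default and the surrounding fields are abbreviated, so adapt it to how your cluster's control plane is managed.

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm default location; may differ)
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # ... existing flags ...
    - --feature-gates=CSIInlineVolume=true   # enables CSI inline volumes on v1.15.x
```

The kubelet on each node takes the same `--feature-gates=CSIInlineVolume=true` flag, typically via its systemd unit or kubelet config, depending on how the node was provisioned.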
Using Helm Chart
Follow the guide to install the driver using Helm.
[ALTERNATIVE DEPLOYMENT OPTION] Using Deployment Yamls
kubectl apply -f deploy/rbac-secretproviderclass.yaml # update the namespace of the secrets-store-csi-driver ServiceAccount
kubectl apply -f deploy/csidriver.yaml
kubectl apply -f deploy/secrets-store.csi.x-k8s.io_secretproviderclasses.yaml
kubectl apply -f deploy/secrets-store-csi-driver.yaml --namespace $NAMESPACE
# [OPTIONAL] For Kubernetes versions < 1.16, running `kubectl apply -f deploy/csidriver.yaml` will fail. To install the driver, run:
kubectl apply -f deploy/csidriver-1.15.yaml
# [OPTIONAL] To deploy the driver on Windows nodes:
kubectl apply -f deploy/secrets-store-csi-driver-windows.yaml --namespace $NAMESPACE
To validate that the installer is running as expected, run the following command:
kubectl get po --namespace $NAMESPACE
You should see the Secrets Store CSI driver pods running on each agent node:
csi-secrets-store-qp9r8 2/2 Running 0 4m
csi-secrets-store-zrjt2 2/2 Running 0 4m
You should see the following CRDs deployed:
kubectl get crd
NAME
secretproviderclasses.secrets-store.csi.x-k8s.io
Select a provider from the following list, then follow the installation steps for the provider:
To use the Secrets Store CSI driver, create a SecretProviderClass custom resource to provide driver configurations and provider-specific parameters to the CSI driver.
A SecretProviderClass custom resource should have the following components:
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: my-provider
spec:
  provider: vault   # accepted provider options: azure or vault
  parameters:       # provider-specific parameters
Here is a sample SecretProviderClass custom resource.
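As an illustration, a Vault-backed SecretProviderClass might look like the following. The parameter names (roleName, vaultAddress, objects) are specific to the HashiCorp Vault provider, and every value here is a placeholder; consult the provider's own documentation for the authoritative schema.

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: vault-foo
spec:
  provider: vault
  parameters:
    roleName: "example-role"            # Vault role to authenticate as (placeholder)
    vaultAddress: "http://vault:8200"   # address of the Vault server (placeholder)
    objects: |
      array:
        - |
          objectPath: "/foo"            # path of the secret in Vault
          objectName: "bar"             # key within the secret
          objectVersion: ""
```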
To ensure your application is using the Secrets Store CSI driver, update your deployment yaml to use the secrets-store.csi.k8s.io driver and reference the SecretProviderClass resource created in the previous step.
volumes:
- name: secrets-store-inline
  csi:
    driver: secrets-store.csi.k8s.io
    readOnly: true
    volumeAttributes:
      secretProviderClass: "my-provider"
Here is a sample deployment yaml using the Secrets Store CSI driver.
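To show where that volumes section fits, here is a minimal pod sketch that mounts the secrets at /mnt/secrets-store. The pod name, container image, and mount path are illustrative; only the csi volume fields are prescribed by the driver.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-secrets-store-inline   # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: secrets-store-inline
      mountPath: "/mnt/secrets-store"   # where the secret files appear in the container
      readOnly: true
  volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "my-provider"
```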
On pod start and restart, the driver will call the provider binary to retrieve the secret content from the external secrets store you have specified in the SecretProviderClass custom resource. Then the content will be mounted to the container's file system.
To validate, once the pod is started, you should see the new mounted content at the volume path specified in your deployment yaml.
kubectl exec -it nginx-secrets-store-inline ls /mnt/secrets-store/
foo
In some cases, you may want to create a Kubernetes Secret to mirror the mounted content. Use the optional secretObjects field to define the desired state of the synced Kubernetes Secret objects.
NOTE: If the provider supports an object alias for the mounted file, then make sure the objectName in secretObjects matches the name of the mounted content. This could be the object name or the object alias.
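For example, with a provider that supports aliases, the synced secret must reference the alias rather than the original object name. The objectAlias field below follows the Azure Key Vault provider's convention and the names are illustrative:

```yaml
spec:
  provider: azure
  parameters:
    objects: |
      array:
        - |
          objectName: "my-kv-secret"   # name of the object in the external store
          objectType: "secret"
          objectAlias: "foo1"          # file name used for the mounted content
  secretObjects:
  - data:
    - key: username
      objectName: foo1                 # must match the alias, not "my-kv-secret"
    secretName: foosecret
    type: Opaque
```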
A SecretProviderClass custom resource should have the following components:
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: my-provider
spec:
  provider: vault          # accepted provider options: azure or vault
  secretObjects:           # [OPTIONAL] SecretObject defines the desired state of synced K8s secret objects
  - data:
    - key: username        # data field to populate
      objectName: foo1     # name of the mounted content to sync; this could be the object name or the object alias
    secretName: foosecret  # name of the Kubernetes Secret object
    type: Opaque           # type of the Kubernetes Secret object, e.g. Opaque, kubernetes.io/tls
NOTE: Here is the list of supported Kubernetes Secret types: Opaque, kubernetes.io/basic-auth, bootstrap.kubernetes.io/token, kubernetes.io/dockerconfigjson, kubernetes.io/dockercfg, kubernetes.io/ssh-auth, kubernetes.io/service-account-token, kubernetes.io/tls.
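For instance, syncing a certificate and key as a kubernetes.io/tls Secret requires the tls.crt and tls.key data keys. The object names below are illustrative, and the provider must mount both the certificate and the key content:

```yaml
secretObjects:
- secretName: tls-secret
  type: kubernetes.io/tls
  data:
  - key: tls.crt       # required key for kubernetes.io/tls secrets
    objectName: mycert # mounted file containing the certificate (illustrative name)
  - key: tls.key       # required key for kubernetes.io/tls secrets
    objectName: mykey  # mounted file containing the private key (illustrative name)
```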
Here is a sample SecretProviderClass custom resource that syncs Kubernetes secrets.
Once the secret is created, you may wish to set an ENV VAR in your deployment to reference the new Kubernetes secret.
spec:
  containers:
  - image: nginx
    name: nginx
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: foosecret
          key: username
Here is a sample deployment yaml that creates an ENV VAR from the synced Kubernetes secret.
This project features a pluggable provider interface that developers can implement to define the actions of the Secrets Store CSI driver. This enables retrieval of sensitive objects stored in enterprise-grade external secrets stores into Kubernetes while continuing to manage those objects outside of Kubernetes.
Here is the list of criteria for supported providers:
- Code audit of the provider implementation to ensure it adheres to the required provider-driver interface, which includes:
- implementation of provider command args https://github.com/kubernetes-sigs/secrets-store-csi-driver/blob/master/pkg/secrets-store/nodeserver.go#L223-L236
- provider binary naming convention and semver convention
- provider binary deployment volume path
- provider logs are written to stdout and stderr so they can be part of the driver logs
- Add provider to the e2e test suite to demonstrate it functions as expected https://github.com/kubernetes-sigs/secrets-store-csi-driver/tree/master/test/bats Please use existing providers e2e tests as a reference.
- If a provider makes any update (not limited to security updates), the provider is expected to update the provider's e2e tests in this repo
Failure to adhere to the criteria for supported providers will result in the removal of the provider from the supported list, and the provider will be subject to another review before it can be added back to the list of supported providers.
When a provider's e2e tests are consistently failing with the latest version of the driver, the driver maintainers will coordinate with the provider maintainers to provide a fix. If the test failures are not resolved within 4 weeks, the provider will be removed from the list of supported providers.
Run unit tests locally with make test.
End-to-end tests run automatically on Prow when a PR is submitted. If you want to run them against a local or remote Kubernetes cluster, make sure to have kubectl, helm, and bats set up in your local environment, then run make e2e-azure or make e2e-vault with custom images.
The job config for the test jobs run for each PR in Prow can be found here.
- To troubleshoot issues with the CSI driver, you can look at logs from the secrets-store container of the CSI driver pod running on the same node as your application pod:
kubectl get pod -o wide   # find the Secrets Store CSI driver pod running on the same node as your application pod
kubectl logs csi-secrets-store-secrets-store-csi-driver-7x44t secrets-store
Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.