This document describes how to initialize, unseal, and interact with a deployed Vault cluster.
- Vault Commands (CLI) installed
- Before beginning, create the example Vault cluster
Initialize a new Vault cluster before performing any operations.
- Configure port forwarding between the local machine and the first sealed Vault node:
kubectl -n default get vault example -o jsonpath='{.status.vaultStatus.sealed[0]}' | xargs -0 -I {} kubectl -n default port-forward {} 8200
- Open a new terminal.
- Export the following environment variables for the Vault CLI:
export VAULT_ADDR='https://localhost:8200'
export VAULT_SKIP_VERIFY="true"
- Verify that the Vault server is accessible using the Vault CLI:
$ vault status
Error checking seal status: Error making API request.

URL: GET https://localhost:8200/v1/sys/seal-status
Code: 400. Errors:

* server is not yet initialized
A response confirms that the Vault CLI is ready to interact with the Vault server. However, the output indicates that the Vault server is not initialized.
- Initialize the Vault server to generate the unseal keys and the root token. See Initializing the Vault on how to initialize a Vault cluster.
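For reference, a minimal sketch of the initialization step using the legacy Vault CLI syntax that this guide uses elsewhere. A single key share is for demonstration only; production clusters should use a higher share count and threshold:
# Initialize against the port-forwarded sealed node (VAULT_ADDR and VAULT_SKIP_VERIFY set as above).
# Example only: one key share and a threshold of one.
vault init -key-shares=1 -key-threshold=1
# The command prints the unseal key(s) and the initial root token.
# Store them securely; they cannot be retrieved again later.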
- Configure port forwarding between the local machine and the first sealed Vault node:
kubectl -n default get vault example -o jsonpath='{.status.vaultStatus.sealed[0]}' | xargs -0 -I {} kubectl -n default port-forward {} 8200
- Open a new terminal.
- Export the following environment variables for the Vault CLI:
export VAULT_ADDR='https://localhost:8200'
export VAULT_SKIP_VERIFY="true"
- Unseal the Vault node by using the unseal keys generated from the initialized Vault cluster. See Seal/Unseal a Vault node on how to unseal a Vault node.
The first node that is unsealed in a multi-node Vault cluster will become the active node. The active node holds the leader election lock. The other unsealed nodes become standby.
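As a sketch of the unseal step with the legacy Vault CLI, replace <unseal-key> with one of the keys produced during initialization and repeat the command until the key threshold is reached:
# Run against the port-forwarded sealed node set up above.
vault unseal <unseal-key>
# Check the node afterwards; a successfully unsealed node reports that it is no longer sealed.
vault status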
- Check the active Vault node:
kubectl -n default get vault example -o jsonpath='{.status.vaultStatus.active}'
- Configure port forwarding between the local machine and the active Vault node:
kubectl -n default get vault example -o jsonpath='{.status.vaultStatus.active}' | xargs -0 -I {} kubectl -n default port-forward {} 8200
- Open a new terminal.
- Export the following environment variables for the Vault CLI. The root token authenticates the Vault CLI requests; replace <root-token> with the root token generated during initialization:
export VAULT_ADDR='https://localhost:8200'
export VAULT_SKIP_VERIFY="true"
export VAULT_TOKEN=<root-token>
Consult the Vault Authentication docs for more advanced configuration.
- Write and read an example secret:
$ vault write secret/foo value=bar
$ vault read secret/foo
Key                 Value
---                 -----
refresh_interval    768h0m0s
value               bar
Successful operations indicate that the active Vault node is serving requests.
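The same secret can also be read over Vault's HTTP API, which is what the CLI uses under the hood. A sketch, reusing the port-forward and root token from above (the -k flag matches VAULT_SKIP_VERIFY):
# Read secret/foo through the HTTP API; the value appears under the "data" field of the JSON response.
curl -k -H "X-Vault-Token: $VAULT_TOKEN" https://localhost:8200/v1/secret/foo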
The file audit backend writing to standard output is the most natural way to collect audit logs for Vault running in a container.
- Set up the port-forward connection to the active node and the Vault CLI environment as in the previous section:
kubectl -n default get vault example -o jsonpath='{.status.vaultStatus.active}' | xargs -0 -I {} kubectl -n default port-forward {} 8200
Open another terminal and run:
export VAULT_ADDR='https://localhost:8200'
export VAULT_SKIP_VERIFY="true"
export VAULT_TOKEN=<root-token>
- Enable the file audit backend and write the audit log to standard output:
vault audit-enable file file_path=stdout
Then use a Docker or Kubernetes log collector to save the audit logs for later viewing.
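Because the audit log goes to standard output, it also shows up in the active pod's container logs. A sketch, assuming the Vault container inside the pod is named vault (adjust the container name to your deployment):
# Stream the active node's container logs; audit entries appear as JSON lines.
kubectl -n default get vault example -o jsonpath='{.status.vaultStatus.active}' | xargs -0 -I {} kubectl -n default logs -f -c vault {}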
Vault-operator creates Kubernetes services for accessing Vault deployments.
The service always exposes the active Vault node. It hides failures by switching the service pointer to the currently active node when failover occurs.
The name and namespace of the service are the same as those of the Vault resource. For example, if the Vault resource is named example in the default namespace, the service is also named example in the default namespace.
Applications in the Kubernetes pod network can access the service at https://example.default.svc:8200.
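For example, a pod inside the cluster can confirm that Vault is reachable through the service with a plain HTTP call against the health endpoint (a sketch; -k skips TLS verification, matching VAULT_SKIP_VERIFY used throughout this guide):
# Run from inside any pod on the cluster network; the service always routes to the active node.
curl -k https://example.default.svc:8200/v1/sys/health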
A standby Vault node is initialized and unsealed, but does not hold the leader election lock. The standby node cannot serve user requests. It forwards user requests to the active node. If the active node goes down, a standby node becomes the active node.
- Unseal the next sealed node. See Unsealing a sealed node for more information.
- Verify that the node becomes standby:
$ kubectl -n default get vault example -o jsonpath='{.status.vaultStatus.standby}'
[example-1003480066-jzmwd]
The setup now contains an active and a standby Vault node.
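To see the whole picture at once, the vaultStatus field of the resource lists the active, standby, and sealed nodes together, for example:
# Print the complete HA status of the example Vault cluster.
kubectl -n default get vault example -o jsonpath='{.status.vaultStatus}'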
In an HA Vault setup, when the active node goes down the standby node takes over the active role and starts serving client requests.
To see how it works, terminate the active node, and wait for the standby node to become active.
- Terminate the active Vault node:
kubectl -n default get vault example -o jsonpath='{.status.vaultStatus.active}' | xargs -0 -I {} kubectl -n default delete po {}
The standby node becomes active.
- Verify that the previous standby node is now active:
$ kubectl -n default get vault example -o jsonpath='{.status.vaultStatus.active}'
example-1003480066-jzmwd
Regular Vault operations, such as reading and writing secrets against the active node, should now succeed.
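For example, re-establish the port forward to the new active node and repeat the earlier read, using the same VAULT_ADDR, VAULT_SKIP_VERIFY, and VAULT_TOKEN environment as before:
# Forward local port 8200 to the new active node.
kubectl -n default get vault example -o jsonpath='{.status.vaultStatus.active}' | xargs -0 -I {} kubectl -n default port-forward {} 8200
# In another terminal, read back the secret written earlier.
vault read secret/foo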
Vault-operator recovers any inactive or terminated Vault pods to maintain the size of the cluster.
To see how it works, perform the following:
- Ensure that a Vault node is terminated. If a node has not yet been terminated, follow the instructions in the Automated failover section above.
- Verify that a new sealed Vault node has been created:
$ kubectl -n default get vault example -o jsonpath='{.status.vaultStatus.sealed}'
[example-994933690-h066h]
A new Vault node is created to replace the terminated one. Unseal the node and continue using HA.
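For example, unseal the replacement node by reusing the commands from the earlier sections (legacy Vault CLI syntax; replace <unseal-key> with a key generated during initialization):
# Forward local port 8200 to the newly created sealed node.
kubectl -n default get vault example -o jsonpath='{.status.vaultStatus.sealed[0]}' | xargs -0 -I {} kubectl -n default port-forward {} 8200
# In another terminal, with VAULT_ADDR and VAULT_SKIP_VERIFY set as before:
vault unseal <unseal-key>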