Describe the bug
We're using the following values.yaml in JSON format (not HCL). At no point do we explicitly assign a value to the VAULT_DISABLE_MLOCK environment variable.
The above configuration gets converted into the following Kubernetes secret. The problem is that the rendered config contains an unwanted "disable_mlock = true" line outside of the JSON-formatted configuration.
This leads to the vault pods getting stuck in CrashLoopBackOff status with the following error
Vault chart version: v0.27.0. Tested with every version down to v0.20.0; the issue appears in all of them.
To Reproduce
Steps to reproduce the behavior:
helm upgrade vault hashicorp/vault --namespace vault --create-namespace --install --wait -f charts/vault/values.yaml
Expected behavior
Expected the vault-config config map to be valid JSON without any additional HCL-formatted properties.
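For illustration, the raw-string pattern described above looks roughly like the sketch below. This is hypothetical, not the actual values.yaml from the report, and the JSON body is just a plausible minimal config.

```yaml
# Hypothetical values.yaml sketch: server config passed as a raw JSON string
# (illustrative only; not the reporter's actual file)
server:
  standalone:
    enabled: true
    config: |
      {
        "ui": true,
        "listener": {
          "tcp": {
            "address": "[::]:8200",
            "tls_disable": 1
          }
        },
        "storage": {
          "file": {
            "path": "/vault/data"
          }
        }
      }
# Per the report, the chart renders this string together with an extra HCL
# line, disable_mlock = true, that sits outside the JSON braces, so the
# resulting config is no longer valid JSON.
```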
I encountered this same issue. I read the docs at https://developer.hashicorp.com/vault/docs/platform/k8s/helm/configuration#config, and since they describe config as "A raw string of extra HCL or JSON [configuration](https://developer.hashicorp.com/vault/docs/configuration) for Vault servers", I also tried the raw-JSON-string approach.
After that, looking at the documentation more closely and noticing that config is documented as (string or object: "{}"), I set it to a YAML object instead and it templated correctly.
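For reference, a minimal sketch of the object form (keys assumed to mirror the chart's default standalone config; not the exact values used here):

```yaml
# values.yaml with config as a YAML object instead of a raw string
# (minimal sketch; adjust listener/storage settings to your setup)
server:
  standalone:
    enabled: true
    config:
      ui: true
      listener:
        tcp:
          address: "[::]:8200"
          cluster_address: "[::]:8201"
          tls_disable: 1
      storage:
        file:
          path: /vault/data
```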
This does seem to work, although Helm spits out a nasty warning:
coalesce.go:289: warning: destination for vault.server.ha.config is a table. Ignoring non-table value (ui = true
listener "tcp" {
tls_disable = 1
address = "[::]:8200"
cluster_address = "[::]:8201"
}
storage "consul" {
path = "vault"
address = "HOST_IP:8500"
}
service_registration "kubernetes" {}
# Example configuration for using auto-unseal, using Google Cloud KMS. The
# GKMS keys must already exist, and the cluster must have a service account
# that is authorized to access GCP KMS.
#seal "gcpckms" {
# project = "vault-helm-dev-246514"
# region = "global"
# key_ring = "vault-helm-unseal-kr"
# crypto_key = "vault-helm-unseal-key"
#}
# Example configuration for enabling Prometheus metrics.
# If you are using Prometheus Operator you can enable a ServiceMonitor resource below.
# You may wish to enable unauthenticated metrics in the listener block above.
#telemetry {
# prometheus_retention_time = "30s"
# disable_hostname = true
#}
)
coalesce.go:289: warning: destination for vault.server.standalone.config is a table. Ignoring non-table value (ui = true
listener "tcp" {
tls_disable = 1
address = "[::]:8200"
cluster_address = "[::]:8201"
# Enable unauthenticated metrics access (necessary for Prometheus Operator)
#telemetry {
# unauthenticated_metrics_access = "true"
#}
}
storage "file" {
path = "/vault/data"
}
# Example configuration for using auto-unseal, using Google Cloud KMS. The
# GKMS keys must already exist, and the cluster must have a service account
# that is authorized to access GCP KMS.
#seal "gcpckms" {
# project = "vault-helm-dev"
# region = "global"
# key_ring = "vault-helm-unseal-kr"
# crypto_key = "vault-helm-unseal-key"
#}
# Example configuration for enabling Prometheus metrics in your config.
#telemetry {
# prometheus_retention_time = "30s"
# disable_hostname = true
#}
)
I think this is just expected, though, since config accepts two possible types and Helm can't merge the table (map) type with the chart's current raw-string default.
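To illustrate where the warning comes from, here is a rough sketch, assuming the chart's default values keep config as a multi-line string (which is what the warning output above is printing):

```yaml
# Chart's default values.yaml (abridged): config is a multi-line HCL string
server:
  standalone:
    config: |
      ui = true
      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
      }
---
# User values.yaml: config is a map ("table"). While coalescing values,
# Helm sees a table at vault.server.standalone.config and ignores the
# chart's non-table string default, printing the coalesce.go warning above.
server:
  standalone:
    config:
      ui: true
```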