
json formatted server config converts to a freak vault-config k8s secret which is both hcl and json #1009

Open
dbdimitrov83 opened this issue Mar 24, 2024 · 1 comment
Labels
bug Something isn't working

Comments


dbdimitrov83 commented Mar 24, 2024

Describe the bug
We're using the following values.yaml in JSON format (not HCL). At no point do we explicitly assign a value to the VAULT_DISABLE_MLOCK environment variable.

  ha:
    enabled: true
    replicas: 5
    raft:
      enabled: true
      setNodeId: true
      config: |
        {
          "disable_mlock": true,
          "ui": true,
          "api_addr": "https://HOSTNAME.vault-internal:8200",
          "cluster_addr": "https://HOSTNAME.vault-internal:8201",
          "listener": [
              {
                  "tcp": {
                      "address": "[::]:8200",
                      "cluster_address": "[::]:8201",
                      "tls_cert_file": "/vault/userconfig/vault-tls/tls.crt",
                      "tls_key_file": "/vault/userconfig/vault-tls/tls.key",
                      "tls_client_ca_file": "/vault/userconfig/vault-tls/ca.crt"
                  }
              }
          ],
          "storage": {
              "raft": {
                  "path": "/vault/data",
                  "retry_join": {
                      "auto_join": "provider=k8s namespace=vault label_selector=\"component=server,app.kubernetes.io/instance=vault\"",
                      "auto_join_scheme": "https",
                      "leader_ca_cert_file": "/vault/userconfig/vault-tls/ca.crt",
                      "leader_tls_servername": "HOSTNAME.vault-internal"
                  }
              }
          },
          "service_registration": {
              "kubernetes": {}
          },
          "seal": {
              "gcpckms": {}
          },
          "replication": {
              "resolver_discover_servers": true
          },
          "user_lockout": {
              "all": {
                  "disable_lockout": "true"
              }
          },
          "plugin_directory": "/vault/plugins"
        }

The above configuration gets converted into the following vault-config ConfigMap. The issue is that an unwanted "disable_mlock = true" line is prepended outside of the JSON-formatted configuration.

+ # Source: vault/templates/server-config-configmap.yaml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+   name: vault-config
+   namespace: vault
+   labels:
+     helm.sh/chart: vault-0.27.0
+     app.kubernetes.io/name: vault
+     app.kubernetes.io/instance: vault
+     app.kubernetes.io/managed-by: Helm
+ data:
+   extraconfig-from-values.hcl: |-
+     disable_mlock = true
+     {
+       "disable_mlock": true,
+       "ui": true,
+       "api_addr": "https://hostname.vault-internal:8200/",
+       "cluster_addr": "https://hostname.vault-internal:8201/",
+       "listener": [
+           {
....

This leads to the vault pods getting stuck in CrashLoopBackOff with the following error:

error loading configuration from /tmp/storageconfig.hcl: At 2:1: expected: IDENT | STRING got: LBRACE

vault chart version: v0.27.0
Tested with every version down to v0.20.0 - the issue appears in all of them.

To Reproduce
Steps to reproduce the behavior:

  1. Install chart
  2. Run
    helm upgrade vault hashicorp/vault --namespace vault --create-namespace --install --wait -f charts/vault/values.yaml
  3. See error (vault logs, etc.)

Expected behavior
Expected the vault-config ConfigMap to contain valid JSON without any additional HCL-formatted properties.

@dbdimitrov83 dbdimitrov83 added the bug Something isn't working label Mar 24, 2024
@alex-bezek

I encountered this same issue. I read the docs https://developer.hashicorp.com/vault/docs/platform/k8s/helm/configuration#config and, since they mention "A raw string of extra HCL or JSON [configuration](https://developer.hashicorp.com/vault/docs/configuration) for Vault servers", I also tried the raw-string-of-JSON approach.

Upon getting that error, I looked into the template helper https://github.com/hashicorp/vault-helm/blob/main/templates/_helpers.tpl#L1086-L1096 and saw that if the config is a string, the chart will always prepend disable_mlock = true.
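For context, the string branch of that helper behaves roughly like the following (a paraphrased sketch of the chart's logic, not the exact _helpers.tpl source):

```
{{/* Sketch: when server.ha.config is a raw string, an HCL line is
     prepended before the string is spliced in verbatim. */}}
{{- if eq (typeOf .Values.server.ha.config) "string" }}
disable_mlock = true
{{ tpl .Values.server.ha.config . }}
{{- else }}
{{/* map/object form: rendered as JSON, no HCL line prepended */}}
{{ toJson .Values.server.ha.config }}
{{- end }}
```

So any raw string - HCL or JSON - gets the HCL line stuck in front of it, which only produces a parseable file when the string itself is HCL.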

After seeing that, and reading the documentation more carefully where it says [config] (string or object: "{}"), I set the config to a YAML object instead and it templated correctly.
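For reference, the workaround in values.yaml form (a sketch based on the comment above; key names follow the chart docs, and the object form is rendered to JSON by the chart, so no HCL line gets prepended):

```yaml
server:
  ha:
    enabled: true
    raft:
      enabled: true
      # config as a YAML map (the documented "object" form), not a string
      config:
        disable_mlock: true
        ui: true
        listener:
          - tcp:
              address: "[::]:8200"
              cluster_address: "[::]:8201"
              tls_cert_file: /vault/userconfig/vault-tls/tls.crt
              tls_key_file: /vault/userconfig/vault-tls/tls.key
        storage:
          raft:
            path: /vault/data
```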

This does seem to work, although helm spits out a noisy warning:

coalesce.go:289: warning: destination for vault.server.ha.config is a table. Ignoring non-table value (ui = true

listener "tcp" {
  tls_disable = 1
  address = "[::]:8200"
  cluster_address = "[::]:8201"
}
storage "consul" {
  path = "vault"
  address = "HOST_IP:8500"
}

service_registration "kubernetes" {}

# Example configuration for using auto-unseal, using Google Cloud KMS. The
# GKMS keys must already exist, and the cluster must have a service account
# that is authorized to access GCP KMS.
#seal "gcpckms" {
#   project     = "vault-helm-dev-246514"
#   region      = "global"
#   key_ring    = "vault-helm-unseal-kr"
#   crypto_key  = "vault-helm-unseal-key"
#}

# Example configuration for enabling Prometheus metrics.
# If you are using Prometheus Operator you can enable a ServiceMonitor resource below.
# You may wish to enable unauthenticated metrics in the listener block above.
#telemetry {
#  prometheus_retention_time = "30s"
#  disable_hostname = true
#}
)
coalesce.go:289: warning: destination for vault.server.standalone.config is a table. Ignoring non-table value (ui = true

listener "tcp" {
  tls_disable = 1
  address = "[::]:8200"
  cluster_address = "[::]:8201"
  # Enable unauthenticated metrics access (necessary for Prometheus Operator)
  #telemetry {
  #  unauthenticated_metrics_access = "true"
  #}
}
storage "file" {
  path = "/vault/data"
}

# Example configuration for using auto-unseal, using Google Cloud KMS. The
# GKMS keys must already exist, and the cluster must have a service account
# that is authorized to access GCP KMS.
#seal "gcpckms" {
#   project     = "vault-helm-dev"
#   region      = "global"
#   key_ring    = "vault-helm-unseal-kr"
#   crypto_key  = "vault-helm-unseal-key"
#}

# Example configuration for enabling Prometheus metrics in your config.
#telemetry {
#  prometheus_retention_time = "30s"
#  disable_hostname = true
#}
)

I think this is just expected, though, since there are two possible types and Helm can't merge the table type with the chart's default raw-string value.
