
When deploying with skaffold invalid spec.tolerations are introduced #167

Open
asfourco opened this issue Jun 25, 2024 · 2 comments
Labels: bug

Comments

@asfourco

Context:

Helm chart version: 2.18.1

We are using skaffold and kustomize to build and deploy uptime-kuma to the GKE cluster in our GCP project. Our normal flow is to verify that the manifest is constructed correctly using skaffold render, and then deploy it with skaffold deploy.
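
For illustration, that flow is just the following two commands (a sketch of the workflow described above):

skaffold render > deployment.yaml   # inspect the generated manifests first
skaffold deploy                     # then apply them to the cluster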

Issue

With this new Helm chart version, extra tolerations are rendered in invalid locations, and the deploy fails with the following error:

skaffold deploy
Starting deploy...
 - serviceaccount/uptime-kuma unchanged
 - persistentvolumeclaim/uptime-kuma-pvc unchanged
 - ingress.networking.k8s.io/uptime-kuma configured
 - pod/uptime-kuma-test-connection configured
 - Error from server (Invalid): error when applying patch:
 - {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/instance\":\"uptime-kuma\",\"app.kubernetes.io/managed-by\":\"Helm\",\"app.kubernetes.io/name\":\"uptime-kuma\",\"app.kubernetes.io/version\":\"1.23.13\",\"helm.sh/chart\":\"uptime-kuma-2.18.1\",\"skaffold.dev/run-id\":\"7f18b68f-680a-4d69-b18d-ec521b94de34\"},\"name\":\"uptime-kuma\",\"namespace\":\"monitoring\"},\"spec\":{\"ports\":[{\"name\":\"http\",\"port\":3001,\"protocol\":\"TCP\",\"targetPort\":3001}],\"selector\":{\"app.kubernetes.io/instance\":\"uptime-kuma\",\"app.kubernetes.io/name\":\"uptime-kuma\"},\"tolerations\":[{\"effect\":\"NoSchedule\",\"key\":\"kubernetes.io/arch\",\"operator\":\"Equal\",\"value\":\"arm64\"}],\"type\":\"ClusterIP\"}}\n"},"labels":{"skaffold.dev/run-id":"7f18b68f-680a-4d69-b18d-ec521b94de34"}},"spec":{"tolerations":[{"effect":"NoSchedule","key":"kubernetes.io/arch","operator":"Equal","value":"arm64"}]}}
 - to:
 - Resource: "/v1, Resource=services", GroupVersionKind: "/v1, Kind=Service"
 - Name: "uptime-kuma", Namespace: "monitoring"
 - for: "STDIN": error when patching "STDIN": "" is invalid: patch: Invalid value: "map[metadata:map[annotations:map[kubectl.kubernetes.io/last-applied-configuration:{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/instance\":\"uptime-kuma\",\"app.kubernetes.io/managed-by\":\"Helm\",\"app.kubernetes.io/name\":\"uptime-kuma\",\"app.kubernetes.io/version\":\"1.23.13\",\"helm.sh/chart\":\"uptime-kuma-2.18.1\",\"skaffold.dev/run-id\":\"7f18b68f-680a-4d69-b18d-ec521b94de34\"},\"name\":\"uptime-kuma\",\"namespace\":\"monitoring\"},\"spec\":{\"ports\":[{\"name\":\"http\",\"port\":3001,\"protocol\":\"TCP\",\"targetPort\":3001}],\"selector\":{\"app.kubernetes.io/instance\":\"uptime-kuma\",\"app.kubernetes.io/name\":\"uptime-kuma\"},\"tolerations\":[{\"effect\":\"NoSchedule\",\"key\":\"kubernetes.io/arch\",\"operator\":\"Equal\",\"value\":\"arm64\"}],\"type\":\"ClusterIP\"}}\n] labels:map[skaffold.dev/run-id:7f18b68f-680a-4d69-b18d-ec521b94de34]] spec:map[tolerations:[map[effect:NoSchedule key:kubernetes.io/arch operator:Equal value:arm64]]]]": strict decoding error: unknown field "spec.tolerations"
 - Error from server (Invalid): error when applying patch:
 - {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/instance\":\"uptime-kuma\",\"app.kubernetes.io/managed-by\":\"Helm\",\"app.kubernetes.io/name\":\"uptime-kuma\",\"app.kubernetes.io/version\":\"1.23.13\",\"helm.sh/chart\":\"uptime-kuma-2.18.1\",\"skaffold.dev/run-id\":\"7f18b68f-680a-4d69-b18d-ec521b94de34\"},\"name\":\"uptime-kuma\",\"namespace\":\"monitoring\"},\"spec\":{\"replicas\":1,\"selector\":{\"matchLabels\":{\"app.kubernetes.io/instance\":\"uptime-kuma\",\"app.kubernetes.io/name\":\"uptime-kuma\"}},\"strategy\":{\"type\":\"Recreate\"},\"template\":{\"metadata\":{\"labels\":{\"app.kubernetes.io/instance\":\"uptime-kuma\",\"app.kubernetes.io/name\":\"uptime-kuma\",\"skaffold.dev/run-id\":\"7f18b68f-680a-4d69-b18d-ec521b94de34\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"UPTIME_KUMA_PORT\",\"value\":\"3001\"}],\"image\":\"louislam/uptime-kuma:1.23.13-debian\",\"imagePullPolicy\":\"IfNotPresent\",\"livenessProbe\":{\"exec\":{\"command\":[\"extra/healthcheck\"]},\"initialDelaySeconds\":15,\"timeoutSeconds\":2},\"name\":\"uptime-kuma\",\"ports\":[{\"containerPort\":3001,\"name\":\"http\",\"protocol\":\"TCP\"}],\"readinessProbe\":{\"httpGet\":{\"path\":\"/\",\"port\":3001,\"scheme\":\"HTTP\"},\"initialDelaySeconds\":5},\"resources\":{\"requests\":{\"cpu\":\"250m\",\"memory\":\"64Mi\"}},\"securityContext\":{},\"volumeMounts\":[{\"mountPath\":\"/app/data\",\"name\":\"storage\"}]}],\"nodeSelector\":{\"app/service\":\"public\"},\"securityContext\":{},\"serviceAccountName\":\"uptime-kuma\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"key\":\"app/service\",\"operator\":\"Equal\",\"value\":\"public\"}],\"volumes\":[{\"name\":\"storage\",\"persistentVolumeClaim\":{\"claimName\":\"uptime-kuma-pvc\"}}]}},\"tolerations\":[{\"effect\":\"NoSchedule\",\"key\":\"kubernetes.io/arch\",\"operator\":\"Equal\",\"value\":\"arm64\"}]}}\n"},"labels":{"skaffold.dev/run-id":"7f18b68f-680a-4d69-b18d-ec521b94de34"}},"spec":{"template":{"metadata":{"labels":{"skaffold.dev/run-id":"7f18b68f-680a-4d69-b18d-ec521b94de34"}}},"tolerations":[{"effect":"NoSchedule","key":"kubernetes.io/arch","operator":"Equal","value":"arm64"}]}}
 - to:
 - Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
 - Name: "uptime-kuma", Namespace: "monitoring"
 - for: "STDIN": error when patching "STDIN": "" is invalid: patch: Invalid value: "map[metadata:map[annotations:map[kubectl.kubernetes.io/last-applied-configuration:{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/instance\":\"uptime-kuma\",\"app.kubernetes.io/managed-by\":\"Helm\",\"app.kubernetes.io/name\":\"uptime-kuma\",\"app.kubernetes.io/version\":\"1.23.13\",\"helm.sh/chart\":\"uptime-kuma-2.18.1\",\"skaffold.dev/run-id\":\"7f18b68f-680a-4d69-b18d-ec521b94de34\"},\"name\":\"uptime-kuma\",\"namespace\":\"monitoring\"},\"spec\":{\"replicas\":1,\"selector\":{\"matchLabels\":{\"app.kubernetes.io/instance\":\"uptime-kuma\",\"app.kubernetes.io/name\":\"uptime-kuma\"}},\"strategy\":{\"type\":\"Recreate\"},\"template\":{\"metadata\":{\"labels\":{\"app.kubernetes.io/instance\":\"uptime-kuma\",\"app.kubernetes.io/name\":\"uptime-kuma\",\"skaffold.dev/run-id\":\"7f18b68f-680a-4d69-b18d-ec521b94de34\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"UPTIME_KUMA_PORT\",\"value\":\"3001\"}],\"image\":\"louislam/uptime-kuma:1.23.13-debian\",\"imagePullPolicy\":\"IfNotPresent\",\"livenessProbe\":{\"exec\":{\"command\":[\"extra/healthcheck\"]},\"initialDelaySeconds\":15,\"timeoutSeconds\":2},\"name\":\"uptime-kuma\",\"ports\":[{\"containerPort\":3001,\"name\":\"http\",\"protocol\":\"TCP\"}],\"readinessProbe\":{\"httpGet\":{\"path\":\"/\",\"port\":3001,\"scheme\":\"HTTP\"},\"initialDelaySeconds\":5},\"resources\":{\"requests\":{\"cpu\":\"250m\",\"memory\":\"64Mi\"}},\"securityContext\":{},\"volumeMounts\":[{\"mountPath\":\"/app/data\",\"name\":\"storage\"}]}],\"nodeSelector\":{\"app/service\":\"public\"},\"securityContext\":{},\"serviceAccountName\":\"uptime-kuma\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"key\":\"app/service\",\"operator\":\"Equal\",\"value\":\"public\"}],\"volumes\":[{\"name\":\"storage\",\"persistentVolumeClaim\":{\"claimName\":\"uptime-kuma-pvc\"}}]}},\"tolerations\":[{\"effect\":\"NoSchedule\",\"key\":\"kubernetes.io/arch\",\"operator\":\"Equal\",\"value\":\"arm64\"}]}}\n] labels:map[skaffold.dev/run-id:7f18b68f-680a-4d69-b18d-ec521b94de34]] spec:map[template:map[metadata:map[labels:map[skaffold.dev/run-id:7f18b68f-680a-4d69-b18d-ec521b94de34]]] tolerations:[map[effect:NoSchedule key:kubernetes.io/arch operator:Equal value:arm64]]]]": strict decoding error: unknown field "spec.tolerations"
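
In short, the API server rejects both patches because tolerations is not a valid field of a Service spec, and on a Deployment it only belongs inside the pod template spec. A minimal sketch of where the field is and is not valid (illustrative fragments, not the exact chart output):

# Invalid: what the chart renders, rejected with unknown field "spec.tolerations"
kind: Service
spec:
  tolerations:
    - effect: NoSchedule
      key: kubernetes.io/arch
      operator: Equal
      value: arm64

# Valid: on a Deployment, tolerations belong under the pod template spec
kind: Deployment
spec:
  template:
    spec:
      tolerations:
        - effect: NoSchedule
          key: kubernetes.io/arch
          operator: Equal
          value: arm64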

Workaround

  1. Render the manifest: skaffold render > deployment.yaml
  2. Remove the spec.tolerations entries from the Service and Deployment sections (see the sketch after this list)
  3. Deploy with kubectl apply -f deployment.yaml
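
For step 2, a minimal sketch assuming mikefarah's yq v4 is installed (the kind filter leaves the valid tolerations on the test Pod and inside the Deployment's pod template untouched; exact syntax may need adjusting):

# strip the stray spec.tolerations from the Service and Deployment documents only
yq -i 'del(select(.kind == "Service" or .kind == "Deployment") | .spec.tolerations)' deployment.yaml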
dirsigler added the bug label Jun 27, 2024
@1ms-ms

1ms-ms commented Sep 13, 2024

@asfourco does this occur with the current chart version as well? I can see in deployment.yaml that the tolerations are placed correctly, and I can deploy the whole chart. Can you share your skaffold.yaml and values.yaml?

@asfourco
Author

asfourco commented Oct 13, 2024

Apologies for the late reply. Below are the contents of skaffold.yaml and values.yaml.

# skaffold.yaml
---
apiVersion: skaffold/v4beta10
kind: Config

manifests:
  kustomize:
    paths:
      - .
    buildArgs:
      - --enable-helm

deploy:
  kubeContext: dev_northamerica-northeast2_asfourco-dev
  kubectl:
    defaultNamespace: monitoring


# values.yaml
---

serviceAccount:
  create: true
  name: uptime-kuma

ingress:
  enabled: true
  className: main-kong
  annotations:
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod-dns
    konghq.com/protocols: "https"
    konghq.com/https-redirect-status-code: "308"
  hosts:
    - host: uptime.dev.asfourco.app
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: uptime-kuma-tls
      hosts:
        - uptime.dev.asfourco.app

resources:
  requests:
    memory: 64Mi
    cpu: 250m

nodeSelector:
  gingerstack.app/service: public

tolerations:
  - key: asfourco.app/service
    operator: Equal
    value: public
    effect: NoSchedule

volume:
  storageClassName: asfourco-hdd

Yes, the issue still occurs with the current Helm chart version. The following is the output of skaffold render using the new uptime-kuma Helm chart:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/instance: uptime-kuma
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: uptime-kuma
    app.kubernetes.io/version: 1.23.13
    helm.sh/chart: uptime-kuma-2.20.0
  name: uptime-kuma
  namespace: monitoring
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: uptime-kuma
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: uptime-kuma
    app.kubernetes.io/version: 1.23.13
    helm.sh/chart: uptime-kuma-2.20.0
  name: uptime-kuma
  namespace: monitoring
spec:
  ports:
    - name: http
      port: 3001
      protocol: TCP
      targetPort: 3001
  selector:
    app.kubernetes.io/instance: uptime-kuma
    app.kubernetes.io/name: uptime-kuma

  # this definition causes an error
  tolerations:
    - effect: NoSchedule
      key: kubernetes.io/arch
      operator: Equal
      value: arm64
  type: ClusterIP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/instance: uptime-kuma
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: uptime-kuma
    app.kubernetes.io/version: 1.23.13
    helm.sh/chart: uptime-kuma-2.20.0
  name: uptime-kuma-pvc
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  storageClassName: asfourco-hdd
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/instance: uptime-kuma
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: uptime-kuma
    app.kubernetes.io/version: 1.23.13
    helm.sh/chart: uptime-kuma-2.20.0
  name: uptime-kuma
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: uptime-kuma
      app.kubernetes.io/name: uptime-kuma
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: uptime-kuma
        app.kubernetes.io/name: uptime-kuma
    spec:
      automountServiceAccountToken: true
      containers:
        - env:
            - name: UPTIME_KUMA_PORT
              value: "3001"
          image: louislam/uptime-kuma:1.23.13-debian
          imagePullPolicy: IfNotPresent
          livenessProbe:
            exec:
              command:
                - extra/healthcheck
            failureThreshold: 3
            initialDelaySeconds: 180
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 2
          name: uptime-kuma
          ports:
            - containerPort: 3001
              name: http
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /
              port: 3001
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 250m
              memory: 64Mi
          securityContext: {}
          volumeMounts:
            - mountPath: /app/data
              name: storage
      nodeSelector:
        gingerstack.app/service: public
      securityContext: {}
      serviceAccountName: uptime-kuma
      tolerations:
        - effect: NoSchedule
          key: asfourco.app/service
          operator: Equal
          value: public
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: uptime-kuma-pvc

  # This is an extra definition that causes the error
  tolerations:
    - effect: NoSchedule
      key: kubernetes.io/arch
      operator: Equal
      value: arm64
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod-dns
    konghq.com/https-redirect-status-code: "308"
    konghq.com/protocols: https
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/server-snippets: |
      location / {
        proxy_set_header Upgrade $http_upgrade;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   Upgrade $http_upgrade;
        proxy_cache_bypass $http_upgrade;
      }
  labels:
    app.kubernetes.io/instance: uptime-kuma
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: uptime-kuma
    app.kubernetes.io/version: 1.23.13
    helm.sh/chart: uptime-kuma-2.20.0
  name: uptime-kuma
  namespace: monitoring
spec:
  ingressClassName: main-kong
  rules:
    - host: uptime.dev.asfourco.app
      http:
        paths:
          - backend:
              service:
                name: uptime-kuma
                port:
                  number: 3001
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - uptime.dev.asfourco.app
      secretName: uptime-kuma-tls
---
apiVersion: v1
kind: Pod
metadata:
  annotations:
    helm.sh/hook: test
  labels:
    app.kubernetes.io/instance: uptime-kuma
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: uptime-kuma
    app.kubernetes.io/version: 1.23.13
    helm.sh/chart: uptime-kuma-2.20.0
  name: uptime-kuma-test-connection
  namespace: monitoring
spec:
  containers:
    - args:
        - uptime-kuma:3001
      command:
        - wget
      image: busybox
      name: wget
  restartPolicy: Never
  tolerations:
    - effect: NoSchedule
      key: kubernetes.io/arch
      operator: Equal
      value: arm64
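
To double-check the rendered manifest without touching the cluster, a server-side dry run should surface the same strict decoding errors (sketch; assumes the render above was saved to deployment.yaml and kubectl points at the same context):

kubectl apply --dry-run=server -n monitoring -f deployment.yaml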

For reference:

$ skaffold version
v2.13.2
$ kustomize version
v5.5.0
