When deploying with skaffold, invalid spec.tolerations are introduced #167
@asfourco does it occur with the current chart version as well? I can see in deployment.yaml that …
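(For context, a chart's deployment.yaml template usually renders pod tolerations with the standard Helm scaffold shown below. This is a generic sketch of the common pattern, not necessarily this chart's exact template.)

```yaml
# Typical tolerations block in a Helm deployment.yaml template (illustrative only)
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
```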
Apologies for the late reply. Below are the `skaffold.yaml` and `values.yaml` files we use:

```yaml
# skaffold.yaml
---
apiVersion: skaffold/v4beta10
kind: Config
manifests:
  kustomize:
    paths:
      - .
    buildArgs:
      - --enable-helm
deploy:
  kubeContext: dev_northamerica-northeast2_asfourco-dev
  kubectl:
    defaultNamespace: monitoring
```
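The kustomization.yaml itself is not included in the report. With `--enable-helm`, kustomize inflates the chart from a `helmCharts` entry, which in this setup presumably looks something like the following; the repo URL, chart version, and release name here are assumptions for illustration, not taken from the report:

```yaml
# kustomization.yaml (hypothetical reconstruction, not part of the original report)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: uptime-kuma
    repo: https://dirien.github.io/uptime-kuma-helm/   # assumed chart repository
    version: 2.20.0                                    # assumed; the report mentions 2.18.1 and 2.20.0
    releaseName: uptime-kuma
    namespace: monitoring
    valuesFile: values.yaml
```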
```yaml
# values.yaml
---
serviceAccount:
  create: true
  name: uptime-kuma
ingress:
  enabled: true
  className: main-kong
  annotations:
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod-dns
    konghq.com/protocols: "https"
    konghq.com/https-redirect-status-code: "308"
  hosts:
    - host: uptime.dev.asfourco.app
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: uptime-kuma-tls
      hosts:
        - uptime.dev.asfourco.app
resources:
  requests:
    memory: 64Mi
    cpu: 250m
nodeSelector:
  gingerstack.app/service: public
tolerations:
  - key: asfourco.app/service
    operator: Equal
    value: public
    effect: NoSchedule
volume:
  storageClassName: asfourco-hdd
```

Yes, the issue still occurs with the current helm chart version. The following is the output of `skaffold render`:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/instance: uptime-kuma
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: uptime-kuma
    app.kubernetes.io/version: 1.23.13
    helm.sh/chart: uptime-kuma-2.20.0
  name: uptime-kuma
  namespace: monitoring
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: uptime-kuma
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: uptime-kuma
    app.kubernetes.io/version: 1.23.13
    helm.sh/chart: uptime-kuma-2.20.0
  name: uptime-kuma
  namespace: monitoring
spec:
  ports:
    - name: http
      port: 3001
      protocol: TCP
      targetPort: 3001
  selector:
    app.kubernetes.io/instance: uptime-kuma
    app.kubernetes.io/name: uptime-kuma
  # this definition causes an error
  - tolerations:
  - - effect: NoSchedule
  - key: kubernetes.io/arch
  - operator: Equal
  - value: arm64
  type: ClusterIP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/instance: uptime-kuma
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: uptime-kuma
    app.kubernetes.io/version: 1.23.13
    helm.sh/chart: uptime-kuma-2.20.0
  name: uptime-kuma-pvc
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  storageClassName: asfourco-hdd
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/instance: uptime-kuma
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: uptime-kuma
    app.kubernetes.io/version: 1.23.13
    helm.sh/chart: uptime-kuma-2.20.0
  name: uptime-kuma
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: uptime-kuma
      app.kubernetes.io/name: uptime-kuma
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: uptime-kuma
        app.kubernetes.io/name: uptime-kuma
    spec:
      automountServiceAccountToken: true
      containers:
        - env:
            - name: UPTIME_KUMA_PORT
              value: "3001"
          image: louislam/uptime-kuma:1.23.13-debian
          imagePullPolicy: IfNotPresent
          livenessProbe:
            exec:
              command:
                - extra/healthcheck
            failureThreshold: 3
            initialDelaySeconds: 180
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 2
          name: uptime-kuma
          ports:
            - containerPort: 3001
              name: http
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /
              port: 3001
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 250m
              memory: 64Mi
          securityContext: {}
          volumeMounts:
            - mountPath: /app/data
              name: storage
      nodeSelector:
        gingerstack.app/service: public
      securityContext: {}
      serviceAccountName: uptime-kuma
      tolerations:
        - effect: NoSchedule
          key: asfourco.app/service
          operator: Equal
          value: public
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: uptime-kuma-pvc
      # This is an extra definition that causes the error
      - tolerations:
      - - effect: NoSchedule
      - key: kubernetes.io/arch
      - operator: Equal
      - value: arm64
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod-dns
    konghq.com/https-redirect-status-code: "308"
    konghq.com/protocols: https
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/server-snippets: |
      location / {
        proxy_set_header Upgrade $http_upgrade;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_cache_bypass $http_upgrade;
      }
  labels:
    app.kubernetes.io/instance: uptime-kuma
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: uptime-kuma
    app.kubernetes.io/version: 1.23.13
    helm.sh/chart: uptime-kuma-2.20.0
  name: uptime-kuma
  namespace: monitoring
spec:
  ingressClassName: main-kong
  rules:
    - host: uptime.dev.asfourco.app
      http:
        paths:
          - backend:
              service:
                name: uptime-kuma
                port:
                  number: 3001
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - uptime.dev.asfourco.app
      secretName: uptime-kuma-tls
---
apiVersion: v1
kind: Pod
metadata:
  annotations:
    helm.sh/hook: test
  labels:
    app.kubernetes.io/instance: uptime-kuma
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: uptime-kuma
    app.kubernetes.io/version: 1.23.13
    helm.sh/chart: uptime-kuma-2.20.0
  name: uptime-kuma-test-connection
  namespace: monitoring
spec:
  containers:
    - args:
        - uptime-kuma:3001
      command:
        - wget
      image: busybox
      name: wget
  restartPolicy: Never
  tolerations:
    - effect: NoSchedule
      key: kubernetes.io/arch
      operator: Equal
      value: arm64
```

For reference:

```console
$ skaffold version
v2.13.2
$ kustomize version
v5.5.0
```
Context:
Helm chart version: 2.18.1
We are using skaffold and kustomize to build and deploy uptime-kuma to our GKE cluster in our GCP project. The normal flow is to verify that the manifest is properly constructed using `skaffold render`, and then to deploy with `skaffold deploy`.

Issue
With this new helm chart version, new tolerations are introduced incorrectly. Error message:

Workaround
1. `skaffold render > deployment.yaml`
2. Remove the invalid `spec.tolerations` entries in the Service and Deployment sections (the flagged lines, repeated below).
3. `kubectl apply -f deployment.yaml`
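For clarity, these are the stray entries to delete before applying, copied from the `skaffold render` output above; they appear once under the Service spec and once under the Deployment pod spec:

```yaml
# invalid entries injected into the rendered Service and Deployment specs
- tolerations:
- - effect: NoSchedule
- key: kubernetes.io/arch
- operator: Equal
- value: arm64
```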