LivenessProbes configuration breaks statefulset #1827

Open
71g3pf4c3 opened this issue Sep 23, 2024 · 2 comments

@71g3pf4c3

Report

If the perconaxtradbcluster.spec.haproxy.livenessProbes.httpGet field is set, the HAProxy StatefulSet is generated with both the default exec probe and the configured httpGet probe. Since a Kubernetes probe may define only one handler type, this breaks the StatefulSet spec.
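
For context: the Kubernetes API allows exactly one handler type per probe (exec, httpGet, tcpSocket, or grpc), and the API server rejects a pod spec that sets more than one, with an error along the lines of "may not specify more than 1 handler type". A minimal valid probe carries a single handler, e.g.:

livenessProbe:
  httpGet:          # exactly one handler type may be set
    path: /healthz
    port: 8080
    scheme: HTTP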

More about the problem

PerconaXtraDBCluster manifest (just https://github.com/percona/percona-xtradb-cluster-operator/blob/v1.15.0/deploy/cr.yaml with an added httpGet probe):

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  name: cluster1
  finalizers:
    - percona.com/delete-pxc-pods-in-order
spec:
  crVersion: 1.15.0
  tls:
    enabled: true
  updateStrategy: SmartUpdate
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: disabled
    schedule: "0 4 * * *"
  pxc:
    size: 3
    image: percona/percona-xtradb-cluster:8.0.36-28.1
    autoRecovery: true
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    podDisruptionBudget:
      maxUnavailable: 1
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 6G
    gracePeriod: 600
  haproxy:
    livenessProbes:
      httpGet:
        path: /healthz
        port: 8080
        scheme: HTTP
    enabled: true
    size: 3
    image: percona/haproxy:2.8.5
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    podDisruptionBudget:
      maxUnavailable: 1
    gracePeriod: 30
  proxysql:
    enabled: false
    size: 3
    image: percona/proxysql2:2.5.5
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 2G
    podDisruptionBudget:
      maxUnavailable: 1
    gracePeriod: 30
  logcollector:
    enabled: true
    image: percona/percona-xtradb-cluster-operator:1.15.0-logcollector-fluentbit3.1.4
    resources:
      requests:
        memory: 100M
        cpu: 200m
  pmm:
    enabled: false
    image: percona/pmm-client:2.42.0
    serverHost: monitoring-service
    resources:
      requests:
        memory: 150M
        cpu: 300m
  backup:
    image: percona/percona-xtradb-cluster-operator:1.15.0-pxc8.0-backup-pxb8.0.35
    pitr:
      enabled: false
      storageName: STORAGE-NAME-HERE
      timeBetweenUploads: 60
      timeoutSeconds: 60
    storages:
      s3-us-west:
        type: s3
        verifyTLS: true
        s3:
          bucket: S3-BACKUP-BUCKET-NAME-HERE
          credentialsSecret: my-cluster-name-backup-s3
          region: us-west-2
      azure-blob:
        type: azure
        azure:
          credentialsSecret: azure-secret
          container: test
      fs-pvc:
        type: filesystem
        volume:
          persistentVolumeClaim:
            accessModes: [ "ReadWriteOnce" ]
            resources:
              requests:
                storage: 6G
    schedule:
      - name: "daily-backup"
        schedule: "0 0 * * *"
        keep: 5
        storageName: fs-pvc

Generated HAProxy StatefulSet snippet (note the probe carries both an exec and an httpGet handler):

        livenessProbe:
          exec:
            command:
            - /opt/percona/haproxy_liveness_check.sh
          failureThreshold: 4
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
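
The exec and httpGet handlers above are mutually exclusive. Presumably the operator should let the CR-supplied handler replace the default exec handler, producing something like the sketch below (threshold values are taken from the snippet above; keeping the operator defaults for them is an assumption):

        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          failureThreshold: 4
          initialDelaySeconds: 60
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5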

Steps to reproduce

  1. Install kind
  2. Install pxc-operator
  3. Apply the manifest above

Versions

  1. Kubernetes 1.29.2 (kind)
  2. Operator 1.15.0
  3. Database percona/percona-xtradb-cluster:8.0.36-28.1

Anything else?

No response

@71g3pf4c3 71g3pf4c3 added the bug label Sep 23, 2024
@hors
Collaborator

hors commented Sep 26, 2024

Hi @71g3pf4c3, do you want to be able to set custom livenessProbes via the CR?

@71g3pf4c3
Author

Hi @71g3pf4c3, do you want to be able to set custom livenessProbes via the CR?

Yes, I want to use HTTP probes instead of exec probes to reduce the fork count on worker nodes; many PXC and HAProxy instances cause high system saturation.
