feat(helm): update chart rook-ceph-cluster to v1.16.0 #142

Open

wants to merge 1 commit into base: main

Conversation

renovate[bot] (Contributor) commented Feb 10, 2024

This PR contains the following updates:

Package: rook-ceph-cluster · Update type: minor · Change: v1.13.3 -> v1.16.0

Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

rook/rook (rook-ceph-cluster)

v1.16.0

Compare Source

v1.15.7

Compare Source

Improvements

Rook v1.15.7 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.15.6

Compare Source

Improvements

Rook v1.15.6 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.15.5

Compare Source

Improvements

Rook v1.15.5 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.15.4

Compare Source

Improvements

Rook v1.15.4 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.15.3

Compare Source

Improvements

Rook v1.15.3 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.15.2

Compare Source

Improvements

Rook v1.15.2 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.15.1

Compare Source

Improvements

Rook v1.15.1 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.15.0

Compare Source

Upgrade Guide

To upgrade from previous versions of Rook, see the Rook upgrade guide.

Breaking Changes

  • The minimum supported Kubernetes version is raised to v1.26.
  • During CephBlockPool updates, Rook will now return an error if an invalid device class is specified. Pools with invalid device classes may start failing until the correct device class is specified. For more details, see #14057.
  • Rook has deprecated CSI network "holder" pods. If there are pods named csi-*plugin-holder-* in the Rook operator namespace, see the detailed documentation to disable them. This deprecation process will be required before upgrading to the future Rook v1.16.
  • Ceph COSI driver images have been updated. This impacts existing COSI Buckets, BucketClaims, and BucketAccesses. Update existing clusters following the guide here.
  • CephObjectStore, CephObjectStoreUser, and OBC endpoint behavior has changed when CephObjectStore spec.hosting configurations are set. Use the new spec.hosting.advertiseEndpoint config to define the required behavior as documented (see the sketch after this list).
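
A minimal sketch of the new endpoint setting, assuming the dnsName/port/useTls fields described in the Rook v1.15 object store documentation (all names and values are illustrative):

apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store              # illustrative store name
  namespace: rook-ceph
spec:
  hosting:
    # The single endpoint advertised to CephObjectStoreUsers and OBCs
    advertiseEndpoint:
      dnsName: s3.example.com # illustrative DNS name
      port: 443
      useTls: true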

Features

  • Added support for Ceph Squid (v19), in addition to Reef (v18) and Quincy (v17). Quincy support will be removed in Rook v1.16.
  • Ceph-CSI driver v3.12, including new options for RBD, log rotation, and updated sidecar images.
  • Allow updating the device class of OSDs, if allowDeviceClassUpdate: true is set in the CephCluster CR.
  • Allow updating the CRUSH weight of an OSD, if allowOsdCrushWeightUpdate: true is set in the CephCluster CR (both flags are sketched after this list).
  • Use fully-qualified image names (docker.io/rook/ceph) in operator manifests and helm charts.
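
A minimal sketch of the two new opt-in flags, assuming they sit under spec.storage of the CephCluster CR as described in the v1.15 release notes (cluster name and namespace are illustrative):

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  storage:
    # Let Rook change the device class of existing OSDs
    allowDeviceClassUpdate: true
    # Let Rook update OSD CRUSH weights, e.g. after a disk is resized
    allowOsdCrushWeightUpdate: true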

Experimental Features

  • CephObjectStore support for Keystone authentication for S3 and Swift. See the Object store documentation to configure.
  • CSI operator: CSI settings are moving to CRs managed by a new operator. Once enabled, Rook converts the settings previously defined in the operator configmap or env vars into the new CRs managed by the CSI operator. Enabling it is a two-step process; a minimal opt-in sketch follows.
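
A minimal sketch of the opt-in flag, assuming the ROOK_USE_CSI_OPERATOR setting from the Rook v1.15 CSI operator documentation (treat the exact key and its location as assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-operator-config
  namespace: rook-ceph
data:
  # Opt in to the experimental CSI operator; Rook then converts the
  # CSI settings from this configmap and env vars into the new CRs.
  ROOK_USE_CSI_OPERATOR: "true"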

v1.14.12

Compare Source

Improvements

Rook v1.14.12 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.14.11

Compare Source

Improvements

Rook v1.14.11 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.14.10

Compare Source

Improvements

Rook v1.14.10 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.14.9

Compare Source

Improvements

Rook v1.14.9 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.14.8

Compare Source

Improvements

Rook v1.14.8 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.14.7

Compare Source

What's Changed

monitoring: fix CephPoolGrowthWarning expression (#14346, @matofeder)
monitoring: Set honor labels on the service monitor (#14339, @travisn)

Full Changelog: rook/rook@v1.14.6...v1.14.7

v1.14.6

Compare Source

What's Changed

v1.14.5

Compare Source

Improvements

Rook v1.14.5 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.14.4

Compare Source

Improvements

Rook v1.14.4 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.14.3

Compare Source

Improvements

Rook v1.14.3 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.14.2

Compare Source

Improvements

Rook v1.14.2 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.14.1

Compare Source

Improvements

Rook v1.14.1 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.14.0

Compare Source

Upgrade Guide

To upgrade from previous versions of Rook, see the Rook upgrade guide.

Breaking Changes

  • The minimum supported version of Kubernetes is v1.25. Upgrade to Kubernetes v1.25 or higher before upgrading Rook.
  • The image repository and tag settings are specified separately in the helm chart values.yaml for the CSI images. Helm users who previously specified the CSI images with the image setting will need to update their values.yaml with the separate repository and tag settings (see the sketch after this list).
  • Rook is beginning the process of deprecating CSI network "holder" pods. If there are pods named csi-*plugin-holder-* in the Rook operator namespace, see the holder pod deprecation documentation to disable them. Migration of affected clusters is optional for v1.14, but will be required in a future release.
  • The Rook operator config CSI_ENABLE_READ_AFFINITY was removed. v1.13 clusters that have modified this value to be "true" must set the option as desired in each CephCluster as documented here before upgrading to v1.14.
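
A minimal sketch of the new CSI image layout in the operator chart's values.yaml, assuming the repository/tag split used by the rook-ceph chart (image tags are illustrative):

csi:
  cephcsi:
    # previously a single setting, e.g. image: quay.io/cephcsi/cephcsi:v3.10.2
    repository: quay.io/cephcsi/cephcsi
    tag: v3.11.0
  registrar:
    repository: registry.k8s.io/sig-storage/csi-node-driver-registrar
    tag: v2.10.0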

Features

  • Kubernetes versions v1.25 through v1.29 are supported. K8s v1.30 will be supported as soon as released.
  • Ceph daemon pods using the default service account now use a new rook-ceph-default service account.
  • A custom Ceph application can be applied to a CephBlockPool CR.
  • Object stores can be created with shared metadata and data pools. Isolation between object stores is enabled via RADOS namespaces. This configuration is recommended to limit the number of pools when multiple object stores are created.
  • Support for VolumeSnapshotGroup is available for the RBD and CephFS CSI drivers.
  • Support for virtual-host-style S3 buckets is added to the CephObjectStore, by adding hosting.dnsNames to the object store (see the sketch after this list).
  • A static prefix can be specified for the CSI drivers and OBC provisioner (the default prefix is the rook-ceph namespace).
  • Azure Key Vault KMS support is added for storing OSD encryption keys.
  • Additional status columns added to the kubectl output for Rook CRDs.
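
A minimal sketch of an object store combining shared pools and virtual-host-style buckets, assuming the spec.sharedPools and spec.hosting.dnsNames fields described for Rook v1.14 (all names are illustrative):

apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: store-a               # illustrative store name
  namespace: rook-ceph
spec:
  sharedPools:
    metadataPoolName: rgw-meta-pool # shared across object stores
    dataPoolName: rgw-data-pool     # isolated per store via RADOS namespaces
  hosting:
    dnsNames:                 # enables virtual-host-style S3 bucket addressing
      - store-a.example.com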

v1.13.10

Compare Source

Improvements

Rook v1.13.10 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.13.9

Compare Source

Improvements

Rook v1.13.9 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.13.8

Compare Source

Improvements

Rook v1.13.8 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.13.7

Compare Source

Improvements

Rook v1.13.7 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.13.6

Compare Source

Improvements

Rook v1.13.6 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.13.5

Compare Source

Improvements

Rook v1.13.5 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.

v1.13.4

Compare Source

Improvements

Rook v1.13.4 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.


Configuration

📅 Schedule: Branch creation - "on saturday" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

github-actions bot commented Feb 10, 2024

--- kubernetes/apps/rook-ceph/rook-ceph/cluster Kustomization: flux-system/rook-ceph-cluster HelmRelease: rook-ceph/rook-ceph-cluster

+++ kubernetes/apps/rook-ceph/rook-ceph/cluster Kustomization: flux-system/rook-ceph-cluster HelmRelease: rook-ceph/rook-ceph-cluster

@@ -13,13 +13,13 @@

     spec:
       chart: rook-ceph-cluster
       sourceRef:
         kind: HelmRepository
         name: rook-ceph
         namespace: flux-system
-      version: v1.13.3
+      version: v1.16.0
   dependsOn:
   - name: rook-ceph-operator
     namespace: rook-ceph
   - name: snapshot-controller
     namespace: storage
   install:

github-actions bot commented Feb 10, 2024

--- HelmRelease: rook-ceph/rook-ceph-cluster Deployment: rook-ceph/rook-ceph-tools

+++ HelmRelease: rook-ceph/rook-ceph-cluster Deployment: rook-ceph/rook-ceph-tools

@@ -17,13 +17,13 @@

         app: rook-ceph-tools
     spec:
       dnsPolicy: ClusterFirstWithHostNet
       hostNetwork: true
       containers:
       - name: rook-ceph-tools
-        image: quay.io/ceph/ceph:v18.2.1
+        image: quay.io/ceph/ceph:v19.2.0
         command:
         - /bin/bash
         - -c
         - |
           # Replicate the script from toolbox.sh inline so the ceph image
           # can be run directly, instead of requiring the rook toolbox
@@ -101,24 +101,24 @@

           valueFrom:
             secretKeyRef:
               name: rook-ceph-mon
               key: ceph-username
         resources:
           limits:
-            cpu: 500m
             memory: 1Gi
           requests:
             cpu: 100m
             memory: 128Mi
         volumeMounts:
         - mountPath: /etc/ceph
           name: ceph-config
         - name: mon-endpoint-volume
           mountPath: /etc/rook
         - name: ceph-admin-secret
           mountPath: /var/lib/rook-ceph-mon
+      serviceAccountName: rook-ceph-default
       volumes:
       - name: ceph-admin-secret
         secret:
           secretName: rook-ceph-mon
           optional: false
           items:
--- HelmRelease: rook-ceph/rook-ceph-cluster CephCluster: rook-ceph/rook-ceph

+++ HelmRelease: rook-ceph/rook-ceph-cluster CephCluster: rook-ceph/rook-ceph

@@ -6,13 +6,13 @@

   namespace: rook-ceph
 spec:
   monitoring:
     enabled: true
   cephVersion:
     allowUnsupported: false
-    image: quay.io/ceph/ceph:v18.2.1
+    image: quay.io/ceph/ceph:v19.2.0
   cleanupPolicy:
     allowUninstallWithVolumes: false
     confirmation: ''
     sanitizeDisks:
       dataSource: zero
       iteration: 1
@@ -87,34 +87,30 @@

     mon: system-node-critical
     osd: system-node-critical
   removeOSDsIfOutAndSafeToRemove: false
   resources:
     cleanup:
       limits:
-        cpu: 500m
         memory: 1Gi
       requests:
         cpu: 500m
         memory: 100Mi
     crashcollector:
       limits:
-        cpu: 500m
         memory: 60Mi
       requests:
         cpu: 100m
         memory: 60Mi
     exporter:
       limits:
-        cpu: 250m
         memory: 128Mi
       requests:
         cpu: 50m
         memory: 50Mi
     logcollector:
       limits:
-        cpu: 500m
         memory: 1Gi
       requests:
         cpu: 100m
         memory: 100Mi
     mgr:
       limits:
@@ -122,13 +118,12 @@

         memory: 2Gi
       requests:
         cpu: 500m
         memory: 512Mi
     mgr-sidecar:
       limits:
-        cpu: 500m
         memory: 100Mi
       requests:
         cpu: 100m
         memory: 40Mi
     mon:
       limits:
@@ -163,8 +158,9 @@

       name: hc-master-2
     - devices:
       - name: /dev/disk/by-id/dm-name-hc--worker--2--vg-ceph--disk
       name: hc-master-3
     useAllDevices: false
     useAllNodes: false
+  upgradeOSDRequiresHealthyPGs: false
   waitTimeoutForHealthyOSDInMinutes: 10
 
--- HelmRelease: rook-ceph/rook-ceph-cluster PrometheusRule: rook-ceph/prometheus-ceph-rules

+++ HelmRelease: rook-ceph/rook-ceph-cluster PrometheusRule: rook-ceph/prometheus-ceph-rules

@@ -261,13 +261,13 @@

         severity: warning
         type: ceph_default
     - alert: CephDeviceFailurePredictionTooHigh
       annotations:
         description: The device health module has determined that devices predicted
           to fail can not be remediated automatically, since too many OSDs would be
-          removed from the cluster to ensure performance and availabililty. Prevent
+          removed from the cluster to ensure performance and availability. Prevent
           data integrity issues by adding new OSDs so that data may be relocated.
         documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#device-health-toomany
         summary: Too many devices are predicted to fail, unable to resolve
       expr: ceph_health_detail{name="DEVICE_HEALTH_TOOMANY"} == 1
       for: 1m
       labels:
@@ -504,13 +504,13 @@

       expr: ceph_health_detail{name="PG_RECOVERY_FULL"} == 1
       for: 1m
       labels:
         oid: 1.3.6.1.4.1.50495.1.2.1.7.5
         severity: critical
         type: ceph_default
-    - alert: CephPGUnavilableBlockingIO
+    - alert: CephPGUnavailableBlockingIO
       annotations:
         description: Data availability is reduced, impacting the cluster's ability
           to service I/O. One or more placement groups (PGs) are in a state that blocks
           I/O.
         documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#pg-availability
         summary: PG is unavailable, blocking I/O
@@ -626,15 +626,15 @@

       labels:
         oid: 1.3.6.1.4.1.50495.1.2.1.8.3
         severity: warning
         type: ceph_default
     - alert: CephNodeNetworkBondDegraded
       annotations:
-        summary: Degraded Bond on Node {{ $labels.instance }}
         description: Bond {{ $labels.master }} is degraded on Node {{ $labels.instance
           }}.
+        summary: Degraded Bond on Node {{ $labels.instance }}
       expr: |
         node_bonding_slaves - node_bonding_active != 0
       labels:
         severity: warning
         type: ceph_default
     - alert: CephNodeDiskspaceWarning
@@ -662,12 +662,23 @@

         > 0))  )
       labels:
         severity: warning
         type: ceph_default
   - name: pools
     rules:
+    - alert: CephPoolGrowthWarning
+      annotations:
+        description: Pool '{{ $labels.name }}' will be full in less than 5 days assuming
+          the average fill-up rate of the past 48 hours.
+        summary: Pool growth rate may soon exceed capacity
+      expr: (predict_linear(ceph_pool_percent_used[2d], 3600 * 24 * 5) * on(pool_id,
+        instance, pod) group_right() ceph_pool_metadata) >= 95
+      labels:
+        oid: 1.3.6.1.4.1.50495.1.2.1.9.2
+        severity: warning
+        type: ceph_default
     - alert: CephPoolBackfillFull
       annotations:
         description: A pool is approaching the near full threshold, which will prevent
           recovery/backfill operations from completing. Consider adding more capacity.
         summary: Free space in a pool is too low for recovery/backfill
       expr: ceph_health_detail{name="POOL_BACKFILLFULL"} > 0
@@ -718,22 +729,113 @@

       expr: ceph_healthcheck_slow_ops > 0
       for: 30s
       labels:
         severity: warning
         type: ceph_default
     - alert: CephDaemonSlowOps
-      for: 30s
-      expr: ceph_daemon_health_metrics{type="SLOW_OPS"} > 0
-      labels:
-        severity: warning
-        type: ceph_default
-      annotations:
-        summary: '{{ $labels.ceph_daemon }} operations are slow to complete'
+      annotations:
         description: '{{ $labels.ceph_daemon }} operations are taking too long to
           process (complaint time exceeded)'
         documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#slow-ops
+        summary: '{{ $labels.ceph_daemon }} operations are slow to complete'
+      expr: ceph_daemon_health_metrics{type="SLOW_OPS"} > 0
+      for: 30s
+      labels:
+        severity: warning
+        type: ceph_default
+  - name: hardware
+    rules:
+    - alert: HardwareStorageError
+      annotations:
+        description: Some storage devices are in error. Check `ceph health detail`.
+        summary: Storage devices error(s) detected
+      expr: ceph_health_detail{name="HARDWARE_STORAGE"} > 0
+      for: 30s
+      labels:
+        oid: 1.3.6.1.4.1.50495.1.2.1.13.1
+        severity: critical
+        type: ceph_default
+    - alert: HardwareMemoryError
+      annotations:
+        description: DIMM error(s) detected. Check `ceph health detail`.
+        summary: DIMM error(s) detected
+      expr: ceph_health_detail{name="HARDWARE_MEMORY"} > 0
+      for: 30s
+      labels:
+        oid: 1.3.6.1.4.1.50495.1.2.1.13.2
+        severity: critical
+        type: ceph_default
+    - alert: HardwareProcessorError
+      annotations:
+        description: Processor error(s) detected. Check `ceph health detail`.
+        summary: Processor error(s) detected
+      expr: ceph_health_detail{name="HARDWARE_PROCESSOR"} > 0
+      for: 30s
+      labels:
+        oid: 1.3.6.1.4.1.50495.1.2.1.13.3
+        severity: critical
+        type: ceph_default
+    - alert: HardwareNetworkError
+      annotations:
+        description: Network error(s) detected. Check `ceph health detail`.
+        summary: Network error(s) detected
+      expr: ceph_health_detail{name="HARDWARE_NETWORK"} > 0
+      for: 30s
+      labels:
+        oid: 1.3.6.1.4.1.50495.1.2.1.13.4
+        severity: critical
+        type: ceph_default
+    - alert: HardwarePowerError
+      annotations:
+        description: Power supply error(s) detected. Check `ceph health detail`.
+        summary: Power supply error(s) detected
+      expr: ceph_health_detail{name="HARDWARE_POWER"} > 0
+      for: 30s
+      labels:
+        oid: 1.3.6.1.4.1.50495.1.2.1.13.5
+        severity: critical
+        type: ceph_default
+    - alert: HardwareFanError
+      annotations:
+        description: Fan error(s) detected. Check `ceph health detail`.
+        summary: Fan error(s) detected
+      expr: ceph_health_detail{name="HARDWARE_FANS"} > 0
+      for: 30s
+      labels:
+        oid: 1.3.6.1.4.1.50495.1.2.1.13.6
+        severity: critical
+        type: ceph_default
+  - name: PrometheusServer
+    rules:
+    - alert: PrometheusJobMissing
+      annotations:
+        description: The prometheus job that scrapes from Ceph MGR is no longer defined,
+          this will effectively mean you'll have no metrics or alerts for the cluster.  Please
+          review the job definitions in the prometheus.yml file of the prometheus
+          instance.
+        summary: The scrape job for Ceph MGR is missing from Prometheus
+      expr: absent(up{job="rook-ceph-mgr"})
+      for: 30s
+      labels:
+        oid: 1.3.6.1.4.1.50495.1.2.1.12.1
+        severity: critical
+        type: ceph_default
+    - alert: PrometheusJobExporterMissing
+      annotations:
+        description: The prometheus job that scrapes from Ceph Exporter is no longer
+          defined, this will effectively mean you'll have no metrics or alerts for
+          the cluster.  Please review the job definitions in the prometheus.yml file
+          of the prometheus instance.
+        summary: The scrape job for Ceph Exporter is missing from Prometheus
+      expr: sum(absent(up{job="rook-ceph-exporter"})) and sum(ceph_osd_metadata{ceph_version=~"^ceph
+        version (1[89]|[2-9][0-9]).*"}) > 0
+      for: 30s
+      labels:
+        oid: 1.3.6.1.4.1.50495.1.2.1.12.1
+        severity: critical
+        type: ceph_default
   - name: rados
     rules:
     - alert: CephObjectMissing
       annotations:
         description: The latest version of a RADOS object can not be found, even though
           all OSDs are up. I/O requests for this object from clients will block (hang).
@@ -760,7 +862,218 @@

       expr: ceph_health_detail{name="RECENT_CRASH"} == 1
       for: 1m
       labels:
         oid: 1.3.6.1.4.1.50495.1.2.1.1.2
         severity: critical
         type: ceph_default
+  - name: rbdmirror
+    rules:
+    - alert: CephRBDMirrorImagesPerDaemonHigh
+      annotations:
+        description: Number of image replications per daemon is not supposed to go
+          beyond threshold 100
+        summary: Number of image replications are now above 100
+      expr: sum by (ceph_daemon, namespace) (ceph_rbd_mirror_snapshot_image_snapshots)
+        > 100
+      for: 1m
+      labels:
+        oid: 1.3.6.1.4.1.50495.1.2.1.10.2
+        severity: critical
+        type: ceph_default
+    - alert: CephRBDMirrorImagesNotInSync
+      annotations:
+        description: Both local and remote RBD mirror images should be in sync.
+        summary: Some of the RBD mirror images are not in sync with the remote counter
+          parts.
+      expr: sum by (ceph_daemon, image, namespace, pool) (topk by (ceph_daemon, image,
+        namespace, pool) (1, ceph_rbd_mirror_snapshot_image_local_timestamp) - topk
+        by (ceph_daemon, image, namespace, pool) (1, ceph_rbd_mirror_snapshot_image_remote_timestamp))
+        != 0
+      for: 1m
+      labels:
+        oid: 1.3.6.1.4.1.50495.1.2.1.10.3
+        severity: critical
+        type: ceph_default
+    - alert: CephRBDMirrorImagesNotInSyncVeryHigh
+      annotations:
+        description: More than 10% of the images have synchronization problems
+        summary: Number of unsynchronized images are very high.
+      expr: count by (ceph_daemon) ((topk by (ceph_daemon, image, namespace, pool)
+        (1, ceph_rbd_mirror_snapshot_image_local_timestamp) - topk by (ceph_daemon,
+        image, namespace, pool) (1, ceph_rbd_mirror_snapshot_image_remote_timestamp))
+        != 0) > (sum by (ceph_daemon) (ceph_rbd_mirror_snapshot_snapshots)*.1)
+      for: 1m
+      labels:
+        oid: 1.3.6.1.4.1.50495.1.2.1.10.4
+        severity: critical
+        type: ceph_default
+    - alert: CephRBDMirrorImageTransferBandwidthHigh
+      annotations:
[Diff truncated by flux-local]

renovate bot changed the title from "fix(helm): update chart rook-ceph-cluster to v1.13.4" to "fix(helm): update chart rook-ceph-cluster to v1.13.5" on Feb 22, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from e4939e6 to 98bfd34 on February 22, 2024 22:31
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 98bfd34 to 524729f on March 7, 2024 22:56
renovate bot changed the title from "fix(helm): update chart rook-ceph-cluster to v1.13.5" to "fix(helm): update chart rook-ceph-cluster to v1.13.6" on Mar 7, 2024
renovate bot changed the title from "fix(helm): update chart rook-ceph-cluster to v1.13.6" to "fix(helm): update chart rook-ceph-cluster to v1.13.7" on Mar 14, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 524729f to b2d542f on March 14, 2024 22:47
renovate bot changed the title from "fix(helm): update chart rook-ceph-cluster to v1.13.7" to "feat(helm): update chart rook-ceph-cluster to v1.14.0" on Apr 3, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from b2d542f to d428a4f on April 3, 2024 22:25
renovate bot changed the title from "feat(helm): update chart rook-ceph-cluster to v1.14.0" to "feat(helm): update chart rook-ceph-cluster to v1.14.1" on Apr 17, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch 2 times, most recently from cc53b06 to 010a978 on April 19, 2024 18:40
renovate bot changed the title from "feat(helm): update chart rook-ceph-cluster to v1.14.1" to "feat(helm): update chart rook-ceph-cluster to v1.14.2" on Apr 19, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 010a978 to 5d26f3e on May 3, 2024 21:47
renovate bot changed the title from "feat(helm): update chart rook-ceph-cluster to v1.14.2" to "feat(helm): update chart rook-ceph-cluster to v1.14.3" on May 3, 2024
renovate bot changed the title from "feat(helm): update chart rook-ceph-cluster to v1.14.3" to "feat(helm): update chart rook-ceph-cluster to v1.14.4" on May 17, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 5d26f3e to 0db77d1 on May 17, 2024 03:51
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 0db77d1 to d94cb65 on May 30, 2024 23:16
renovate bot changed the title from "feat(helm): update chart rook-ceph-cluster to v1.14.4" to "feat(helm): update chart rook-ceph-cluster to v1.14.5" on May 30, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from d94cb65 to 46e328a on June 14, 2024 04:41
renovate bot changed the title from "feat(helm): update chart rook-ceph-cluster to v1.14.5" to "feat(helm): update chart rook-ceph-cluster to v1.14.6" on Jun 14, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 46e328a to 94ad0cf on June 21, 2024 19:15
renovate bot changed the title from "feat(helm): update chart rook-ceph-cluster to v1.14.6" to "feat(helm): update chart rook-ceph-cluster to v1.14.7" on Jun 21, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 94ad0cf to 9945aa7 on July 3, 2024 21:07
renovate bot changed the title from "feat(helm): update chart rook-ceph-cluster to v1.14.7" to "feat(helm): update chart rook-ceph-cluster to v1.14.8" on Jul 3, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 9945aa7 to 224776b on July 25, 2024 22:16
renovate bot changed the title from "feat(helm): update chart rook-ceph-cluster to v1.14.8" to "feat(helm): update chart rook-ceph-cluster to v1.14.9" on Jul 25, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 224776b to a302781 on August 20, 2024 23:29
renovate bot changed the title from "feat(helm): update chart rook-ceph-cluster to v1.14.9" to "feat(helm): update chart rook-ceph-cluster to v1.14.10" on Aug 20, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from a302781 to b038374 on August 21, 2024 02:14
renovate bot changed the title from "feat(helm): update chart rook-ceph-cluster to v1.14.10" to "feat(helm): update chart rook-ceph-cluster to v1.15.0" on Aug 21, 2024
renovate bot changed the title from "feat(helm): update chart rook-ceph-cluster to v1.15.0" to "feat(helm): update chart rook-ceph-cluster to v1.15.1" on Sep 4, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from b038374 to baf7008 on September 4, 2024 22:22
renovate bot changed the title from "feat(helm): update chart rook-ceph-cluster to v1.15.1" to "feat(helm): update chart rook-ceph-cluster to v1.15.2" on Sep 19, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from baf7008 to 5ba203e on September 19, 2024 21:46
renovate bot changed the title from "feat(helm): update chart rook-ceph-cluster to v1.15.2" to "feat(helm): update chart rook-ceph-cluster to v1.15.3" on Oct 3, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 5ba203e to 5e94805 on October 3, 2024 22:38
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 5e94805 to 077a9ec on October 17, 2024 21:08
renovate bot changed the title from "feat(helm): update chart rook-ceph-cluster to v1.15.3" to "feat(helm): update chart rook-ceph-cluster to v1.15.4" on Oct 17, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 077a9ec to 0c5ab74 on November 6, 2024 22:21
renovate bot changed the title from "feat(helm): update chart rook-ceph-cluster to v1.15.4" to "feat(helm): update chart rook-ceph-cluster to v1.15.5" on Nov 6, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 0c5ab74 to f9b6c43 on November 22, 2024 00:46
renovate bot changed the title from "feat(helm): update chart rook-ceph-cluster to v1.15.5" to "feat(helm): update chart rook-ceph-cluster to v1.15.6" on Nov 22, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from f9b6c43 to b251c2e on December 17, 2024 20:47
renovate bot changed the title from "feat(helm): update chart rook-ceph-cluster to v1.15.6" to "feat(helm): update chart rook-ceph-cluster to v1.16.0" on Dec 17, 2024