feat(helm): update chart rook-ceph-cluster to v1.16.0 #142
Open
renovate wants to merge 1 commit into main from renovate/rook-ceph-cluster-1.x
Conversation
--- kubernetes/apps/rook-ceph/rook-ceph/cluster Kustomization: flux-system/rook-ceph-cluster HelmRelease: rook-ceph/rook-ceph-cluster
+++ kubernetes/apps/rook-ceph/rook-ceph/cluster Kustomization: flux-system/rook-ceph-cluster HelmRelease: rook-ceph/rook-ceph-cluster
@@ -13,13 +13,13 @@
spec:
chart: rook-ceph-cluster
sourceRef:
kind: HelmRepository
name: rook-ceph
namespace: flux-system
- version: v1.13.3
+ version: v1.16.0
dependsOn:
- name: rook-ceph-operator
namespace: rook-ceph
- name: snapshot-controller
namespace: storage
install: |
--- HelmRelease: rook-ceph/rook-ceph-cluster Deployment: rook-ceph/rook-ceph-tools
+++ HelmRelease: rook-ceph/rook-ceph-cluster Deployment: rook-ceph/rook-ceph-tools
@@ -17,13 +17,13 @@
app: rook-ceph-tools
spec:
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
containers:
- name: rook-ceph-tools
- image: quay.io/ceph/ceph:v18.2.1
+ image: quay.io/ceph/ceph:v19.2.0
command:
- /bin/bash
- -c
- |
# Replicate the script from toolbox.sh inline so the ceph image
# can be run directly, instead of requiring the rook toolbox
@@ -101,24 +101,24 @@
valueFrom:
secretKeyRef:
name: rook-ceph-mon
key: ceph-username
resources:
limits:
- cpu: 500m
memory: 1Gi
requests:
cpu: 100m
memory: 128Mi
volumeMounts:
- mountPath: /etc/ceph
name: ceph-config
- name: mon-endpoint-volume
mountPath: /etc/rook
- name: ceph-admin-secret
mountPath: /var/lib/rook-ceph-mon
+ serviceAccountName: rook-ceph-default
volumes:
- name: ceph-admin-secret
secret:
secretName: rook-ceph-mon
optional: false
items:
--- HelmRelease: rook-ceph/rook-ceph-cluster CephCluster: rook-ceph/rook-ceph
+++ HelmRelease: rook-ceph/rook-ceph-cluster CephCluster: rook-ceph/rook-ceph
@@ -6,13 +6,13 @@
namespace: rook-ceph
spec:
monitoring:
enabled: true
cephVersion:
allowUnsupported: false
- image: quay.io/ceph/ceph:v18.2.1
+ image: quay.io/ceph/ceph:v19.2.0
cleanupPolicy:
allowUninstallWithVolumes: false
confirmation: ''
sanitizeDisks:
dataSource: zero
iteration: 1
@@ -87,34 +87,30 @@
mon: system-node-critical
osd: system-node-critical
removeOSDsIfOutAndSafeToRemove: false
resources:
cleanup:
limits:
- cpu: 500m
memory: 1Gi
requests:
cpu: 500m
memory: 100Mi
crashcollector:
limits:
- cpu: 500m
memory: 60Mi
requests:
cpu: 100m
memory: 60Mi
exporter:
limits:
- cpu: 250m
memory: 128Mi
requests:
cpu: 50m
memory: 50Mi
logcollector:
limits:
- cpu: 500m
memory: 1Gi
requests:
cpu: 100m
memory: 100Mi
mgr:
limits:
@@ -122,13 +118,12 @@
memory: 2Gi
requests:
cpu: 500m
memory: 512Mi
mgr-sidecar:
limits:
- cpu: 500m
memory: 100Mi
requests:
cpu: 100m
memory: 40Mi
mon:
limits:
@@ -163,8 +158,9 @@
name: hc-master-2
- devices:
- name: /dev/disk/by-id/dm-name-hc--worker--2--vg-ceph--disk
name: hc-master-3
useAllDevices: false
useAllNodes: false
+ upgradeOSDRequiresHealthyPGs: false
waitTimeoutForHealthyOSDInMinutes: 10
--- HelmRelease: rook-ceph/rook-ceph-cluster PrometheusRule: rook-ceph/prometheus-ceph-rules
+++ HelmRelease: rook-ceph/rook-ceph-cluster PrometheusRule: rook-ceph/prometheus-ceph-rules
@@ -261,13 +261,13 @@
severity: warning
type: ceph_default
- alert: CephDeviceFailurePredictionTooHigh
annotations:
description: The device health module has determined that devices predicted
to fail can not be remediated automatically, since too many OSDs would be
- removed from the cluster to ensure performance and availabililty. Prevent
+ removed from the cluster to ensure performance and availability. Prevent
data integrity issues by adding new OSDs so that data may be relocated.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#device-health-toomany
summary: Too many devices are predicted to fail, unable to resolve
expr: ceph_health_detail{name="DEVICE_HEALTH_TOOMANY"} == 1
for: 1m
labels:
@@ -504,13 +504,13 @@
expr: ceph_health_detail{name="PG_RECOVERY_FULL"} == 1
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.7.5
severity: critical
type: ceph_default
- - alert: CephPGUnavilableBlockingIO
+ - alert: CephPGUnavailableBlockingIO
annotations:
description: Data availability is reduced, impacting the cluster's ability
to service I/O. One or more placement groups (PGs) are in a state that blocks
I/O.
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#pg-availability
summary: PG is unavailable, blocking I/O
@@ -626,15 +626,15 @@
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.8.3
severity: warning
type: ceph_default
- alert: CephNodeNetworkBondDegraded
annotations:
- summary: Degraded Bond on Node {{ $labels.instance }}
description: Bond {{ $labels.master }} is degraded on Node {{ $labels.instance
}}.
+ summary: Degraded Bond on Node {{ $labels.instance }}
expr: |
node_bonding_slaves - node_bonding_active != 0
labels:
severity: warning
type: ceph_default
- alert: CephNodeDiskspaceWarning
@@ -662,12 +662,23 @@
> 0)) )
labels:
severity: warning
type: ceph_default
- name: pools
rules:
+ - alert: CephPoolGrowthWarning
+ annotations:
+ description: Pool '{{ $labels.name }}' will be full in less than 5 days assuming
+ the average fill-up rate of the past 48 hours.
+ summary: Pool growth rate may soon exceed capacity
+ expr: (predict_linear(ceph_pool_percent_used[2d], 3600 * 24 * 5) * on(pool_id,
+ instance, pod) group_right() ceph_pool_metadata) >= 95
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.9.2
+ severity: warning
+ type: ceph_default
- alert: CephPoolBackfillFull
annotations:
description: A pool is approaching the near full threshold, which will prevent
recovery/backfill operations from completing. Consider adding more capacity.
summary: Free space in a pool is too low for recovery/backfill
expr: ceph_health_detail{name="POOL_BACKFILLFULL"} > 0
@@ -718,22 +729,113 @@
expr: ceph_healthcheck_slow_ops > 0
for: 30s
labels:
severity: warning
type: ceph_default
- alert: CephDaemonSlowOps
- for: 30s
- expr: ceph_daemon_health_metrics{type="SLOW_OPS"} > 0
- labels:
- severity: warning
- type: ceph_default
- annotations:
- summary: '{{ $labels.ceph_daemon }} operations are slow to complete'
+ annotations:
description: '{{ $labels.ceph_daemon }} operations are taking too long to
process (complaint time exceeded)'
documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#slow-ops
+ summary: '{{ $labels.ceph_daemon }} operations are slow to complete'
+ expr: ceph_daemon_health_metrics{type="SLOW_OPS"} > 0
+ for: 30s
+ labels:
+ severity: warning
+ type: ceph_default
+ - name: hardware
+ rules:
+ - alert: HardwareStorageError
+ annotations:
+ description: Some storage devices are in error. Check `ceph health detail`.
+ summary: Storage devices error(s) detected
+ expr: ceph_health_detail{name="HARDWARE_STORAGE"} > 0
+ for: 30s
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.13.1
+ severity: critical
+ type: ceph_default
+ - alert: HardwareMemoryError
+ annotations:
+ description: DIMM error(s) detected. Check `ceph health detail`.
+ summary: DIMM error(s) detected
+ expr: ceph_health_detail{name="HARDWARE_MEMORY"} > 0
+ for: 30s
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.13.2
+ severity: critical
+ type: ceph_default
+ - alert: HardwareProcessorError
+ annotations:
+ description: Processor error(s) detected. Check `ceph health detail`.
+ summary: Processor error(s) detected
+ expr: ceph_health_detail{name="HARDWARE_PROCESSOR"} > 0
+ for: 30s
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.13.3
+ severity: critical
+ type: ceph_default
+ - alert: HardwareNetworkError
+ annotations:
+ description: Network error(s) detected. Check `ceph health detail`.
+ summary: Network error(s) detected
+ expr: ceph_health_detail{name="HARDWARE_NETWORK"} > 0
+ for: 30s
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.13.4
+ severity: critical
+ type: ceph_default
+ - alert: HardwarePowerError
+ annotations:
+ description: Power supply error(s) detected. Check `ceph health detail`.
+ summary: Power supply error(s) detected
+ expr: ceph_health_detail{name="HARDWARE_POWER"} > 0
+ for: 30s
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.13.5
+ severity: critical
+ type: ceph_default
+ - alert: HardwareFanError
+ annotations:
+ description: Fan error(s) detected. Check `ceph health detail`.
+ summary: Fan error(s) detected
+ expr: ceph_health_detail{name="HARDWARE_FANS"} > 0
+ for: 30s
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.13.6
+ severity: critical
+ type: ceph_default
+ - name: PrometheusServer
+ rules:
+ - alert: PrometheusJobMissing
+ annotations:
+ description: The prometheus job that scrapes from Ceph MGR is no longer defined,
+ this will effectively mean you'll have no metrics or alerts for the cluster. Please
+ review the job definitions in the prometheus.yml file of the prometheus
+ instance.
+ summary: The scrape job for Ceph MGR is missing from Prometheus
+ expr: absent(up{job="rook-ceph-mgr"})
+ for: 30s
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.12.1
+ severity: critical
+ type: ceph_default
+ - alert: PrometheusJobExporterMissing
+ annotations:
+ description: The prometheus job that scrapes from Ceph Exporter is no longer
+ defined, this will effectively mean you'll have no metrics or alerts for
+ the cluster. Please review the job definitions in the prometheus.yml file
+ of the prometheus instance.
+ summary: The scrape job for Ceph Exporter is missing from Prometheus
+ expr: sum(absent(up{job="rook-ceph-exporter"})) and sum(ceph_osd_metadata{ceph_version=~"^ceph
+ version (1[89]|[2-9][0-9]).*"}) > 0
+ for: 30s
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.12.1
+ severity: critical
+ type: ceph_default
- name: rados
rules:
- alert: CephObjectMissing
annotations:
description: The latest version of a RADOS object can not be found, even though
all OSDs are up. I/O requests for this object from clients will block (hang).
@@ -760,7 +862,218 @@
expr: ceph_health_detail{name="RECENT_CRASH"} == 1
for: 1m
labels:
oid: 1.3.6.1.4.1.50495.1.2.1.1.2
severity: critical
type: ceph_default
+ - name: rbdmirror
+ rules:
+ - alert: CephRBDMirrorImagesPerDaemonHigh
+ annotations:
+ description: Number of image replications per daemon is not supposed to go
+ beyond threshold 100
+ summary: Number of image replications are now above 100
+ expr: sum by (ceph_daemon, namespace) (ceph_rbd_mirror_snapshot_image_snapshots)
+ > 100
+ for: 1m
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.10.2
+ severity: critical
+ type: ceph_default
+ - alert: CephRBDMirrorImagesNotInSync
+ annotations:
+ description: Both local and remote RBD mirror images should be in sync.
+ summary: Some of the RBD mirror images are not in sync with the remote counter
+ parts.
+ expr: sum by (ceph_daemon, image, namespace, pool) (topk by (ceph_daemon, image,
+ namespace, pool) (1, ceph_rbd_mirror_snapshot_image_local_timestamp) - topk
+ by (ceph_daemon, image, namespace, pool) (1, ceph_rbd_mirror_snapshot_image_remote_timestamp))
+ != 0
+ for: 1m
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.10.3
+ severity: critical
+ type: ceph_default
+ - alert: CephRBDMirrorImagesNotInSyncVeryHigh
+ annotations:
+ description: More than 10% of the images have synchronization problems
+ summary: Number of unsynchronized images are very high.
+ expr: count by (ceph_daemon) ((topk by (ceph_daemon, image, namespace, pool)
+ (1, ceph_rbd_mirror_snapshot_image_local_timestamp) - topk by (ceph_daemon,
+ image, namespace, pool) (1, ceph_rbd_mirror_snapshot_image_remote_timestamp))
+ != 0) > (sum by (ceph_daemon) (ceph_rbd_mirror_snapshot_snapshots)*.1)
+ for: 1m
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.10.4
+ severity: critical
+ type: ceph_default
+ - alert: CephRBDMirrorImageTransferBandwidthHigh
+ annotations:
[Diff truncated by flux-local]
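Notable changes in the rendered diff: the Ceph image moves from quay.io/ceph/ceph:v18.2.1 to v19.2.0 (Reef to Squid), CPU limits are dropped from several daemon resource blocks, the toolbox deployment is pinned to the rook-ceph-default service account, upgradeOSDRequiresHealthyPGs is added to the CephCluster spec, and new alert groups (hardware, PrometheusServer, rbdmirror) arrive along with a CephPoolGrowthWarning rule that uses predict_linear over 48 hours of ceph_pool_percent_used to warn when a pool is projected to reach 95% within five days. After Flux reconciles an upgrade like this, the version bump can be checked with standard kubectl; a minimal sketch (resource and deployment names taken from the diff above):

```sh
# Image requested by the CephCluster CR
kubectl -n rook-ceph get cephcluster rook-ceph \
  -o jsonpath='{.spec.cephVersion.image}{"\n"}'

# Confirm all Ceph daemons have converged on the new release
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph versions
```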
renovate bot changed the title from "fix(helm): update chart rook-ceph-cluster to v1.13.4" to "fix(helm): update chart rook-ceph-cluster to v1.13.5" on Feb 22, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from e4939e6 to 98bfd34 on February 22, 2024 22:31
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 98bfd34 to 524729f on March 7, 2024 22:56
renovate bot changed the title to fix(helm): update chart rook-ceph-cluster to v1.13.6 on Mar 7, 2024
renovate bot changed the title to fix(helm): update chart rook-ceph-cluster to v1.13.7 on Mar 14, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 524729f to b2d542f on March 14, 2024 22:47
renovate bot changed the title to feat(helm): update chart rook-ceph-cluster to v1.14.0 on Apr 3, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from b2d542f to d428a4f on April 3, 2024 22:25
renovate bot changed the title to feat(helm): update chart rook-ceph-cluster to v1.14.1 on Apr 17, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch 2 times, most recently from cc53b06 to 010a978 on April 19, 2024 18:40
renovate bot changed the title to feat(helm): update chart rook-ceph-cluster to v1.14.2 on Apr 19, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 010a978 to 5d26f3e on May 3, 2024 21:47
renovate bot changed the title to feat(helm): update chart rook-ceph-cluster to v1.14.3 on May 3, 2024
renovate bot changed the title to feat(helm): update chart rook-ceph-cluster to v1.14.4 on May 17, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 5d26f3e to 0db77d1 on May 17, 2024 03:51
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 0db77d1 to d94cb65 on May 30, 2024 23:16
renovate bot changed the title to feat(helm): update chart rook-ceph-cluster to v1.14.5 on May 30, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from d94cb65 to 46e328a on June 14, 2024 04:41
renovate bot changed the title to feat(helm): update chart rook-ceph-cluster to v1.14.6 on Jun 14, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 46e328a to 94ad0cf on June 21, 2024 19:15
renovate bot changed the title to feat(helm): update chart rook-ceph-cluster to v1.14.7 on Jun 21, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 94ad0cf to 9945aa7 on July 3, 2024 21:07
renovate bot changed the title to feat(helm): update chart rook-ceph-cluster to v1.14.8 on Jul 3, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 9945aa7 to 224776b on July 25, 2024 22:16
renovate bot changed the title to feat(helm): update chart rook-ceph-cluster to v1.14.9 on Jul 25, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 224776b to a302781 on August 20, 2024 23:29
renovate bot changed the title to feat(helm): update chart rook-ceph-cluster to v1.14.10 on Aug 20, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from a302781 to b038374 on August 21, 2024 02:14
renovate bot changed the title to feat(helm): update chart rook-ceph-cluster to v1.15.0 on Aug 21, 2024
renovate bot changed the title to feat(helm): update chart rook-ceph-cluster to v1.15.1 on Sep 4, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from b038374 to baf7008 on September 4, 2024 22:22
renovate bot changed the title to feat(helm): update chart rook-ceph-cluster to v1.15.2 on Sep 19, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from baf7008 to 5ba203e on September 19, 2024 21:46
renovate bot changed the title to feat(helm): update chart rook-ceph-cluster to v1.15.3 on Oct 3, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 5ba203e to 5e94805 on October 3, 2024 22:38
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 5e94805 to 077a9ec on October 17, 2024 21:08
renovate bot changed the title to feat(helm): update chart rook-ceph-cluster to v1.15.4 on Oct 17, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 077a9ec to 0c5ab74 on November 6, 2024 22:21
renovate bot changed the title to feat(helm): update chart rook-ceph-cluster to v1.15.5 on Nov 6, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from 0c5ab74 to f9b6c43 on November 22, 2024 00:46
renovate bot changed the title to feat(helm): update chart rook-ceph-cluster to v1.15.6 on Nov 22, 2024
renovate bot force-pushed the renovate/rook-ceph-cluster-1.x branch from f9b6c43 to b251c2e on December 17, 2024 20:47
renovate bot changed the title to feat(helm): update chart rook-ceph-cluster to v1.16.0 on Dec 17, 2024
This PR contains the following updates: rook-ceph-cluster v1.13.3 -> v1.16.0

Warning: Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
Release Notes
rook/rook (rook-ceph-cluster)
v1.16.0
Compare Source
v1.15.7
Compare Source
Improvements
Rook v1.15.7 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.15.6
Compare Source
Improvements
Rook v1.15.6 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.15.5
Compare Source
Improvements
Rook v1.15.5 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- Mount /run/udev in the init container for ceph-volume activate (#14901, @guits)
v1.15.4
Compare Source
Improvements
Rook v1.15.4 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.15.3
Compare Source
Improvements
Rook v1.15.3 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.15.2
Compare Source
Improvements
Rook v1.15.2 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.15.1
Compare Source
Improvements
Rook v1.15.1 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- … mon.zones spec (#14636, @BenoitKnecht)
v1.15.0
Compare Source
Upgrade Guide
To upgrade from previous versions of Rook, see the Rook upgrade guide.
Breaking Changes
- Holder pods (csi-*plugin-holder-*): if these exist in the Rook operator namespace, see the detailed documentation to disable them. This deprecation process will be required before upgrading to the future Rook v1.16.
- Object store behavior changes when spec.hosting configurations are set. Use the new spec.hosting.advertiseEndpoint config to define required behavior as documented.
Features
- The device class of OSDs can be updated when allowDeviceClassUpdate: true is set in the CephCluster CR (see the sketch below).
- OSD CRUSH weights can be updated when allowOsdCrushWeightUpdate: true is set in the CephCluster CR.
- The default container images are now fully qualified (docker.io/rook/ceph) in operator manifests and helm charts.
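For orientation, a minimal sketch of where the two new CephCluster toggles sit (field placement follows the Rook storage spec; values are illustrative, not taken from this PR):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  storage:
    # v1.15: allow changing the device class of existing OSDs
    allowDeviceClassUpdate: true
    # v1.15: allow OSD CRUSH weight updates, e.g. after resizing a disk
    allowOsdCrushWeightUpdate: true
```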
Experimental Features
- … operator.yaml.
v1.14.12
Compare Source
Improvements
Rook v1.14.12 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.14.11
Compare Source
Improvements
Rook v1.14.11 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.14.10
Compare Source
Improvements
Rook v1.14.10 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.14.9
Compare Source
Improvements
Rook v1.14.9 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.14.8
Compare Source
Improvements
Rook v1.14.8 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.14.7
Compare Source
What's Changed
monitoring: fix CephPoolGrowthWarning expression (#14346, @matofeder)
monitoring: Set honor labels on the service monitor (#14339, @travisn)
Full Changelog: rook/rook@v1.14.6...v1.14.7
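The second item refers to the Prometheus Operator's honor-labels behavior; a minimal ServiceMonitor sketch for context (names and ports are illustrative, not taken from this chart):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rook-ceph-mgr
  namespace: rook-ceph
spec:
  selector:
    matchLabels:
      app: rook-ceph-mgr
  endpoints:
    - port: http-metrics
      # honor labels: on collision, keep the scraped metric's own labels
      # so Ceph-provided labels are not overwritten by target labels
      honorLabels: true
```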
v1.14.6
Compare Source
What's Changed
v1.14.5
Compare Source
Improvements
Rook v1.14.5 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.14.4
Compare Source
Improvements
Rook v1.14.4 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.14.3
Compare Source
Improvements
Rook v1.14.3 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.14.2
Compare Source
Improvements
Rook v1.14.2 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.14.1
Compare Source
Improvements
Rook v1.14.1 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.14.0
Compare Source
Upgrade Guide
To upgrade from previous versions of Rook, see the Rook upgrade guide.
Breaking Changes
- The repository and tag settings are now specified separately in the helm chart values.yaml for the CSI images. Helm users previously specifying the CSI images with the image setting will need to update their values.yaml with the separate repository and tag settings (see the sketch below).
- Holder pods (csi-*plugin-holder-*): if these exist in the Rook operator namespace, see the holder pod deprecation documentation to disable them. Migration of affected clusters is optional for v1.14, but will be required in a future release.
- The CSI_ENABLE_READ_AFFINITY operator setting was removed. v1.13 clusters that have modified this value to be "true" must set the option as desired in each CephCluster as documented here before upgrading to v1.14.
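To illustrate the first breaking change, a hedged sketch of the split CSI image settings in the operator chart's values.yaml (keys and image coordinates are assumptions about the chart layout, not taken from this PR):

```yaml
csi:
  cephcsi:
    # was a single setting, e.g. image: quay.io/cephcsi/cephcsi:v3.10.2
    repository: quay.io/cephcsi/cephcsi
    tag: v3.11.0
  registrar:
    repository: registry.k8s.io/sig-storage/csi-node-driver-registrar
    tag: v2.10.1
```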
Features
- Pods that were previously using the default service account now use a new rook-ceph-default service account.
- An application can be applied to a CephBlockPool CR.
- … (beyond the rook-ceph namespace).
- Improved kubectl output for Rook CRDs.
v1.13.10
Compare Source
Improvements
Rook v1.13.10 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.13.9
Compare Source
Improvements
Rook v1.13.9 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.13.8
Compare Source
Improvements
Rook v1.13.8 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.13.7
Compare Source
Improvements
Rook v1.13.7 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- … monitoring section of CephCluster to ceph-exporter (#13902, @rkachach)
v1.13.6
Compare Source
Improvements
Rook v1.13.6 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
- Replace the master tag in the values.yaml with the release tag (#13897, @travisn)
v1.13.5
Compare Source
Improvements
Rook v1.13.5 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
v1.13.4
Compare Source
Improvements
Rook v1.13.4 is a patch release limited in scope and focusing on feature additions and bug fixes to the Ceph operator.
Configuration
📅 Schedule: Branch creation - "on saturday" (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.