generated from onedr0p/cluster-template
deploy rook-ceph #5
Merged
Conversation
--- kubernetes/flux Kustomization: flux-system/cluster HelmRepository: flux-system/csi-driver-nfs
+++ kubernetes/flux Kustomization: flux-system/cluster HelmRepository: flux-system/csi-driver-nfs
@@ -1,13 +0,0 @@
----
-apiVersion: source.toolkit.fluxcd.io/v1beta2
-kind: HelmRepository
-metadata:
- labels:
- kustomize.toolkit.fluxcd.io/name: cluster
- kustomize.toolkit.fluxcd.io/namespace: flux-system
- name: csi-driver-nfs
- namespace: flux-system
-spec:
- interval: 1h
- url: https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
-
--- kubernetes/flux Kustomization: flux-system/cluster HelmRepository: flux-system/csi-driver-smb
+++ kubernetes/flux Kustomization: flux-system/cluster HelmRepository: flux-system/csi-driver-smb
@@ -1,13 +0,0 @@
----
-apiVersion: source.toolkit.fluxcd.io/v1beta2
-kind: HelmRepository
-metadata:
- labels:
- kustomize.toolkit.fluxcd.io/name: cluster
- kustomize.toolkit.fluxcd.io/namespace: flux-system
- name: csi-driver-smb
- namespace: flux-system
-spec:
- interval: 1h
- url: https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts
-
--- kubernetes/flux Kustomization: flux-system/cluster HelmRepository: flux-system/k8s-gateway
+++ kubernetes/flux Kustomization: flux-system/cluster HelmRepository: flux-system/k8s-gateway
@@ -1,13 +0,0 @@
----
-apiVersion: source.toolkit.fluxcd.io/v1beta2
-kind: HelmRepository
-metadata:
- labels:
- kustomize.toolkit.fluxcd.io/name: cluster
- kustomize.toolkit.fluxcd.io/namespace: flux-system
- name: k8s-gateway
- namespace: flux-system
-spec:
- interval: 1h
- url: https://ori-edge.github.io/k8s_gateway
-
--- kubernetes/flux Kustomization: flux-system/cluster HelmRepository: flux-system/longhorn
+++ kubernetes/flux Kustomization: flux-system/cluster HelmRepository: flux-system/longhorn
@@ -1,13 +0,0 @@
----
-apiVersion: source.toolkit.fluxcd.io/v1beta2
-kind: HelmRepository
-metadata:
- labels:
- kustomize.toolkit.fluxcd.io/name: cluster
- kustomize.toolkit.fluxcd.io/namespace: flux-system
- name: longhorn
- namespace: flux-system
-spec:
- interval: 1h
- url: https://charts.longhorn.io
-
--- kubernetes/flux Kustomization: flux-system/cluster HelmRepository: flux-system/weave-gitops
+++ kubernetes/flux Kustomization: flux-system/cluster HelmRepository: flux-system/weave-gitops
@@ -1,14 +0,0 @@
----
-apiVersion: source.toolkit.fluxcd.io/v1beta2
-kind: HelmRepository
-metadata:
- labels:
- kustomize.toolkit.fluxcd.io/name: cluster
- kustomize.toolkit.fluxcd.io/namespace: flux-system
- name: weave-gitops
- namespace: flux-system
-spec:
- interval: 5m
- type: oci
- url: oci://ghcr.io/weaveworks/charts
-
--- kubernetes/flux Kustomization: flux-system/cluster HelmRepository: flux-system/rook-ceph
+++ kubernetes/flux Kustomization: flux-system/cluster HelmRepository: flux-system/rook-ceph
@@ -0,0 +1,14 @@
+---
+apiVersion: source.toolkit.fluxcd.io/v1beta2
+kind: HelmRepository
+metadata:
+ labels:
+ kustomize.toolkit.fluxcd.io/name: cluster
+ kustomize.toolkit.fluxcd.io/namespace: flux-system
+ name: rook-ceph
+ namespace: flux-system
+spec:
+ interval: 1h
+ timeout: 3m
+ url: https://charts.rook.io/release
+
--- kubernetes/apps Kustomization: flux-system/cluster-apps Namespace: flux-system/rook-ceph
+++ kubernetes/apps Kustomization: flux-system/cluster-apps Namespace: flux-system/rook-ceph
@@ -0,0 +1,10 @@
+---
+apiVersion: v1
+kind: Namespace
+metadata:
+ labels:
+ kustomize.toolkit.fluxcd.io/name: cluster-apps
+ kustomize.toolkit.fluxcd.io/namespace: flux-system
+ kustomize.toolkit.fluxcd.io/prune: disabled
+ name: rook-ceph
+
--- kubernetes/apps Kustomization: flux-system/cluster-apps Kustomization: flux-system/rook-ceph
+++ kubernetes/apps Kustomization: flux-system/cluster-apps Kustomization: flux-system/rook-ceph
@@ -0,0 +1,40 @@
+---
+apiVersion: kustomize.toolkit.fluxcd.io/v1
+kind: Kustomization
+metadata:
+ labels:
+ kustomize.toolkit.fluxcd.io/name: cluster-apps
+ kustomize.toolkit.fluxcd.io/namespace: flux-system
+ name: rook-ceph
+ namespace: flux-system
+spec:
+ commonMetadata:
+ labels:
+ app.kubernetes.io/name: rook-ceph
+ decryption:
+ provider: sops
+ secretRef:
+ name: sops-age
+ interval: 30m
+ path: ./kubernetes/apps/rook-ceph/app
+ postBuild:
+ substituteFrom:
+ - kind: ConfigMap
+ name: cluster-settings
+ - kind: Secret
+ name: cluster-secrets
+ - kind: ConfigMap
+ name: cluster-settings-user
+ optional: true
+ - kind: Secret
+ name: cluster-secrets-user
+ optional: true
+ prune: false
+ retryInterval: 5m30s
+ sourceRef:
+ kind: GitRepository
+ name: home-kubernetes
+ targetNamespace: rook-ceph
+ timeout: 5m
+ wait: false
+
--- kubernetes/apps Kustomization: flux-system/cluster-apps Kustomization: flux-system/rook-ceph-cluster
+++ kubernetes/apps Kustomization: flux-system/cluster-apps Kustomization: flux-system/rook-ceph-cluster
@@ -0,0 +1,40 @@
+---
+apiVersion: kustomize.toolkit.fluxcd.io/v1
+kind: Kustomization
+metadata:
+ labels:
+ kustomize.toolkit.fluxcd.io/name: cluster-apps
+ kustomize.toolkit.fluxcd.io/namespace: flux-system
+ name: rook-ceph-cluster
+ namespace: flux-system
+spec:
+ commonMetadata:
+ labels:
+ app.kubernetes.io/name: rook-ceph-cluster
+ decryption:
+ provider: sops
+ secretRef:
+ name: sops-age
+ interval: 30m
+ path: ./kubernetes/apps/rook-ceph/cluster
+ postBuild:
+ substituteFrom:
+ - kind: ConfigMap
+ name: cluster-settings
+ - kind: Secret
+ name: cluster-secrets
+ - kind: ConfigMap
+ name: cluster-settings-user
+ optional: true
+ - kind: Secret
+ name: cluster-secrets-user
+ optional: true
+ prune: false
+ retryInterval: 5m30s
+ sourceRef:
+ kind: GitRepository
+ name: home-kubernetes
+ targetNamespace: rook-ceph
+ timeout: 15m
+ wait: false
+
--- kubernetes/apps/rook-ceph/app Kustomization: flux-system/rook-ceph HelmRelease: rook-ceph/rook-ceph-operator
+++ kubernetes/apps/rook-ceph/app Kustomization: flux-system/rook-ceph HelmRelease: rook-ceph/rook-ceph-operator
@@ -0,0 +1,47 @@
+---
+apiVersion: helm.toolkit.fluxcd.io/v2beta2
+kind: HelmRelease
+metadata:
+ labels:
+ app.kubernetes.io/name: rook-ceph
+ kustomize.toolkit.fluxcd.io/name: rook-ceph
+ kustomize.toolkit.fluxcd.io/namespace: flux-system
+ name: rook-ceph-operator
+ namespace: rook-ceph
+spec:
+ chart:
+ spec:
+ chart: rook-ceph
+ sourceRef:
+ kind: HelmRepository
+ name: rook-ceph
+ namespace: flux-system
+ version: v1.13.3
+ dependsOn:
+ - name: snapshot-controller
+ namespace: storage
+ install:
+ remediation:
+ retries: 3
+ interval: 30m
+ timeout: 15m
+ uninstall:
+ keepHistory: false
+ upgrade:
+ cleanupOnFail: true
+ remediation:
+ retries: 3
+ values:
+ csi:
+ cephFSKernelMountOptions: ms_mode=prefer-crc
+ enableLiveness: true
+ serviceMonitor:
+ enabled: true
+ monitoring:
+ enabled: true
+ resources:
+ limits: {}
+ requests:
+ cpu: 100m
+ memory: 128Mi
+
--- kubernetes/apps/rook-ceph/cluster Kustomization: flux-system/rook-ceph-cluster HelmRelease: rook-ceph/rook-ceph-cluster
+++ kubernetes/apps/rook-ceph/cluster Kustomization: flux-system/rook-ceph-cluster HelmRelease: rook-ceph/rook-ceph-cluster
@@ -0,0 +1,146 @@
+---
+apiVersion: helm.toolkit.fluxcd.io/v2beta2
+kind: HelmRelease
+metadata:
+ labels:
+ app.kubernetes.io/name: rook-ceph-cluster
+ kustomize.toolkit.fluxcd.io/name: rook-ceph-cluster
+ kustomize.toolkit.fluxcd.io/namespace: flux-system
+ name: rook-ceph-cluster
+ namespace: rook-ceph
+spec:
+ chart:
+ spec:
+ chart: rook-ceph-cluster
+ sourceRef:
+ kind: HelmRepository
+ name: rook-ceph
+ namespace: flux-system
+ version: v1.13.3
+ dependsOn:
+ - name: rook-ceph-operator
+ namespace: rook-ceph
+ - name: snapshot-controller
+ namespace: storage
+ install:
+ remediation:
+ retries: 3
+ interval: 30m
+ timeout: 15m
+ uninstall:
+ keepHistory: false
+ upgrade:
+ cleanupOnFail: true
+ remediation:
+ retries: 3
+ values:
+ cephBlockPools:
+ - name: ceph-blockpool
+ spec:
+ failureDomain: host
+ replicated:
+ size: 3
+ storageClass:
+ allowVolumeExpansion: true
+ enabled: true
+ isDefault: true
+ name: ceph-block
+ parameters:
+ csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
+ csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
+ csi.storage.k8s.io/fstype: ext4
+ csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
+ csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
+ csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
+ csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
+ imageFeatures: layering
+ imageFormat: '2'
+ reclaimPolicy: Delete
+ cephBlockPoolsVolumeSnapshotClass:
+ deletionPolicy: Delete
+ enabled: true
+ isDefault: false
+ name: csi-ceph-blockpool
+ cephClusterSpec:
+ crashCollector:
+ disable: false
+ dashboard:
+ enabled: true
+ ssl: false
+ urlPrefix: /
+ network:
+ connections:
+ requireMsgr2: true
+ provider: host
+ placement:
+ mgr:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: node-role.kubernetes.io/control-plane
+ operator: Exists
+ mon:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: node-role.kubernetes.io/control-plane
+ operator: Exists
+ resources:
+ mgr:
+ limits:
+ cpu: 2000m
+ memory: 2Gi
+ requests:
+ cpu: 500m
+ memory: 512Mi
+ mon:
+ limits:
+ cpu: 4000m
+ memory: 4Gi
+ requests:
+ cpu: 1000m
+ memory: 1Gi
+ osd:
+ limits:
+ cpu: 4000m
+ memory: 8Gi
+ requests:
+ cpu: 1000m
+ memory: 4Gi
+ storage:
+ config:
+ osdsPerDevice: '1'
+ nodes:
+ - devices:
+ - name: /dev/sda
+ name: odroid-01
+ - devices:
+ - name: /dev/sda
+ name: odroid-02
+ - devices:
+ - name: /dev/sda
+ name: odroid-03
+ useAllDevices: false
+ useAllNodes: false
+ configOverride: |
+ [global]
+ bdev_enable_discard = true
+ bdev_async_discard = true
+ osd_class_update_on_start = false
+ ingress:
+ dashboard:
+ host:
+ name: rook.${SECRET_DOMAIN}
+ path: /
+ ingressClassName: internal
+ tls:
+ - hosts:
+ - rook.${SECRET_DOMAIN}
+ monitoring:
+ createPrometheusRules: true
+ enabled: true
+ toolbox:
+ enabled: true
+ |
I was getting errors from `flux diff`.
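For reference, the per-Kustomization diffs above can be reproduced locally with the flux CLI. A minimal sketch, assuming the flux CLI (v2+) is installed and `KUBECONFIG` points at the target cluster; the Kustomization name and path match the manifests in this PR:

```shell
# Sketch only: diff the in-repo rook-ceph Kustomization against the live cluster.
# Guarded so it degrades gracefully when the flux CLI is not installed.
if command -v flux >/dev/null 2>&1; then
  flux diff kustomization rook-ceph \
    --path ./kubernetes/apps/rook-ceph/app
else
  echo "flux CLI not installed; skipping diff"
fi
```

Tools like flux-local render and compare the same manifests in CI without needing cluster access, which is what produced the diff output in this PR.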
--- HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/rook-ceph-osd
+++ HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/rook-ceph-osd
@@ -0,0 +1,13 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: rook-ceph-osd
+ namespace: rook-ceph
+ labels:
+ operator: rook
+ storage-backend: ceph
+ app.kubernetes.io/part-of: rook-ceph-operator
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/created-by: helm
+
--- HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/rook-ceph-mgr
+++ HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/rook-ceph-mgr
@@ -0,0 +1,13 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: rook-ceph-mgr
+ namespace: rook-ceph
+ labels:
+ operator: rook
+ storage-backend: ceph
+ app.kubernetes.io/part-of: rook-ceph-operator
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/created-by: helm
+
--- HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/rook-ceph-cmd-reporter
+++ HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/rook-ceph-cmd-reporter
@@ -0,0 +1,13 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: rook-ceph-cmd-reporter
+ namespace: rook-ceph
+ labels:
+ operator: rook
+ storage-backend: ceph
+ app.kubernetes.io/part-of: rook-ceph-operator
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/created-by: helm
+
--- HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/rook-ceph-purge-osd
+++ HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/rook-ceph-purge-osd
@@ -0,0 +1,7 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: rook-ceph-purge-osd
+ namespace: rook-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/rook-ceph-rgw
+++ HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/rook-ceph-rgw
@@ -0,0 +1,13 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: rook-ceph-rgw
+ namespace: rook-ceph
+ labels:
+ operator: rook
+ storage-backend: ceph
+ app.kubernetes.io/part-of: rook-ceph-operator
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/created-by: helm
+
--- HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/rook-ceph-system
+++ HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/rook-ceph-system
@@ -0,0 +1,13 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: rook-ceph-system
+ namespace: rook-ceph
+ labels:
+ operator: rook
+ storage-backend: ceph
+ app.kubernetes.io/part-of: rook-ceph-operator
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/created-by: helm
+
--- HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/rook-csi-cephfs-plugin-sa
+++ HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/rook-csi-cephfs-plugin-sa
@@ -0,0 +1,7 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: rook-csi-cephfs-plugin-sa
+ namespace: rook-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/rook-csi-cephfs-provisioner-sa
+++ HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/rook-csi-cephfs-provisioner-sa
@@ -0,0 +1,7 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: rook-csi-cephfs-provisioner-sa
+ namespace: rook-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/rook-csi-rbd-plugin-sa
+++ HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/rook-csi-rbd-plugin-sa
@@ -0,0 +1,7 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: rook-csi-rbd-plugin-sa
+ namespace: rook-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/rook-csi-rbd-provisioner-sa
+++ HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/rook-csi-rbd-provisioner-sa
@@ -0,0 +1,7 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: rook-csi-rbd-provisioner-sa
+ namespace: rook-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/objectstorage-provisioner
+++ HelmRelease: rook-ceph/rook-ceph-operator ServiceAccount: rook-ceph/objectstorage-provisioner
@@ -0,0 +1,11 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: objectstorage-provisioner
+ namespace: rook-ceph
+ labels:
+ app.kubernetes.io/part-of: container-object-storage-interface
+ app.kubernetes.io/component: driver-ceph
+ app.kubernetes.io/name: cosi-driver-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator ConfigMap: rook-ceph/rook-ceph-operator-config
+++ HelmRelease: rook-ceph/rook-ceph-operator ConfigMap: rook-ceph/rook-ceph-operator-config
@@ -0,0 +1,240 @@
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: rook-ceph-operator-config
+ namespace: rook-ceph
+data:
+ ROOK_LOG_LEVEL: INFO
+ ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS: '15'
+ ROOK_OBC_WATCH_OPERATOR_NAMESPACE: 'true'
+ ROOK_CEPH_ALLOW_LOOP_DEVICES: 'false'
+ ROOK_ENABLE_DISCOVERY_DAEMON: 'false'
+ ROOK_CSI_ENABLE_RBD: 'true'
+ ROOK_CSI_ENABLE_CEPHFS: 'true'
+ CSI_ENABLE_CEPHFS_SNAPSHOTTER: 'true'
+ CSI_ENABLE_NFS_SNAPSHOTTER: 'true'
+ CSI_ENABLE_RBD_SNAPSHOTTER: 'true'
+ CSI_PLUGIN_ENABLE_SELINUX_HOST_MOUNT: 'false'
+ CSI_ENABLE_ENCRYPTION: 'false'
+ CSI_ENABLE_OMAP_GENERATOR: 'false'
+ CSI_ENABLE_HOST_NETWORK: 'true'
+ CSI_ENABLE_METADATA: 'false'
+ CSI_PLUGIN_PRIORITY_CLASSNAME: system-node-critical
+ CSI_PROVISIONER_PRIORITY_CLASSNAME: system-cluster-critical
+ CSI_RBD_FSGROUPPOLICY: File
+ CSI_CEPHFS_FSGROUPPOLICY: File
+ CSI_NFS_FSGROUPPOLICY: File
+ CSI_CEPHFS_KERNEL_MOUNT_OPTIONS: ms_mode=prefer-crc
+ ROOK_CSI_IMAGE_PULL_POLICY: IfNotPresent
+ CSI_ENABLE_CSIADDONS: 'false'
+ ROOK_CSIADDONS_IMAGE: quay.io/csiaddons/k8s-sidecar:v0.8.0
+ CSI_ENABLE_TOPOLOGY: 'false'
+ CSI_ENABLE_READ_AFFINITY: 'false'
+ ROOK_CSI_ENABLE_NFS: 'false'
+ CSI_ENABLE_LIVENESS: 'true'
+ CSI_FORCE_CEPHFS_KERNEL_CLIENT: 'true'
+ CSI_GRPC_TIMEOUT_SECONDS: '150'
+ CSI_PROVISIONER_REPLICAS: '2'
+ CSI_RBD_PROVISIONER_RESOURCE: |
+ - name : csi-provisioner
+ resource:
+ requests:
+ memory: 128Mi
+ cpu: 100m
+ limits:
+ memory: 256Mi
+ cpu: 200m
+ - name : csi-resizer
+ resource:
+ requests:
+ memory: 128Mi
+ cpu: 100m
+ limits:
+ memory: 256Mi
+ cpu: 200m
+ - name : csi-attacher
+ resource:
+ requests:
+ memory: 128Mi
+ cpu: 100m
+ limits:
+ memory: 256Mi
+ cpu: 200m
+ - name : csi-snapshotter
+ resource:
+ requests:
+ memory: 128Mi
+ cpu: 100m
+ limits:
+ memory: 256Mi
+ cpu: 200m
+ - name : csi-rbdplugin
+ resource:
+ requests:
+ memory: 512Mi
+ cpu: 250m
+ limits:
+ memory: 1Gi
+ cpu: 500m
+ - name : csi-omap-generator
+ resource:
+ requests:
+ memory: 512Mi
+ cpu: 250m
+ limits:
+ memory: 1Gi
+ cpu: 500m
+ - name : liveness-prometheus
+ resource:
+ requests:
+ memory: 128Mi
+ cpu: 50m
+ limits:
+ memory: 256Mi
+ cpu: 100m
+ CSI_RBD_PLUGIN_RESOURCE: |
+ - name : driver-registrar
+ resource:
+ requests:
+ memory: 128Mi
+ cpu: 50m
+ limits:
+ memory: 256Mi
+ cpu: 100m
+ - name : csi-rbdplugin
+ resource:
+ requests:
+ memory: 512Mi
+ cpu: 250m
+ limits:
+ memory: 1Gi
+ cpu: 500m
+ - name : liveness-prometheus
+ resource:
+ requests:
+ memory: 128Mi
+ cpu: 50m
+ limits:
+ memory: 256Mi
+ cpu: 100m
+ CSI_CEPHFS_PROVISIONER_RESOURCE: |
+ - name : csi-provisioner
+ resource:
+ requests:
+ memory: 128Mi
+ cpu: 100m
+ limits:
+ memory: 256Mi
+ cpu: 200m
+ - name : csi-resizer
+ resource:
+ requests:
+ memory: 128Mi
+ cpu: 100m
+ limits:
+ memory: 256Mi
+ cpu: 200m
+ - name : csi-attacher
+ resource:
+ requests:
+ memory: 128Mi
+ cpu: 100m
+ limits:
+ memory: 256Mi
+ cpu: 200m
+ - name : csi-snapshotter
+ resource:
+ requests:
+ memory: 128Mi
+ cpu: 100m
+ limits:
+ memory: 256Mi
+ cpu: 200m
+ - name : csi-cephfsplugin
+ resource:
+ requests:
+ memory: 512Mi
+ cpu: 250m
+ limits:
+ memory: 1Gi
+ cpu: 500m
+ - name : liveness-prometheus
+ resource:
+ requests:
+ memory: 128Mi
+ cpu: 50m
+ limits:
+ memory: 256Mi
+ cpu: 100m
+ CSI_CEPHFS_PLUGIN_RESOURCE: |
+ - name : driver-registrar
+ resource:
+ requests:
+ memory: 128Mi
+ cpu: 50m
+ limits:
+ memory: 256Mi
+ cpu: 100m
+ - name : csi-cephfsplugin
+ resource:
+ requests:
+ memory: 512Mi
+ cpu: 250m
+ limits:
+ memory: 1Gi
+ cpu: 500m
+ - name : liveness-prometheus
+ resource:
+ requests:
+ memory: 128Mi
+ cpu: 50m
+ limits:
+ memory: 256Mi
+ cpu: 100m
+ CSI_NFS_PROVISIONER_RESOURCE: |
+ - name : csi-provisioner
+ resource:
+ requests:
+ memory: 128Mi
+ cpu: 100m
+ limits:
+ memory: 256Mi
+ cpu: 200m
+ - name : csi-nfsplugin
+ resource:
+ requests:
+ memory: 512Mi
+ cpu: 250m
+ limits:
+ memory: 1Gi
+ cpu: 500m
+ - name : csi-attacher
+ resource:
+ requests:
+ memory: 512Mi
+ cpu: 250m
+ limits:
+ memory: 1Gi
+ cpu: 500m
+ CSI_NFS_PLUGIN_RESOURCE: |
+ - name : driver-registrar
+ resource:
+ requests:
+ memory: 128Mi
+ cpu: 50m
+ limits:
+ memory: 256Mi
+ cpu: 100m
+ - name : csi-nfsplugin
+ resource:
+ requests:
+ memory: 512Mi
+ cpu: 250m
+ limits:
+ memory: 1Gi
+ cpu: 500m
+ CSI_CEPHFS_ATTACH_REQUIRED: 'true'
+ CSI_RBD_ATTACH_REQUIRED: 'true'
+ CSI_NFS_ATTACH_REQUIRED: 'true'
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/rook-ceph-system
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/rook-ceph-system
@@ -0,0 +1,44 @@
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-system
+ labels:
+ operator: rook
+ storage-backend: ceph
+ app.kubernetes.io/part-of: rook-ceph-operator
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/created-by: helm
+rules:
+- apiGroups:
+ - ''
+ resources:
+ - pods
+ - pods/log
+ verbs:
+ - get
+ - list
+- apiGroups:
+ - ''
+ resources:
+ - pods/exec
+ verbs:
+ - create
+- apiGroups:
+ - csiaddons.openshift.io
+ resources:
+ - networkfences
+ verbs:
+ - create
+ - get
+ - update
+ - delete
+ - watch
+ - list
+- apiGroups:
+ - apiextensions.k8s.io
+ resources:
+ - customresourcedefinitions
+ verbs:
+ - get
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/rook-ceph-cluster-mgmt
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/rook-ceph-cluster-mgmt
@@ -0,0 +1,33 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: rook-ceph-cluster-mgmt
+ labels:
+ operator: rook
+ storage-backend: ceph
+ app.kubernetes.io/part-of: rook-ceph-operator
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/created-by: helm
+rules:
+- apiGroups:
+ - ''
+ - apps
+ - extensions
+ resources:
+ - secrets
+ - pods
+ - pods/log
+ - services
+ - configmaps
+ - deployments
+ - daemonsets
+ verbs:
+ - get
+ - list
+ - watch
+ - patch
+ - create
+ - update
+ - delete
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/rook-ceph-global
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/rook-ceph-global
@@ -0,0 +1,188 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: rook-ceph-global
+ labels:
+ operator: rook
+ storage-backend: ceph
+ app.kubernetes.io/part-of: rook-ceph-operator
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/created-by: helm
+rules:
+- apiGroups:
+ - ''
+ resources:
+ - pods
+ - nodes
+ - nodes/proxy
+ - secrets
+ - configmaps
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - ''
+ resources:
+ - events
+ - persistentvolumes
+ - persistentvolumeclaims
+ - endpoints
+ - services
+ verbs:
+ - get
+ - list
+ - watch
+ - patch
+ - create
+ - update
+ - delete
+- apiGroups:
+ - storage.k8s.io
+ resources:
+ - storageclasses
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - batch
+ resources:
+ - jobs
+ - cronjobs
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - delete
+ - deletecollection
+- apiGroups:
+ - ceph.rook.io
+ resources:
+ - cephclients
+ - cephclusters
+ - cephblockpools
+ - cephfilesystems
+ - cephnfses
+ - cephobjectstores
+ - cephobjectstoreusers
+ - cephobjectrealms
+ - cephobjectzonegroups
+ - cephobjectzones
+ - cephbuckettopics
+ - cephbucketnotifications
+ - cephrbdmirrors
+ - cephfilesystemmirrors
+ - cephfilesystemsubvolumegroups
+ - cephblockpoolradosnamespaces
+ - cephcosidrivers
+ verbs:
+ - get
+ - list
+ - watch
+ - update
+- apiGroups:
+ - ceph.rook.io
+ resources:
+ - cephclients/status
+ - cephclusters/status
+ - cephblockpools/status
+ - cephfilesystems/status
+ - cephnfses/status
+ - cephobjectstores/status
+ - cephobjectstoreusers/status
+ - cephobjectrealms/status
+ - cephobjectzonegroups/status
+ - cephobjectzones/status
+ - cephbuckettopics/status
+ - cephbucketnotifications/status
+ - cephrbdmirrors/status
+ - cephfilesystemmirrors/status
+ - cephfilesystemsubvolumegroups/status
+ - cephblockpoolradosnamespaces/status
+ verbs:
+ - update
+- apiGroups:
+ - ceph.rook.io
+ resources:
+ - cephclients/finalizers
+ - cephclusters/finalizers
+ - cephblockpools/finalizers
+ - cephfilesystems/finalizers
+ - cephnfses/finalizers
+ - cephobjectstores/finalizers
+ - cephobjectstoreusers/finalizers
+ - cephobjectrealms/finalizers
+ - cephobjectzonegroups/finalizers
+ - cephobjectzones/finalizers
+ - cephbuckettopics/finalizers
+ - cephbucketnotifications/finalizers
+ - cephrbdmirrors/finalizers
+ - cephfilesystemmirrors/finalizers
+ - cephfilesystemsubvolumegroups/finalizers
+ - cephblockpoolradosnamespaces/finalizers
+ verbs:
+ - update
+- apiGroups:
+ - policy
+ - apps
+ - extensions
+ resources:
+ - poddisruptionbudgets
+ - deployments
+ - replicasets
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - delete
+ - deletecollection
+- apiGroups:
+ - apps
+ resources:
+ - deployments/finalizers
+ verbs:
+ - update
+- apiGroups:
+ - healthchecking.openshift.io
+ resources:
+ - machinedisruptionbudgets
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - delete
+- apiGroups:
+ - machine.openshift.io
+ resources:
+ - machines
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - delete
+- apiGroups:
+ - storage.k8s.io
+ resources:
+ - csidrivers
+ verbs:
+ - create
+ - delete
+ - get
+ - update
+- apiGroups:
+ - k8s.cni.cncf.io
+ resources:
+ - network-attachment-definitions
+ verbs:
+ - get
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/rook-ceph-mgr-cluster
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/rook-ceph-mgr-cluster
@@ -0,0 +1,42 @@
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-mgr-cluster
+ labels:
+ operator: rook
+ storage-backend: ceph
+ app.kubernetes.io/part-of: rook-ceph-operator
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/created-by: helm
+rules:
+- apiGroups:
+ - ''
+ resources:
+ - configmaps
+ - nodes
+ - nodes/proxy
+ - persistentvolumes
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - ''
+ resources:
+ - events
+ verbs:
+ - create
+ - patch
+ - list
+ - get
+ - watch
+- apiGroups:
+ - storage.k8s.io
+ resources:
+ - storageclasses
+ verbs:
+ - get
+ - list
+ - watch
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/rook-ceph-mgr-system
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/rook-ceph-mgr-system
@@ -0,0 +1,15 @@
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-mgr-system
+rules:
+- apiGroups:
+ - ''
+ resources:
+ - configmaps
+ verbs:
+ - get
+ - list
+ - watch
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/rook-ceph-object-bucket
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/rook-ceph-object-bucket
@@ -0,0 +1,63 @@
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-object-bucket
+ labels:
+ operator: rook
+ storage-backend: ceph
+ app.kubernetes.io/part-of: rook-ceph-operator
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/created-by: helm
+rules:
+- apiGroups:
+ - ''
+ resources:
+ - secrets
+ - configmaps
+ verbs:
+ - get
+ - create
+ - update
+ - delete
+- apiGroups:
+ - storage.k8s.io
+ resources:
+ - storageclasses
+ verbs:
+ - get
+- apiGroups:
+ - objectbucket.io
+ resources:
+ - objectbucketclaims
+ verbs:
+ - list
+ - watch
+ - get
+ - update
+- apiGroups:
+ - objectbucket.io
+ resources:
+ - objectbuckets
+ verbs:
+ - list
+ - watch
+ - get
+ - create
+ - update
+ - delete
+- apiGroups:
+ - objectbucket.io
+ resources:
+ - objectbucketclaims/status
+ - objectbuckets/status
+ verbs:
+ - update
+- apiGroups:
+ - objectbucket.io
+ resources:
+ - objectbucketclaims/finalizers
+ - objectbuckets/finalizers
+ verbs:
+ - update
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/rook-ceph-osd
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/rook-ceph-osd
@@ -0,0 +1,14 @@
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-osd
+rules:
+- apiGroups:
+ - ''
+ resources:
+ - nodes
+ verbs:
+ - get
+ - list
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/cephfs-csi-nodeplugin
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/cephfs-csi-nodeplugin
@@ -0,0 +1,13 @@
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: cephfs-csi-nodeplugin
+rules:
+- apiGroups:
+ - ''
+ resources:
+ - nodes
+ verbs:
+ - get
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/cephfs-external-provisioner-runner
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/cephfs-external-provisioner-runner
@@ -0,0 +1,115 @@
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: cephfs-external-provisioner-runner
+rules:
+- apiGroups:
+ - ''
+ resources:
+ - secrets
+ verbs:
+ - get
+ - list
+- apiGroups:
+ - ''
+ resources:
+ - nodes
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - ''
+ resources:
+ - persistentvolumes
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - delete
+ - patch
+- apiGroups:
+ - ''
+ resources:
+ - persistentvolumeclaims
+ verbs:
+ - get
+ - list
+ - watch
+ - patch
+ - update
+- apiGroups:
+ - storage.k8s.io
+ resources:
+ - storageclasses
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - ''
+ resources:
+ - events
+ verbs:
+ - list
+ - watch
+ - create
+ - update
+ - patch
+- apiGroups:
+ - storage.k8s.io
+ resources:
+ - volumeattachments
+ verbs:
+ - get
+ - list
+ - watch
+ - patch
+- apiGroups:
+ - storage.k8s.io
+ resources:
+ - volumeattachments/status
+ verbs:
+ - patch
+- apiGroups:
+ - ''
+ resources:
+ - persistentvolumeclaims/status
+ verbs:
+ - patch
+- apiGroups:
+ - snapshot.storage.k8s.io
+ resources:
+ - volumesnapshots
+ verbs:
+ - get
+ - list
+- apiGroups:
+ - snapshot.storage.k8s.io
+ resources:
+ - volumesnapshotclasses
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - snapshot.storage.k8s.io
+ resources:
+ - volumesnapshotcontents
+ verbs:
+ - get
+ - list
+ - watch
+ - patch
+ - update
+- apiGroups:
+ - snapshot.storage.k8s.io
+ resources:
+ - volumesnapshotcontents/status
+ verbs:
+ - update
+ - patch
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/rbd-csi-nodeplugin
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/rbd-csi-nodeplugin
@@ -0,0 +1,58 @@
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rbd-csi-nodeplugin
+ labels:
+ operator: rook
+ storage-backend: ceph
+ app.kubernetes.io/part-of: rook-ceph-operator
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/created-by: helm
+rules:
+- apiGroups:
+ - ''
+ resources:
+ - secrets
+ verbs:
+ - get
+ - list
+- apiGroups:
+ - ''
+ resources:
+ - persistentvolumes
+ verbs:
+ - get
+ - list
+- apiGroups:
+ - storage.k8s.io
+ resources:
+ - volumeattachments
+ verbs:
+ - get
+ - list
+- apiGroups:
+ - ''
+ resources:
+ - configmaps
+ verbs:
+ - get
+- apiGroups:
+ - ''
+ resources:
+ - serviceaccounts
+ verbs:
+ - get
+- apiGroups:
+ - ''
+ resources:
+ - serviceaccounts/token
+ verbs:
+ - create
+- apiGroups:
+ - ''
+ resources:
+ - nodes
+ verbs:
+ - get
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/rbd-external-provisioner-runner
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/rbd-external-provisioner-runner
@@ -0,0 +1,158 @@
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rbd-external-provisioner-runner
+rules:
+- apiGroups:
+ - ''
+ resources:
+ - secrets
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - ''
+ resources:
+ - persistentvolumes
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - delete
+ - patch
+- apiGroups:
+ - ''
+ resources:
+ - persistentvolumeclaims
+ verbs:
+ - get
+ - list
+ - watch
+ - update
+- apiGroups:
+ - storage.k8s.io
+ resources:
+ - storageclasses
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - ''
+ resources:
+ - events
+ verbs:
+ - list
+ - watch
+ - create
+ - update
+ - patch
+- apiGroups:
+ - storage.k8s.io
+ resources:
+ - volumeattachments
+ verbs:
+ - get
+ - list
+ - watch
+ - patch
+- apiGroups:
+ - storage.k8s.io
+ resources:
+ - volumeattachments/status
+ verbs:
+ - patch
+- apiGroups:
+ - ''
+ resources:
+ - nodes
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - storage.k8s.io
+ resources:
+ - csinodes
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - ''
+ resources:
+ - persistentvolumeclaims/status
+ verbs:
+ - patch
+- apiGroups:
+ - snapshot.storage.k8s.io
+ resources:
+ - volumesnapshots
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - snapshot.storage.k8s.io
+ resources:
+ - volumesnapshotclasses
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - snapshot.storage.k8s.io
+ resources:
+ - volumesnapshotcontents
+ verbs:
+ - get
+ - list
+ - watch
+ - patch
+ - update
+- apiGroups:
+ - snapshot.storage.k8s.io
+ resources:
+ - volumesnapshotcontents/status
+ verbs:
+ - update
+ - patch
+- apiGroups:
+ - ''
+ resources:
+ - configmaps
+ verbs:
+ - get
+- apiGroups:
+ - ''
+ resources:
+ - serviceaccounts
+ verbs:
+ - get
+- apiGroups:
+ - ''
+ resources:
+ - serviceaccounts/token
+ verbs:
+ - create
+- apiGroups:
+ - ''
+ resources:
+ - nodes
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - storage.k8s.io
+ resources:
+ - csinodes
+ verbs:
+ - get
+ - list
+ - watch
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/objectstorage-provisioner-role
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRole: rook-ceph/objectstorage-provisioner-role
@@ -0,0 +1,50 @@
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: objectstorage-provisioner-role
+ labels:
+ app.kubernetes.io/part-of: container-object-storage-interface
+ app.kubernetes.io/component: driver-ceph
+ app.kubernetes.io/name: cosi-driver-ceph
+rules:
+- apiGroups:
+ - objectstorage.k8s.io
+ resources:
+ - buckets
+ - bucketaccesses
+ - bucketclaims
+ - bucketaccessclasses
+ - buckets/status
+ - bucketaccesses/status
+ - bucketclaims/status
+ - bucketaccessclasses/status
+ verbs:
+ - get
+ - list
+ - watch
+ - update
+ - create
+ - delete
+- apiGroups:
+ - coordination.k8s.io
+ resources:
+ - leases
+ verbs:
+ - get
+ - watch
+ - list
+ - delete
+ - update
+ - create
+- apiGroups:
+ - ''
+ resources:
+ - secrets
+ - events
+ verbs:
+ - get
+ - delete
+ - update
+ - create
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRoleBinding: rook-ceph/rook-ceph-mgr-cluster
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRoleBinding: rook-ceph/rook-ceph-mgr-cluster
@@ -0,0 +1,14 @@
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-mgr-cluster
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: rook-ceph-mgr-cluster
+subjects:
+- kind: ServiceAccount
+ name: rook-ceph-mgr
+ namespace: rook-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRoleBinding: rook-ceph/rook-ceph-osd
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRoleBinding: rook-ceph/rook-ceph-osd
@@ -0,0 +1,14 @@
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-osd
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: rook-ceph-osd
+subjects:
+- kind: ServiceAccount
+ name: rook-ceph-osd
+ namespace: rook-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRoleBinding: rook-ceph/rook-ceph-system
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRoleBinding: rook-ceph/rook-ceph-system
@@ -0,0 +1,20 @@
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-system
+ labels:
+ operator: rook
+ storage-backend: ceph
+ app.kubernetes.io/part-of: rook-ceph-operator
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/created-by: helm
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: rook-ceph-system
+subjects:
+- kind: ServiceAccount
+ name: rook-ceph-system
+ namespace: rook-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRoleBinding: rook-ceph/rook-ceph-global
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRoleBinding: rook-ceph/rook-ceph-global
@@ -0,0 +1,20 @@
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-global
+ labels:
+ operator: rook
+ storage-backend: ceph
+ app.kubernetes.io/part-of: rook-ceph-operator
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/created-by: helm
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: rook-ceph-global
+subjects:
+- kind: ServiceAccount
+ name: rook-ceph-system
+ namespace: rook-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRoleBinding: rook-ceph/rook-ceph-object-bucket
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRoleBinding: rook-ceph/rook-ceph-object-bucket
@@ -0,0 +1,14 @@
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-object-bucket
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: rook-ceph-object-bucket
+subjects:
+- kind: ServiceAccount
+ name: rook-ceph-system
+ namespace: rook-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRoleBinding: rook-ceph/rbd-csi-nodeplugin
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRoleBinding: rook-ceph/rbd-csi-nodeplugin
@@ -0,0 +1,14 @@
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rbd-csi-nodeplugin
+subjects:
+- kind: ServiceAccount
+ name: rook-csi-rbd-plugin-sa
+ namespace: rook-ceph
+roleRef:
+ kind: ClusterRole
+ name: rbd-csi-nodeplugin
+ apiGroup: rbac.authorization.k8s.io
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRoleBinding: rook-ceph/cephfs-csi-provisioner-role
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRoleBinding: rook-ceph/cephfs-csi-provisioner-role
@@ -0,0 +1,14 @@
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: cephfs-csi-provisioner-role
+subjects:
+- kind: ServiceAccount
+ name: rook-csi-cephfs-provisioner-sa
+ namespace: rook-ceph
+roleRef:
+ kind: ClusterRole
+ name: cephfs-external-provisioner-runner
+ apiGroup: rbac.authorization.k8s.io
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRoleBinding: rook-ceph/cephfs-csi-nodeplugin-role
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRoleBinding: rook-ceph/cephfs-csi-nodeplugin-role
@@ -0,0 +1,14 @@
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: cephfs-csi-nodeplugin-role
+subjects:
+- kind: ServiceAccount
+ name: rook-csi-cephfs-plugin-sa
+ namespace: rook-ceph
+roleRef:
+ kind: ClusterRole
+ name: cephfs-csi-nodeplugin
+ apiGroup: rbac.authorization.k8s.io
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRoleBinding: rook-ceph/rbd-csi-provisioner-role
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRoleBinding: rook-ceph/rbd-csi-provisioner-role
@@ -0,0 +1,14 @@
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rbd-csi-provisioner-role
+subjects:
+- kind: ServiceAccount
+ name: rook-csi-rbd-provisioner-sa
+ namespace: rook-ceph
+roleRef:
+ kind: ClusterRole
+ name: rbd-external-provisioner-runner
+ apiGroup: rbac.authorization.k8s.io
+
--- HelmRelease: rook-ceph/rook-ceph-operator ClusterRoleBinding: rook-ceph/objectstorage-provisioner-role-binding
+++ HelmRelease: rook-ceph/rook-ceph-operator ClusterRoleBinding: rook-ceph/objectstorage-provisioner-role-binding
@@ -0,0 +1,18 @@
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: objectstorage-provisioner-role-binding
+ labels:
+ app.kubernetes.io/part-of: container-object-storage-interface
+ app.kubernetes.io/component: driver-ceph
+ app.kubernetes.io/name: cosi-driver-ceph
+subjects:
+- kind: ServiceAccount
+ name: objectstorage-provisioner
+ namespace: rook-ceph
+roleRef:
+ kind: ClusterRole
+ name: objectstorage-provisioner-role
+ apiGroup: rbac.authorization.k8s.io
+
--- HelmRelease: rook-ceph/rook-ceph-operator Role: rook-ceph/rook-ceph-osd
+++ HelmRelease: rook-ceph/rook-ceph-operator Role: rook-ceph/rook-ceph-osd
@@ -0,0 +1,37 @@
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-osd
+ namespace: rook-ceph
+rules:
+- apiGroups:
+ - ''
+ resources:
+ - secrets
+ verbs:
+ - get
+ - update
+- apiGroups:
+ - ''
+ resources:
+ - configmaps
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - delete
+- apiGroups:
+ - ceph.rook.io
+ resources:
+ - cephclusters
+ - cephclusters/finalizers
+ verbs:
+ - get
+ - list
+ - create
+ - update
+ - delete
+
--- HelmRelease: rook-ceph/rook-ceph-operator Role: rook-ceph/rook-ceph-rgw
+++ HelmRelease: rook-ceph/rook-ceph-operator Role: rook-ceph/rook-ceph-rgw
@@ -0,0 +1,14 @@
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-rgw
+ namespace: rook-ceph
+rules:
+- apiGroups:
+ - ''
+ resources:
+ - configmaps
+ verbs:
+ - get
+
--- HelmRelease: rook-ceph/rook-ceph-operator Role: rook-ceph/rook-ceph-mgr
+++ HelmRelease: rook-ceph/rook-ceph-operator Role: rook-ceph/rook-ceph-mgr
@@ -0,0 +1,74 @@
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-mgr
+ namespace: rook-ceph
+rules:
+- apiGroups:
+ - ''
+ resources:
+ - pods
+ - services
+ - pods/log
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - delete
+- apiGroups:
+ - batch
+ resources:
+ - jobs
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - delete
+- apiGroups:
+ - ceph.rook.io
+ resources:
+ - cephclients
+ - cephclusters
+ - cephblockpools
+ - cephfilesystems
+ - cephnfses
+ - cephobjectstores
+ - cephobjectstoreusers
+ - cephobjectrealms
+ - cephobjectzonegroups
+ - cephobjectzones
+ - cephbuckettopics
+ - cephbucketnotifications
+ - cephrbdmirrors
+ - cephfilesystemmirrors
+ - cephfilesystemsubvolumegroups
+ - cephblockpoolradosnamespaces
+ - cephcosidrivers
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - delete
+ - patch
+- apiGroups:
+ - apps
+ resources:
+ - deployments/scale
+ - deployments
+ verbs:
+ - patch
+ - delete
+- apiGroups:
+ - ''
+ resources:
+ - persistentvolumeclaims
+ verbs:
+ - delete
+
--- HelmRelease: rook-ceph/rook-ceph-operator Role: rook-ceph/rook-ceph-cmd-reporter
+++ HelmRelease: rook-ceph/rook-ceph-operator Role: rook-ceph/rook-ceph-cmd-reporter
@@ -0,0 +1,20 @@
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-cmd-reporter
+ namespace: rook-ceph
+rules:
+- apiGroups:
+ - ''
+ resources:
+ - pods
+ - configmaps
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - delete
+
--- HelmRelease: rook-ceph/rook-ceph-operator Role: rook-ceph/rook-ceph-purge-osd
+++ HelmRelease: rook-ceph/rook-ceph-operator Role: rook-ceph/rook-ceph-purge-osd
@@ -0,0 +1,38 @@
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-purge-osd
+ namespace: rook-ceph
+rules:
+- apiGroups:
+ - ''
+ resources:
+ - configmaps
+ verbs:
+ - get
+- apiGroups:
+ - apps
+ resources:
+ - deployments
+ verbs:
+ - get
+ - delete
+- apiGroups:
+ - batch
+ resources:
+ - jobs
+ verbs:
+ - get
+ - list
+ - delete
+- apiGroups:
+ - ''
+ resources:
+ - persistentvolumeclaims
+ verbs:
+ - get
+ - update
+ - delete
+ - list
+
--- HelmRelease: rook-ceph/rook-ceph-operator Role: rook-ceph/rook-ceph-monitoring
+++ HelmRelease: rook-ceph/rook-ceph-operator Role: rook-ceph/rook-ceph-monitoring
@@ -0,0 +1,19 @@
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-monitoring
+ namespace: rook-ceph
+rules:
+- apiGroups:
+ - monitoring.coreos.com
+ resources:
+ - servicemonitors
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - delete
+
--- HelmRelease: rook-ceph/rook-ceph-operator Role: rook-ceph/rook-ceph-monitoring-mgr
+++ HelmRelease: rook-ceph/rook-ceph-operator Role: rook-ceph/rook-ceph-monitoring-mgr
@@ -0,0 +1,17 @@
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-monitoring-mgr
+ namespace: rook-ceph
+rules:
+- apiGroups:
+ - monitoring.coreos.com
+ resources:
+ - servicemonitors
+ verbs:
+ - get
+ - list
+ - create
+ - update
+
--- HelmRelease: rook-ceph/rook-ceph-operator Role: rook-ceph/rook-ceph-system
+++ HelmRelease: rook-ceph/rook-ceph-operator Role: rook-ceph/rook-ceph-system
@@ -0,0 +1,65 @@
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ name: rook-ceph-system
+ namespace: rook-ceph
+ labels:
+ operator: rook
+ storage-backend: ceph
+ app.kubernetes.io/part-of: rook-ceph-operator
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/created-by: helm
+rules:
+- apiGroups:
+ - ''
+ resources:
+ - pods
+ - configmaps
+ - services
+ verbs:
+ - get
+ - list
+ - watch
+ - patch
+ - create
+ - update
+ - delete
+- apiGroups:
+ - apps
+ - extensions
+ resources:
+ - daemonsets
+ - statefulsets
+ - deployments
+ verbs:
+ - get
+ - list
+ - watch
+ - create
+ - update
+ - delete
+ - deletecollection
+- apiGroups:
+ - batch
+ resources:
+ - cronjobs
+ verbs:
+ - delete
+- apiGroups:
+ - cert-manager.io
+ resources:
+ - certificates
+ - issuers
+ verbs:
+ - get
+ - create
+ - delete
+- apiGroups:
+ - multicluster.x-k8s.io
+ resources:
+ - serviceexports
+ verbs:
+ - get
+ - create
+
--- HelmRelease: rook-ceph/rook-ceph-operator Role: rook-ceph/cephfs-external-provisioner-cfg
+++ HelmRelease: rook-ceph/rook-ceph-operator Role: rook-ceph/cephfs-external-provisioner-cfg
@@ -0,0 +1,19 @@
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: cephfs-external-provisioner-cfg
+ namespace: rook-ceph
+rules:
+- apiGroups:
+ - coordination.k8s.io
+ resources:
+ - leases
+ verbs:
+ - get
+ - watch
+ - list
+ - delete
+ - update
+ - create
+
--- HelmRelease: rook-ceph/rook-ceph-operator Role: rook-ceph/rbd-external-provisioner-cfg
+++ HelmRelease: rook-ceph/rook-ceph-operator Role: rook-ceph/rbd-external-provisioner-cfg
@@ -0,0 +1,19 @@
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rbd-external-provisioner-cfg
+ namespace: rook-ceph
+rules:
+- apiGroups:
+ - coordination.k8s.io
+ resources:
+ - leases
+ verbs:
+ - get
+ - watch
+ - list
+ - delete
+ - update
+ - create
+
--- HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rook-ceph-cluster-mgmt
+++ HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rook-ceph-cluster-mgmt
@@ -0,0 +1,15 @@
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-cluster-mgmt
+ namespace: rook-ceph
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: rook-ceph-cluster-mgmt
+subjects:
+- kind: ServiceAccount
+ name: rook-ceph-system
+ namespace: rook-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rook-ceph-osd
+++ HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rook-ceph-osd
@@ -0,0 +1,15 @@
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-osd
+ namespace: rook-ceph
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: rook-ceph-osd
+subjects:
+- kind: ServiceAccount
+ name: rook-ceph-osd
+ namespace: rook-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rook-ceph-rgw
+++ HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rook-ceph-rgw
@@ -0,0 +1,15 @@
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-rgw
+ namespace: rook-ceph
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: rook-ceph-rgw
+subjects:
+- kind: ServiceAccount
+ name: rook-ceph-rgw
+ namespace: rook-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rook-ceph-mgr
+++ HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rook-ceph-mgr
@@ -0,0 +1,15 @@
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-mgr
+ namespace: rook-ceph
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: rook-ceph-mgr
+subjects:
+- kind: ServiceAccount
+ name: rook-ceph-mgr
+ namespace: rook-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rook-ceph-mgr-system
+++ HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rook-ceph-mgr-system
@@ -0,0 +1,15 @@
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-mgr-system
+ namespace: rook-ceph
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: rook-ceph-mgr-system
+subjects:
+- kind: ServiceAccount
+ name: rook-ceph-mgr
+ namespace: rook-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rook-ceph-cmd-reporter
+++ HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rook-ceph-cmd-reporter
@@ -0,0 +1,15 @@
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-cmd-reporter
+ namespace: rook-ceph
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: rook-ceph-cmd-reporter
+subjects:
+- kind: ServiceAccount
+ name: rook-ceph-cmd-reporter
+ namespace: rook-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rook-ceph-purge-osd
+++ HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rook-ceph-purge-osd
@@ -0,0 +1,15 @@
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-purge-osd
+ namespace: rook-ceph
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: rook-ceph-purge-osd
+subjects:
+- kind: ServiceAccount
+ name: rook-ceph-purge-osd
+ namespace: rook-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rook-ceph-monitoring
+++ HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rook-ceph-monitoring
@@ -0,0 +1,15 @@
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-monitoring
+ namespace: rook-ceph
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: rook-ceph-monitoring
+subjects:
+- kind: ServiceAccount
+ name: rook-ceph-system
+ namespace: rook-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rook-ceph-monitoring-mgr
+++ HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rook-ceph-monitoring-mgr
@@ -0,0 +1,15 @@
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-monitoring-mgr
+ namespace: rook-ceph
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: rook-ceph-monitoring-mgr
+subjects:
+- kind: ServiceAccount
+ name: rook-ceph-mgr
+ namespace: rook-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rook-ceph-system
+++ HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rook-ceph-system
@@ -0,0 +1,21 @@
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rook-ceph-system
+ namespace: rook-ceph
+ labels:
+ operator: rook
+ storage-backend: ceph
+ app.kubernetes.io/part-of: rook-ceph-operator
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/created-by: helm
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: rook-ceph-system
+subjects:
+- kind: ServiceAccount
+ name: rook-ceph-system
+ namespace: rook-ceph
+
--- HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/cephfs-csi-provisioner-role-cfg
+++ HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/cephfs-csi-provisioner-role-cfg
@@ -0,0 +1,15 @@
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: cephfs-csi-provisioner-role-cfg
+ namespace: rook-ceph
+subjects:
+- kind: ServiceAccount
+ name: rook-csi-cephfs-provisioner-sa
+ namespace: rook-ceph
+roleRef:
+ kind: Role
+ name: cephfs-external-provisioner-cfg
+ apiGroup: rbac.authorization.k8s.io
+
--- HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rbd-csi-provisioner-role-cfg
+++ HelmRelease: rook-ceph/rook-ceph-operator RoleBinding: rook-ceph/rbd-csi-provisioner-role-cfg
@@ -0,0 +1,15 @@
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rbd-csi-provisioner-role-cfg
+ namespace: rook-ceph
+subjects:
+- kind: ServiceAccount
+ name: rook-csi-rbd-provisioner-sa
+ namespace: rook-ceph
+roleRef:
+ kind: Role
+ name: rbd-external-provisioner-cfg
+ apiGroup: rbac.authorization.k8s.io
+
--- HelmRelease: rook-ceph/rook-ceph-operator Deployment: rook-ceph/rook-ceph-operator
+++ HelmRelease: rook-ceph/rook-ceph-operator Deployment: rook-ceph/rook-ceph-operator
@@ -0,0 +1,83 @@
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: rook-ceph-operator
+ namespace: rook-ceph
+ labels:
+ operator: rook
+ storage-backend: ceph
+ app.kubernetes.io/part-of: rook-ceph-operator
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/created-by: helm
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: rook-ceph-operator
+ strategy:
+ type: Recreate
+ template:
+ metadata:
+ labels:
+ app: rook-ceph-operator
+ spec:
+ tolerations:
+ - effect: NoExecute
+ key: node.kubernetes.io/unreachable
+ operator: Exists
+ tolerationSeconds: 5
+ containers:
+ - name: rook-ceph-operator
+ image: rook/ceph:v1.13.3
+ imagePullPolicy: IfNotPresent
+ args:
+ - ceph
+ - operator
+ securityContext:
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 2016
+ runAsNonRoot: true
+ runAsUser: 2016
+ volumeMounts:
+ - mountPath: /var/lib/rook
+ name: rook-config
+ - mountPath: /etc/ceph
+ name: default-config-dir
+ env:
+ - name: ROOK_CURRENT_NAMESPACE_ONLY
+ value: 'false'
+ - name: ROOK_HOSTPATH_REQUIRES_PRIVILEGED
+ value: 'false'
+ - name: ROOK_DISABLE_DEVICE_HOTPLUG
+ value: 'false'
+ - name: ROOK_DISCOVER_DEVICES_INTERVAL
+ value: 60m
+ - name: NODE_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: spec.nodeName
+ - name: POD_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
+ - name: POD_NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ resources:
+ limits:
+ cpu: 1500m
+ memory: 512Mi
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ serviceAccountName: rook-ceph-system
+ volumes:
+ - name: rook-config
+ emptyDir: {}
+ - name: default-config-dir
+ emptyDir: {}
+
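Reviewer note: the operator's resources and env above are rendered from chart defaults. If they need tuning, set them through the HelmRelease values rather than patching the rendered Deployment — a minimal sketch (key names follow the upstream rook-ceph chart; verify against the chart version in use):

```yaml
# Hypothetical values fragment for the rook-ceph-operator HelmRelease.
spec:
  values:
    resources:
      limits:
        memory: 1Gi        # raise if the operator OOMs during reconciles
      requests:
        cpu: 200m
        memory: 256Mi
```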
--- HelmRelease: rook-ceph/rook-ceph-operator ServiceMonitor: rook-ceph/csi-metrics
+++ HelmRelease: rook-ceph/rook-ceph-operator ServiceMonitor: rook-ceph/csi-metrics
@@ -0,0 +1,22 @@
+---
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ name: csi-metrics
+ namespace: rook-ceph
+ labels:
+ app.kubernetes.io/part-of: rook-ceph-operator
+ app.kubernetes.io/managed-by: Helm
+ app.kubernetes.io/created-by: helm
+spec:
+ namespaceSelector:
+ matchNames:
+ - rook-ceph
+ selector:
+ matchLabels:
+ app: csi-metrics
+ endpoints:
+ - port: csi-http-metrics
+ path: /metrics
+ interval: 5s
+
--- HelmRelease: rook-ceph/rook-ceph-cluster ConfigMap: rook-ceph/rook-config-override
+++ HelmRelease: rook-ceph/rook-ceph-cluster ConfigMap: rook-ceph/rook-config-override
@@ -0,0 +1,14 @@
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: rook-config-override
+ namespace: rook-ceph
+data:
+ config: |2
+
+ [global]
+ bdev_enable_discard = true
+ bdev_async_discard = true
+ osd_class_update_on_start = false
+
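Reviewer note: the `|2` block scalar above preserves the leading blank line from the chart template. Extra ceph options can be appended under `[global]` in the same ConfigMap — a sketch, with `osd_pool_default_size` added purely for illustration:

```yaml
# Sketch: extending rook-config-override with one more [global] option.
kind: ConfigMap
apiVersion: v1
metadata:
  name: rook-config-override
  namespace: rook-ceph
data:
  config: |
    [global]
    bdev_enable_discard = true
    bdev_async_discard = true
    osd_class_update_on_start = false
    osd_pool_default_size = 3   # illustrative addition, not part of this PR
```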
--- HelmRelease: rook-ceph/rook-ceph-cluster StorageClass: rook-ceph/ceph-block
+++ HelmRelease: rook-ceph/rook-ceph-cluster StorageClass: rook-ceph/ceph-block
@@ -0,0 +1,24 @@
+---
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: ceph-block
+ annotations:
+ storageclass.kubernetes.io/is-default-class: 'true'
+provisioner: rook-ceph.rbd.csi.ceph.com
+parameters:
+ pool: ceph-blockpool
+ clusterID: rook-ceph
+ csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
+ csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
+ csi.storage.k8s.io/fstype: ext4
+ csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
+ csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
+ csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
+ csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
+ imageFeatures: layering
+ imageFormat: '2'
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+volumeBindingMode: Immediate
+
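Since `ceph-block` is annotated as the default StorageClass, any PVC without an explicit class will land on RBD. A minimal consuming PVC (name and namespace are hypothetical):

```yaml
# Sketch: a PVC bound to the ceph-block StorageClass rendered above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-rbd-pvc   # hypothetical name
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce       # RBD images attach to a single node at a time
  storageClassName: ceph-block
  resources:
    requests:
      storage: 10Gi
```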
--- HelmRelease: rook-ceph/rook-ceph-cluster StorageClass: rook-ceph/ceph-filesystem
+++ HelmRelease: rook-ceph/rook-ceph-cluster StorageClass: rook-ceph/ceph-filesystem
@@ -0,0 +1,23 @@
+---
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: ceph-filesystem
+ annotations:
+ storageclass.kubernetes.io/is-default-class: 'false'
+provisioner: rook-ceph.cephfs.csi.ceph.com
+parameters:
+ fsName: ceph-filesystem
+ pool: ceph-filesystem-data0
+ clusterID: rook-ceph
+ csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
+ csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
+ csi.storage.k8s.io/fstype: ext4
+ csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
+ csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
+ csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
+ csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+volumeBindingMode: Immediate
+
--- HelmRelease: rook-ceph/rook-ceph-cluster StorageClass: rook-ceph/ceph-bucket
+++ HelmRelease: rook-ceph/rook-ceph-cluster StorageClass: rook-ceph/ceph-bucket
@@ -0,0 +1,13 @@
+---
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: ceph-bucket
+provisioner: rook-ceph.ceph.rook.io/bucket
+reclaimPolicy: Delete
+volumeBindingMode: Immediate
+parameters:
+ objectStoreName: ceph-objectstore
+ objectStoreNamespace: rook-ceph
+ region: us-east-1
+
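The `ceph-bucket` class is consumed through Rook's bucket-provisioning API rather than a PVC. A sketch of an ObjectBucketClaim against it (claim and bucket names are hypothetical):

```yaml
# Sketch: requesting an S3 bucket from ceph-objectstore via the ceph-bucket class.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: example-bucket    # hypothetical name
  namespace: default
spec:
  generateBucketName: example-bucket
  storageClassName: ceph-bucket
```

Rook then creates a Secret and ConfigMap named after the claim carrying the S3 credentials and endpoint.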
--- HelmRelease: rook-ceph/rook-ceph-cluster Deployment: rook-ceph/rook-ceph-tools
+++ HelmRelease: rook-ceph/rook-ceph-cluster Deployment: rook-ceph/rook-ceph-tools
@@ -0,0 +1,140 @@
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: rook-ceph-tools
+ namespace: rook-ceph
+ labels:
+ app: rook-ceph-tools
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: rook-ceph-tools
+ template:
+ metadata:
+ labels:
+ app: rook-ceph-tools
+ spec:
+ dnsPolicy: ClusterFirstWithHostNet
+ hostNetwork: true
+ containers:
+ - name: rook-ceph-tools
+ image: quay.io/ceph/ceph:v18.2.1
+ command:
+ - /bin/bash
+ - -c
+ - |
+ # Replicate the script from toolbox.sh inline so the ceph image
+ # can be run directly, instead of requiring the rook toolbox
+ CEPH_CONFIG="/etc/ceph/ceph.conf"
+ MON_CONFIG="/etc/rook/mon-endpoints"
+ KEYRING_FILE="/etc/ceph/keyring"
+
+ # create a ceph config file in its default location so ceph/rados tools can be used
+ # without specifying any arguments
+ write_endpoints() {
+ endpoints=$(cat ${MON_CONFIG})
+
+ # filter out the mon names
+ # external cluster can have numbers or hyphens in mon names, handling them in regex
+ # shellcheck disable=SC2001
+ mon_endpoints=$(echo "${endpoints}"| sed 's/[a-z0-9_-]\+=//g')
+
+ DATE=$(date)
+ echo "$DATE writing mon endpoints to ${CEPH_CONFIG}: ${endpoints}"
+ cat <<EOF > ${CEPH_CONFIG}
+ [global]
+ mon_host = ${mon_endpoints}
+
+ [client.admin]
+ keyring = ${KEYRING_FILE}
+ EOF
+ }
+
+ # watch the endpoints config file and update if the mon endpoints ever change
+ watch_endpoints() {
+ # get the timestamp for the target of the soft link
+ real_path=$(realpath ${MON_CONFIG})
+ initial_time=$(stat -c %Z "${real_path}")
+ while true; do
+ real_path=$(realpath ${MON_CONFIG})
+ latest_time=$(stat -c %Z "${real_path}")
+
+ if [[ "${latest_time}" != "${initial_time}" ]]; then
+ write_endpoints
+ initial_time=${latest_time}
+ fi
+
+ sleep 10
+ done
+ }
+
+ # read the secret from an env var (for backward compatibility), or from the secret file
+ ceph_secret=${ROOK_CEPH_SECRET}
+ if [[ "$ceph_secret" == "" ]]; then
+ ceph_secret=$(cat /var/lib/rook-ceph-mon/secret.keyring)
+ fi
+
+ # create the keyring file
+ cat <<EOF > ${KEYRING_FILE}
+ [${ROOK_CEPH_USERNAME}]
+ key = ${ceph_secret}
+ EOF
+
+ # write the initial config file
+ write_endpoints
+
+ # continuously update the mon endpoints if they fail over
+ watch_endpoints
+ imagePullPolicy: IfNotPresent
+ tty: true
+ securityContext:
+ capabilities:
+ drop:
+ - ALL
+ runAsGroup: 2016
+ runAsNonRoot: true
+ runAsUser: 2016
+ env:
+ - name: ROOK_CEPH_USERNAME
+ valueFrom:
+ secretKeyRef:
+ name: rook-ceph-mon
+ key: ceph-username
+ resources:
+ limits:
+ cpu: 500m
+ memory: 1Gi
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ volumeMounts:
+ - mountPath: /etc/ceph
+ name: ceph-config
+ - name: mon-endpoint-volume
+ mountPath: /etc/rook
+ - name: ceph-admin-secret
+ mountPath: /var/lib/rook-ceph-mon
+ volumes:
+ - name: ceph-admin-secret
+ secret:
+ secretName: rook-ceph-mon
+ optional: false
+ items:
+ - key: ceph-secret
+ path: secret.keyring
+ - name: mon-endpoint-volume
+ configMap:
+ name: rook-ceph-mon-endpoints
+ items:
+ - key: data
+ path: mon-endpoints
+ - name: ceph-config
+ emptyDir: {}
+ tolerations:
+ - key: node.kubernetes.io/unreachable
+ operator: Exists
+ effect: NoExecute
+ tolerationSeconds: 5
+
--- HelmRelease: rook-ceph/rook-ceph-cluster Ingress: rook-ceph/rook-ceph-dashboard
+++ HelmRelease: rook-ceph/rook-ceph-cluster Ingress: rook-ceph/rook-ceph-dashboard
@@ -0,0 +1,23 @@
+---
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: rook-ceph-dashboard
+ namespace: rook-ceph
+spec:
+ rules:
+ - host: rook.${SECRET_DOMAIN}
+ http:
+ paths:
+ - path: /
+ backend:
+ service:
+ name: rook-ceph-mgr-dashboard
+ port:
+ name: http-dashboard
+ pathType: Prefix
+ ingressClassName: internal
+ tls:
+ - hosts:
+ - rook.${SECRET_DOMAIN}
+
--- HelmRelease: rook-ceph/rook-ceph-cluster CephBlockPool: rook-ceph/ceph-blockpool
+++ HelmRelease: rook-ceph/rook-ceph-cluster CephBlockPool: rook-ceph/ceph-blockpool
@@ -0,0 +1,11 @@
+---
+apiVersion: ceph.rook.io/v1
+kind: CephBlockPool
+metadata:
+ name: ceph-blockpool
+ namespace: rook-ceph
+spec:
+ failureDomain: host
+ replicated:
+ size: 3
+
--- HelmRelease: rook-ceph/rook-ceph-cluster CephCluster: rook-ceph/rook-ceph
+++ HelmRelease: rook-ceph/rook-ceph-cluster CephCluster: rook-ceph/rook-ceph
@@ -0,0 +1,169 @@
+---
+apiVersion: ceph.rook.io/v1
+kind: CephCluster
+metadata:
+ name: rook-ceph
+ namespace: rook-ceph
+spec:
+ monitoring:
+ enabled: true
+ cephVersion:
+ allowUnsupported: false
+ image: quay.io/ceph/ceph:v18.2.1
+ cleanupPolicy:
+ allowUninstallWithVolumes: false
+ confirmation: ''
+ sanitizeDisks:
+ dataSource: zero
+ iteration: 1
+ method: quick
+ continueUpgradeAfterChecksEvenIfNotHealthy: false
+ crashCollector:
+ disable: false
+ dashboard:
+ enabled: true
+ ssl: false
+ urlPrefix: /
+ dataDirHostPath: /var/lib/rook
+ disruptionManagement:
+ managePodBudgets: true
+ osdMaintenanceTimeout: 30
+ pgHealthCheckTimeout: 0
+ healthCheck:
+ daemonHealth:
+ mon:
+ disabled: false
+ interval: 45s
+ osd:
+ disabled: false
+ interval: 60s
+ status:
+ disabled: false
+ interval: 60s
+ livenessProbe:
+ mgr:
+ disabled: false
+ mon:
+ disabled: false
+ osd:
+ disabled: false
+ logCollector:
+ enabled: true
+ maxLogSize: 500M
+ periodicity: daily
+ mgr:
+ allowMultiplePerNode: false
+ count: 2
+ modules:
+ - enabled: true
+ name: pg_autoscaler
+ mon:
+ allowMultiplePerNode: false
+ count: 3
+ network:
+ connections:
+ compression:
+ enabled: false
+ encryption:
+ enabled: false
+ requireMsgr2: true
+ provider: host
+ placement:
+ mgr:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: node-role.kubernetes.io/control-plane
+ operator: Exists
+ mon:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: node-role.kubernetes.io/control-plane
+ operator: Exists
+ priorityClassNames:
+ mgr: system-cluster-critical
+ mon: system-node-critical
+ osd: system-node-critical
+ removeOSDsIfOutAndSafeToRemove: false
+ resources:
+ cleanup:
+ limits:
+ cpu: 500m
+ memory: 1Gi
+ requests:
+ cpu: 500m
+ memory: 100Mi
+ crashcollector:
+ limits:
+ cpu: 500m
+ memory: 60Mi
+ requests:
+ cpu: 100m
+ memory: 60Mi
+ exporter:
+ limits:
+ cpu: 250m
+ memory: 128Mi
+ requests:
+ cpu: 50m
+ memory: 50Mi
+ logcollector:
+ limits:
+ cpu: 500m
+ memory: 1Gi
+ requests:
+ cpu: 100m
+ memory: 100Mi
+ mgr:
+ limits:
+ cpu: 2000m
+ memory: 2Gi
+ requests:
+ cpu: 500m
+ memory: 512Mi
+ mgr-sidecar:
+ limits:
+ cpu: 500m
+ memory: 100Mi
+ requests:
+ cpu: 100m
+ memory: 40Mi
+ mon:
+ limits:
+ cpu: 4000m
+ memory: 4Gi
+ requests:
+ cpu: 1000m
+ memory: 1Gi
+ osd:
+ limits:
+ cpu: 4000m
+ memory: 8Gi
+ requests:
+ cpu: 1000m
+ memory: 4Gi
+ prepareosd:
+ requests:
+ cpu: 500m
+ memory: 50Mi
+ skipUpgradeChecks: false
+ storage:
+ config:
+ osdsPerDevice: '1'
+ nodes:
+ - devices:
+ - name: /dev/sda
+ name: odroid-01
+ - devices:
+ - name: /dev/sda
+ name: odroid-02
+ - devices:
+ - name: /dev/sda
+ name: odroid-03
+ useAllDevices: false
+ useAllNodes: false
+ waitTimeoutForHealthyOSDInMinutes: 10
+
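The CephCluster spec above pins one OSD to `/dev/sda` on each of the three odroid nodes and runs a three-mon quorum. As a sanity check, the expected daemon counts and the mon majority follow directly from the manifest; a minimal sketch (the only assumption is that Rook successfully prepares every listed device):

```python
# Derive expected daemon counts from the CephCluster spec above.
# Node/device layout comes from spec.storage.nodes; osdsPerDevice from
# spec.storage.config; mon count from spec.mon.count.

nodes = {
    "odroid-01": ["/dev/sda"],
    "odroid-02": ["/dev/sda"],
    "odroid-03": ["/dev/sda"],
}
osds_per_device = 1  # spec.storage.config.osdsPerDevice: '1'

expected_osds = sum(len(devs) for devs in nodes.values()) * osds_per_device
mon_count = 3                          # spec.mon.count
quorum_majority = mon_count // 2 + 1   # mons that must stay up to keep quorum

print(expected_osds)     # 3 OSDs, one per /dev/sda
print(quorum_majority)   # 2 of 3 mons must remain up
```

With `allowMultiplePerNode: false` and mons constrained to control-plane nodes, this layout tolerates the loss of exactly one mon before quorum is at risk.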
--- HelmRelease: rook-ceph/rook-ceph-cluster CephFilesystem: rook-ceph/ceph-filesystem
+++ HelmRelease: rook-ceph/rook-ceph-cluster CephFilesystem: rook-ceph/ceph-filesystem
@@ -0,0 +1,27 @@
+---
+apiVersion: ceph.rook.io/v1
+kind: CephFilesystem
+metadata:
+ name: ceph-filesystem
+ namespace: rook-ceph
+spec:
+ dataPools:
+ - failureDomain: host
+ name: data0
+ replicated:
+ size: 3
+ metadataPool:
+ replicated:
+ size: 3
+ metadataServer:
+ activeCount: 1
+ activeStandby: true
+ priorityClassName: system-cluster-critical
+ resources:
+ limits:
+ cpu: 2000m
+ memory: 4Gi
+ requests:
+ cpu: 1000m
+ memory: 4Gi
+
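Both the data and metadata pools of this filesystem use `replicated.size: 3`, so every byte is stored three times across the host failure domain. A quick sketch of the capacity cost (the raw-capacity figure is an assumed example, not taken from the manifest):

```python
# Usable capacity under 3-way replication (spec.dataPools / metadataPool size: 3).
replica_size = 3
raw_tib = 3.0                  # assumed example: three 1 TiB OSDs
usable_tib = raw_tib / replica_size

print(usable_tib)              # 1.0 TiB usable out of 3.0 TiB raw
```

The trade-off is that the pool survives the loss of two hosts while keeping data available, at the cost of a 3x raw-to-usable ratio.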
--- HelmRelease: rook-ceph/rook-ceph-cluster CephFilesystemSubVolumeGroup: rook-ceph/ceph-filesystem-csi
+++ HelmRelease: rook-ceph/rook-ceph-cluster CephFilesystemSubVolumeGroup: rook-ceph/ceph-filesystem-csi
@@ -0,0 +1,12 @@
+---
+apiVersion: ceph.rook.io/v1
+kind: CephFilesystemSubVolumeGroup
+metadata:
+ name: ceph-filesystem-csi
+ namespace: rook-ceph
+spec:
+ name: csi
+ filesystemName: ceph-filesystem
+ pinning:
+ distributed: 1
+
--- HelmRelease: rook-ceph/rook-ceph-cluster CephObjectStore: rook-ceph/ceph-objectstore
+++ HelmRelease: rook-ceph/rook-ceph-cluster CephObjectStore: rook-ceph/ceph-objectstore
@@ -0,0 +1,29 @@
+---
+apiVersion: ceph.rook.io/v1
+kind: CephObjectStore
+metadata:
+ name: ceph-objectstore
+ namespace: rook-ceph
+spec:
+ dataPool:
+ erasureCoded:
+ codingChunks: 1
+ dataChunks: 2
+ failureDomain: host
+ gateway:
+ instances: 1
+ port: 80
+ priorityClassName: system-cluster-critical
+ resources:
+ limits:
+ cpu: 2000m
+ memory: 2Gi
+ requests:
+ cpu: 1000m
+ memory: 1Gi
+ metadataPool:
+ failureDomain: host
+ replicated:
+ size: 3
+ preservePoolsOnDelete: true
+
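Unlike the filesystem, the object store's data pool is erasure-coded with `dataChunks: 2` and `codingChunks: 1` (a 2+1 profile). This trades rebuild cost for better storage efficiency than 3-way replication; a minimal sketch of the arithmetic:

```python
# Storage efficiency of the 2+1 erasure-coded data pool above
# (dataChunks=2, codingChunks=1) versus the 3-way replicated metadataPool.
k, m = 2, 1                          # dataChunks, codingChunks
ec_usable_fraction = k / (k + m)     # fraction of raw capacity that holds data
replica_usable_fraction = 1 / 3      # metadataPool replicated size: 3
failures_tolerated = m               # host failures survivable (failureDomain: host)

print(round(ec_usable_fraction, 3))  # ~0.667 usable vs ~0.333 for replication
```

Note that a 2+1 profile needs at least three hosts, which exactly matches the three-node storage layout in the CephCluster spec, and tolerates only one host failure.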
--- HelmRelease: rook-ceph/rook-ceph-cluster PrometheusRule: rook-ceph/prometheus-ceph-rules
+++ HelmRelease: rook-ceph/rook-ceph-cluster PrometheusRule: rook-ceph/prometheus-ceph-rules
@@ -0,0 +1,766 @@
+---
+apiVersion: monitoring.coreos.com/v1
+kind: PrometheusRule
+metadata:
+ labels:
+ prometheus: rook-prometheus
+ role: alert-rules
+ name: prometheus-ceph-rules
+ namespace: rook-ceph
+spec:
+ groups:
+ - name: cluster health
+ rules:
+ - alert: CephHealthError
+ annotations:
+ description: The cluster state has been HEALTH_ERROR for more than 5 minutes.
+ Please check 'ceph health detail' for more information.
+ summary: Ceph is in the ERROR state
+ expr: ceph_health_status == 2
+ for: 5m
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.2.1
+ severity: critical
+ type: ceph_default
+ - alert: CephHealthWarning
+ annotations:
+ description: The cluster state has been HEALTH_WARN for more than 15 minutes.
+ Please check 'ceph health detail' for more information.
+ summary: Ceph is in the WARNING state
+ expr: ceph_health_status == 1
+ for: 15m
+ labels:
+ severity: warning
+ type: ceph_default
+ - name: mon
+ rules:
+ - alert: CephMonDownQuorumAtRisk
+ annotations:
+ description: '{{ $min := query "floor(count(ceph_mon_metadata) / 2) + 1" |
+ first | value }}Quorum requires a majority of monitors (x {{ $min }}) to
+ be active. Without quorum the cluster will become inoperable, affecting
+ all services and connected clients. The following monitors are down: {{-
+ range query "(ceph_mon_quorum_status == 0) + on(ceph_daemon) group_left(hostname)
+ (ceph_mon_metadata * 0)" }} - {{ .Labels.ceph_daemon }} on {{ .Labels.hostname
+ }} {{- end }}'
+ documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-down
+ summary: Monitor quorum is at risk
+ expr: |
+ (
+ (ceph_health_detail{name="MON_DOWN"} == 1) * on() (
+ count(ceph_mon_quorum_status == 1) == bool (floor(count(ceph_mon_metadata) / 2) + 1)
+ )
+ ) == 1
+ for: 30s
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.3.1
+ severity: critical
+ type: ceph_default
+ - alert: CephMonDown
+ annotations:
+ description: |
+ {{ $down := query "count(ceph_mon_quorum_status == 0)" | first | value }}{{ $s := "" }}{{ if gt $down 1.0 }}{{ $s = "s" }}{{ end }}You have {{ $down }} monitor{{ $s }} down. Quorum is still intact, but the loss of an additional monitor will make your cluster inoperable. The following monitors are down: {{- range query "(ceph_mon_quorum_status == 0) + on(ceph_daemon) group_left(hostname) (ceph_mon_metadata * 0)" }} - {{ .Labels.ceph_daemon }} on {{ .Labels.hostname }} {{- end }}
+ documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-down
+ summary: One or more monitors down
+ expr: |
+ count(ceph_mon_quorum_status == 0) <= (count(ceph_mon_metadata) - floor(count(ceph_mon_metadata) / 2) + 1)
+ for: 30s
+ labels:
+ severity: warning
+ type: ceph_default
+ - alert: CephMonDiskspaceCritical
+ annotations:
+ description: The free space available to a monitor's store is critically low.
+ You should increase the space available to the monitor(s). The default directory
+ is /var/lib/ceph/mon-*/data/store.db on traditional deployments, and /var/lib/rook/mon-*/data/store.db
+ on the mon pod's worker node for Rook. Look for old, rotated versions of
+ *.log and MANIFEST*. Do NOT touch any *.sst files. Also check any other
+ directories under /var/lib/rook and other directories on the same filesystem,
+      often /var/log and /var/tmp are culprits. Your monitor hosts are: {{- range
+ query "ceph_mon_metadata"}} - {{ .Labels.hostname }} {{- end }}
+ documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-disk-crit
+ summary: Filesystem space on at least one monitor is critically low
+ expr: ceph_health_detail{name="MON_DISK_CRIT"} == 1
+ for: 1m
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.3.2
+ severity: critical
+ type: ceph_default
+ - alert: CephMonDiskspaceLow
+ annotations:
+ description: The space available to a monitor's store is approaching full
+ (>70% is the default). You should increase the space available to the monitor(s).
+ The default directory is /var/lib/ceph/mon-*/data/store.db on traditional
+ deployments, and /var/lib/rook/mon-*/data/store.db on the mon pod's worker
+ node for Rook. Look for old, rotated versions of *.log and MANIFEST*. Do
+ NOT touch any *.sst files. Also check any other directories under /var/lib/rook
+ and other directories on the same filesystem, often /var/log and /var/tmp
+      are culprits. Your monitor hosts are: {{- range query "ceph_mon_metadata"}}
+ - {{ .Labels.hostname }} {{- end }}
+ documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-disk-low
+ summary: Drive space on at least one monitor is approaching full
+ expr: ceph_health_detail{name="MON_DISK_LOW"} == 1
+ for: 5m
+ labels:
+ severity: warning
+ type: ceph_default
+ - alert: CephMonClockSkew
+ annotations:
+ description: Ceph monitors rely on closely synchronized time to maintain quorum
+ and cluster consistency. This event indicates that the time on at least
+ one mon has drifted too far from the lead mon. Review cluster status with
+ ceph -s. This will show which monitors are affected. Check the time sync
+ status on each monitor host with 'ceph time-sync-status' and the state and
+ peers of your ntpd or chrony daemon.
+ documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-clock-skew
+ summary: Clock skew detected among monitors
+ expr: ceph_health_detail{name="MON_CLOCK_SKEW"} == 1
+ for: 1m
+ labels:
+ severity: warning
+ type: ceph_default
+ - name: osd
+ rules:
+ - alert: CephOSDDownHigh
+ annotations:
+ description: '{{ $value | humanize }}% or {{ with query "count(ceph_osd_up
+ == 0)" }}{{ . | first | value }}{{ end }} of {{ with query "count(ceph_osd_up)"
+ }}{{ . | first | value }}{{ end }} OSDs are down (>= 10%). The following
+ OSDs are down: {{- range query "(ceph_osd_up * on(ceph_daemon) group_left(hostname)
+ ceph_osd_metadata) == 0" }} - {{ .Labels.ceph_daemon }} on {{ .Labels.hostname
+ }} {{- end }}'
+ summary: More than 10% of OSDs are down
+ expr: count(ceph_osd_up == 0) / count(ceph_osd_up) * 100 >= 10
+ for: 5m
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.4.1
+ severity: critical
+ type: ceph_default
+ - alert: CephOSDHostDown
+ annotations:
+ description: 'The following OSDs are down: {{- range query "(ceph_osd_up *
+ on(ceph_daemon) group_left(hostname) ceph_osd_metadata) == 0" }} - {{ .Labels.hostname
+ }} : {{ .Labels.ceph_daemon }} {{- end }}'
+ summary: An OSD host is offline
+ expr: ceph_health_detail{name="OSD_HOST_DOWN"} == 1
+ for: 5m
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.4.8
+ severity: warning
+ type: ceph_default
+ - alert: CephOSDDown
+ annotations:
+ description: |
+ {{ $num := query "count(ceph_osd_up == 0)" | first | value }}{{ $s := "" }}{{ if gt $num 1.0 }}{{ $s = "s" }}{{ end }}{{ $num }} OSD{{ $s }} down for over 5mins. The following OSD{{ $s }} {{ if eq $s "" }}is{{ else }}are{{ end }} down: {{- range query "(ceph_osd_up * on(ceph_daemon) group_left(hostname) ceph_osd_metadata) == 0"}} - {{ .Labels.ceph_daemon }} on {{ .Labels.hostname }} {{- end }}
+ documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#osd-down
+ summary: An OSD has been marked down
+ expr: ceph_health_detail{name="OSD_DOWN"} == 1
+ for: 5m
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.4.2
+ severity: warning
+ type: ceph_default
+ - alert: CephOSDNearFull
+ annotations:
+ description: One or more OSDs have reached the NEARFULL threshold. Use 'ceph
+ health detail' and 'ceph osd df' to identify the problem. To resolve, add
+ capacity to the affected OSD's failure domain, restore down/out OSDs, or
+ delete unwanted data.
+ documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#osd-nearfull
+ summary: OSD(s) running low on free space (NEARFULL)
+ expr: ceph_health_detail{name="OSD_NEARFULL"} == 1
+ for: 5m
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.4.3
+ severity: warning
+ type: ceph_default
+ - alert: CephOSDFull
+ annotations:
+ description: An OSD has reached the FULL threshold. Writes to pools that share
+ the affected OSD will be blocked. Use 'ceph health detail' and 'ceph osd
+ df' to identify the problem. To resolve, add capacity to the affected OSD's
+ failure domain, restore down/out OSDs, or delete unwanted data.
+ documentation: https://docs.ceph.com/en/latest/rados/operations/health-checks#osd-full
+ summary: OSD full, writes blocked
+ expr: ceph_health_detail{name="OSD_FULL"} > 0
+ for: 1m
+ labels:
+ oid: 1.3.6.1.4.1.50495.1.2.1.4.6
+ severity: critical
+ type: ceph_default
+ - alert: CephOSDBackfillFull
+ annotations:
+ description: An OSD has reached the BACKFILL FULL threshold. This will prevent
+ rebalance operations from completing. Use 'ceph health detail' and 'ceph
+ osd df' to identify the problem. To resolve, add capacity to the affected
[Diff truncated by flux-local]
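One of the rules above, `CephOSDDownHigh`, fires on `count(ceph_osd_up == 0) / count(ceph_osd_up) * 100 >= 10`. On a three-OSD cluster like the one defined in this PR, a single down OSD already crosses that 10% threshold; a sketch of the same arithmetic:

```python
# The CephOSDDownHigh threshold from the PrometheusRule above:
# count(ceph_osd_up == 0) / count(ceph_osd_up) * 100 >= 10
total_osds = 3   # one OSD per node in this cluster
down_osds = 1
down_pct = down_osds / total_osds * 100
fires = down_pct >= 10

print(round(down_pct, 1), fires)   # ~33.3% of OSDs down, so the alert fires
```

In other words, on small clusters this critical alert and the warning-level `CephOSDDown` will fire together for any single OSD outage.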
--- HelmRelease: rook-ceph/rook-ceph-cluster VolumeSnapshotClass: rook-ceph/csi-ceph-blockpool
+++ HelmRelease: rook-ceph/rook-ceph-cluster VolumeSnapshotClass: rook-ceph/csi-ceph-blockpool
@@ -0,0 +1,14 @@
+---
+apiVersion: snapshot.storage.k8s.io/v1
+kind: VolumeSnapshotClass
+metadata:
+ name: csi-ceph-blockpool
+ annotations:
+ snapshot.storage.kubernetes.io/is-default-class: 'false'
+driver: rook-ceph.rbd.csi.ceph.com
+parameters:
+ clusterID: rook-ceph
+ csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
+ csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph
+deletionPolicy: Delete
+ |
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.
This suggestion is invalid because no changes were made to the code.
Suggestions cannot be applied while the pull request is closed.
Suggestions cannot be applied while viewing a subset of changes.
Only one suggestion per line can be applied in a batch.
Add this suggestion to a batch that can be applied as a single commit.
Applying suggestions on deleted lines is not supported.
You must change the existing code in this line in order to create a valid suggestion.
Outdated suggestions cannot be applied.
This suggestion has been applied or marked resolved.
Suggestions cannot be applied from pending reviews.
Suggestions cannot be applied on multi-line comments.
Suggestions cannot be applied while the pull request is queued to merge.
Suggestion cannot be applied right now. Please check back later.
No description provided.