diff --git a/content/changelog.md b/content/changelog.md index a28a238ca..d4b9fdd40 100644 --- a/content/changelog.md +++ b/content/changelog.md @@ -1,5 +1,35 @@ # Changelog +## v0.12.x + +### v0.12.1 + +#### Fix + +- Resolved memory consumption problems in multiple controllers by reducing the number of reconciliations. + +### v0.12.0 + +#### Enhanced + +- Updated Tenant CR to v1beta3, more details in [Tenant CRD](./crds-api-reference/tenant.md) +- Added custom pricing support for Opencost, more details in [Opencost](./crds-api-reference/integration-config.md#Custom-Pricing-Model) + +#### Fix + +- Resolved an issue in Templates that prevented the deployment of public helm charts. + +## v0.11.x + +### v0.11.0 + +#### Feature + +- Added support for configuring an external Keycloak in the IntegrationConfig. +- Added free tier support that allows creation of 2 tenants without a license. + ## v0.10.x ### v0.10.6 diff --git a/content/crds-api-reference/extensions.md b/content/crds-api-reference/extensions.md new file mode 100644 index 000000000..c12429198 --- /dev/null +++ b/content/crds-api-reference/extensions.md @@ -0,0 +1,55 @@ +# Extensions + +Extensions in MTO enhance its functionality by allowing integration with external services. Currently, MTO supports integration with ArgoCD, enabling you to synchronize your repositories and configure AppProjects directly through MTO. Future updates will include support for additional integrations. + +## Configuring ArgoCD Integration + +Let us take a look at how you can create an Extension CR and integrate ArgoCD with MTO. + +Before you create an Extension CR, you need to modify the Integration Config resource and add the ArgoCD configuration. + +```yaml + integrations: + argocd: + clusterResourceWhitelist: + - group: tronador.stakater.com + kind: EnvironmentProvisioner + namespaceResourceBlacklist: + - group: '' + kind: ResourceQuota + namespace: openshift-operators +``` + +The above configuration whitelists the `EnvironmentProvisioner` custom resource for syncing and blacklists the `ResourceQuota` resource. Also note that the `namespace` field is mandatory and should be set to the namespace where ArgoCD is deployed. + +Every Extension CR is associated with a specific Tenant. Here's an example of an Extension CR that is associated with a Tenant named `tenant-sample`: + +```yaml +apiVersion: tenantoperator.stakater.com/v1alpha1 +kind: Extensions +metadata: + name: extensions-sample +spec: + tenantName: tenant-sample + argoCDConfig: + purgeAppProjectOnDelete: true + sourceRepos: + - "github.com/stakater/repo" + appProject: + clusterResourceWhitelist: + - group: "" + kind: "Pod" + namespaceResourceBlacklist: + - group: "v1" + kind: "ConfigMap" +``` + +The above CR creates an Extension for the Tenant named `tenant-sample` with the following configurations: + +- `purgeAppProjectOnDelete`: If set to `true`, the AppProject will be deleted when the Extension is deleted. + +- `sourceRepos`: List of repositories to sync with ArgoCD. +- `appProject`: Configuration for the AppProject. + - `clusterResourceWhitelist`: List of cluster-scoped resources to sync. + - `namespaceResourceBlacklist`: List of namespace-scoped resources to ignore. + +In the backend, MTO will create an ArgoCD AppProject with the specified configurations.
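+ +Below is a sketch of the kind of AppProject MTO generates in the backend for the example above. The name and namespace shown are assumptions for illustration only (based on ArgoCD's standard AppProject schema and the ArgoCD namespace configured in the IntegrationConfig); the source repositories, whitelist, and blacklist mirror the Extension CR: + +```yaml +apiVersion: argoproj.io/v1alpha1 +kind: AppProject +metadata: + name: tenant-sample # assumed: named after the tenant + namespace: openshift-operators # the ArgoCD namespace from the IntegrationConfig +spec: + sourceRepos: + - "github.com/stakater/repo" + clusterResourceWhitelist: + - group: "" + kind: "Pod" + namespaceResourceBlacklist: + - group: "v1" + kind: "ConfigMap" +```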
diff --git a/content/crds-api-reference/integration-config.md b/content/crds-api-reference/integration-config.md index 5602cf241..0fd723004 100644 --- a/content/crds-api-reference/integration-config.md +++ b/content/crds-api-reference/integration-config.md @@ -3,12 +3,256 @@ IntegrationConfig is used to configure settings of multi-tenancy for Multi Tenant Operator. ```yaml -apiVersion: tenantoperator.stakater.com/v1alpha1 +apiVersion: tenantoperator.stakater.com/v1beta1 kind: IntegrationConfig metadata: name: tenant-operator-config namespace: multi-tenant-operator -spec: +spec: + components: + console: true + showback: true + ingress: + ingressClassName: 'nginx' + keycloak: + host: tenant-operator-keycloak.apps.mycluster-ams.abcdef.cloud + tlsSecretName: tenant-operator-tls + console: + host: tenant-operator-console.apps.mycluster-ams.abcdef.cloud + tlsSecretName: tenant-operator-tls + gateway: + host: tenant-operator-gateway.apps.mycluster-ams.abcdef.cloud + tlsSecretName: tenant-operator-tls + trustedRootCert: my-custom-cert + accessControl: + rbac: + tenantRoles: + default: + owner: + clusterRoles: + - admin + editor: + clusterRoles: + - edit + viewer: + clusterRoles: + - view + - viewer + custom: + - labelSelector: + matchExpressions: + - key: stakater.com/kind + operator: In + values: + - build + matchLabels: + stakater.com/kind: dev + owner: + clusterRoles: + - custom-owner + editor: + clusterRoles: + - custom-editor + viewer: + clusterRoles: + - custom-viewer + - custom-view + namespaceAccessPolicy: + deny: + privilegedNamespaces: + users: + - system:serviceaccount:openshift-argocd:argocd-application-controller + - adam@stakater.com + groups: + - cluster-admins + privileged: + namespaces: + - ^default$ + - ^openshift.* + - ^kube.* + serviceAccounts: + - ^system:serviceaccount:openshift.* + - ^system:serviceaccount:kube.* + users: + - '' + groups: + - cluster-admins + metadata: + groups: + labels: + role: customer-reader + annotations: + openshift.io/node-selector: node-role.kubernetes.io/worker= + namespaces: + labels: + stakater.com/workload-monitoring: "true" + annotations: + openshift.io/node-selector: node-role.kubernetes.io/worker= + sandboxes: + labels: + stakater.com/kind: sandbox + annotations: + openshift.io/node-selector: node-role.kubernetes.io/worker= + integrations: + argocd: + clusterResourceWhitelist: + - group: tronador.stakater.com + kind: EnvironmentProvisioner + namespaceResourceBlacklist: + - group: '' # all groups + kind: ResourceQuota + namespace: openshift-operators + vault: + enabled: true + authMethod: kubernetes #enum: {kubernetes:default, token} + accessInfo: + accessorPath: oidc/ + address: https://vault.apps.prod.abcdefghi.kubeapp.cloud/ + roleName: mto + secretRef: + name: '' + namespace: '' + config: + ssoClient: vault +``` + +Following are the different components that can be used to configure multi-tenancy in a cluster via Multi Tenant Operator. + +## Components + +```yaml + components: + console: true + showback: true + ingress: + ingressClassName: nginx + keycloak: + host: tenant-operator-keycloak.apps.mycluster-ams.abcdef.cloud + tlsSecretName: tenant-operator-tls + console: + host: tenant-operator-console.apps.mycluster-ams.abcdef.cloud + tlsSecretName: tenant-operator-tls + gateway: + host: tenant-operator-gateway.apps.mycluster-ams.abcdef.cloud + tlsSecretName: tenant-operator-tls + trustedRootCert: my-custom-cert +``` + +- `components.console:` Enables or disables the console GUI for MTO.
+- `components.showback:` Enables or disables the showback feature on the console. +- `components.ingress:` Configures the ingress settings for various components: + - `ingressClassName:` Ingress class to be used for the ingress. + - `console:` Settings for the console's ingress. + - `host:` Hostname for the console's ingress. + - `tlsSecretName:` Name of the secret containing the TLS certificate and key for the console's ingress. + - `gateway:` Settings for the gateway's ingress. + - `host:` Hostname for the gateway's ingress. + - `tlsSecretName:` Name of the secret containing the TLS certificate and key for the gateway's ingress. + - `keycloak:` Settings for Keycloak's ingress. + - `host:` Hostname for Keycloak's ingress. + - `tlsSecretName:` Name of the secret containing the TLS certificate and key for Keycloak's ingress. +- `components.trustedRootCert:` Name of the secret containing the root CA certificate. + +Here's an example of how to generate the secrets required to configure MTO: + +**TLS Secret for Ingress:** + +Create a TLS secret containing your SSL/TLS certificate and key for secure communication. This secret will be used for the Console, Gateway, and Keycloak ingresses. + +```bash +kubectl -n multi-tenant-operator create secret tls <tls-secret-name> --key=<path-to-tls.key> --cert=<path-to-tls.crt> +``` + +**Trusted Root Certificate Secret:** + +If using a custom certificate authority (CA) or self-signed certificates, create a Kubernetes secret containing your root CA certificate. This is required to ensure MTO components trust the custom certificates. + +```bash +kubectl -n multi-tenant-operator create secret generic <secret-name> --from-file=<path-to-ca.crt> +``` + +> Note: `trustedRootCert` and `tlsSecretName` are optional. If not provided, MTO will use the default root CA certificate and secrets respectively. + +IntegrationConfig manages the following resources required for the console GUI: + +- `MTO Postgresql` resources. + +- `MTO Prometheus` resources. +- `MTO Opencost` resources. +- `MTO Console, Gateway, Keycloak` resources. +- `Showback` cronjob. + +Details on console GUI and showback can be found [here](../explanation/console.md) + +## Access Control + +```yaml +accessControl: + rbac: + tenantRoles: + default: + owner: + clusterRoles: + - admin + editor: + clusterRoles: + - edit + viewer: + clusterRoles: + - view + - viewer + custom: + - labelSelector: + matchExpressions: + - key: stakater.com/kind + operator: In + values: + - build + matchLabels: + stakater.com/kind: dev + owner: + clusterRoles: + - custom-owner + editor: + clusterRoles: + - custom-editor + viewer: + clusterRoles: + - custom-viewer + - custom-view + namespaceAccessPolicy: + deny: + privilegedNamespaces: + users: + - system:serviceaccount:openshift-argocd:argocd-application-controller + - adam@stakater.com + groups: + - cluster-admins + privileged: + namespaces: + - ^default$ + - ^openshift.* + - ^kube.* + serviceAccounts: + - ^system:serviceaccount:openshift.* + - ^system:serviceaccount:kube.* + users: + - '' + groups: + - cluster-admins +``` + +### RBAC + +RBAC is used to configure the roles that will be applied to each Tenant namespace. + +#### TenantRoles + +TenantRoles are required within the IntegrationConfig, as they are used for defining what roles will be applied to each Tenant namespace. The field allows optional custom roles that are then used to create RoleBindings for namespaces that match a labelSelector.
+ +> ⚠️ If you do not configure roles in any way, then the default OpenShift roles of `owner`, `edit`, and `view` will apply to Tenant members. Their details can be found [here](../reference-guides/custom-roles.md) + +```yaml +rbac: tenantRoles: default: owner: @@ -40,107 +284,13 @@ spec: clusterRoles: - custom-viewer - custom-view - openshift: - project: - labels: - stakater.com/workload-monitoring: "true" - annotations: - openshift.io/node-selector: node-role.kubernetes.io/worker= - group: - labels: - role: customer-reader - sandbox: - labels: - stakater.com/kind: sandbox - clusterAdminGroups: - - cluster-admins - privilegedNamespaces: - - ^default$ - - ^openshift-* - - ^kube-* - privilegedServiceAccounts: - - ^system:serviceaccount:openshift-* - - ^system:serviceaccount:kube-* - namespaceAccessPolicy: - deny: - privilegedNamespaces: - users: - - system:serviceaccount:openshift-argocd:argocd-application-controller - - adam@stakater.com - groups: - - cluster-admins - argocd: - namespace: openshift-operators - namespaceResourceBlacklist: - - group: '' # all groups - kind: ResourceQuota - clusterResourceWhitelist: - - group: tronador.stakater.com - kind: EnvironmentProvisioner - rhsso: - enabled: true - realm: customer - endpoint: - url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/ - secretReference: - name: auth-secrets - namespace: openshift-auth - vault: - enabled: true - accessorPath: oidc/ - address: 'https://vault.apps.prod.abcdefghi.kubeapp.cloud/' - roleName: mto - sso: - clientName: vault -``` - -Following are the different components that can be used to configure multi-tenancy in a cluster via Multi Tenant Operator. - -## TenantRoles - -TenantRoles are required within the IntegrationConfig, as they are used for defining what roles will be applied to each Tenant namespace. The field allows optional custom roles, that are then used to create RoleBindings for namespaces that match a labelSelector. - -> ⚠️ If you do not configure roles in any way, then the default OpenShift roles of `owner`, `edit`, and `view` will apply to Tenant members. Their details can be found [here](../how-to-guides/custom-roles.md) - -```yaml -tenantRoles: - default: - owner: - clusterRoles: - - admin - editor: - clusterRoles: - - edit - viewer: - clusterRoles: - - view - - viewer - custom: - - labelSelector: - matchExpressions: - - key: stakater.com/kind - operator: In - values: - - build - matchLabels: - stakater.com/kind: dev - owner: - clusterRoles: - - custom-owner - editor: - clusterRoles: - - custom-editor - viewer: - clusterRoles: - - custom-viewer - - custom-view ``` -### Default +##### Default This field contains roles that will be used to create default roleBindings for each namespace that belongs to tenants. These roleBindings are only created for a namespace if that namespace isn't already matched by the `custom` field below it. Therefore, it is required to have at least one role mentioned within each of its three subfields: `owner`, `editor`, and `viewer`. These 3 subfields also correspond to the member fields of the [Tenant CR](./tenant.md#tenant) -### Custom +##### Custom An array of custom roles. Similar to the `default` field, you can mention roles within this field as well. However, the custom roles also require the use of a `labelSelector` for each iteration within the array. The roles mentioned here will only apply to the namespaces that are matched by the labelSelector. If a namespace is matched by 2 different labelSelectors, then both roles will apply to it. 
Additionally, roles can be skipped within the labelSelector. These missing roles are then inherited from the `default` roles field. For example, if the following custom roles arrangement is used: @@ -161,58 +311,91 @@ custom: Then the `editor` and `viewer` roles will be taken from the `default` roles field, since `default` is required to have at least one role mentioned in each of its subfields. -## OpenShift +### Namespace Access Policy + +Namespace Access Policy is used to restrict CRUD operations on privileged namespaces and to configure which namespaces, service accounts, users, and groups are ignored by MTO. + +```yaml +namespaceAccessPolicy: + deny: + privilegedNamespaces: + groups: + - cluster-admins + users: + - system:serviceaccount:openshift-argocd:argocd-application-controller + - adam@stakater.com + privileged: + namespaces: + - ^default$ + - ^openshift.* + - ^kube.* + serviceAccounts: + - ^system:serviceaccount:openshift.* + - ^system:serviceaccount:kube.* + users: + - '' + groups: + - cluster-admins +``` -``` yaml -openshift: - project: +#### Deny + +`namespaceAccessPolicy.deny:` Can be used to restrict privileged *users/groups* from performing CRUD operations over managed namespaces. + +#### Privileged + +##### Namespaces + +`privileged.namespaces:` Contains the list of `namespaces` ignored by MTO. MTO will not manage the `namespaces` in this list. Treatment for privileged namespaces does not involve further integrations or finalizers processing as with normal namespaces. Values in this list are regex patterns. + +For example: + +- To ignore the `default` namespace, we can specify `^default$`. +- To ignore all namespaces starting with the `openshift-` prefix, we can specify `^openshift-.*`. +- To ignore any namespace containing `stakater` in its name, we can specify `stakater`. (A constant word given as a regex pattern will match any namespace containing that word.) + +##### ServiceAccounts + +`privileged.serviceAccounts:` Contains the list of `ServiceAccounts` ignored by MTO. MTO will not manage the `ServiceAccounts` in this list. Values in this list are regex patterns. For example, to ignore all `ServiceAccounts` starting with the `system:serviceaccount:openshift-` prefix, we can use `^system:serviceaccount:openshift-.*`; and to ignore a specific service account like `system:serviceaccount:builder`, we can use `^system:serviceaccount:builder$`. + +!!! note + `stakater`, `stakater.` and `stakater.*` will have the same effect. To check out the combinations, go to [Regex101](https://regex101.com/), select Golang, and type your expected regex and test string. + +##### Users + +`privileged.users:` Contains the list of `users` ignored by MTO. MTO will not manage the `users` in this list. Values in this list are regex patterns. + +##### Groups + +`privileged.groups:` Contains names of the groups that are allowed to perform CRUD operations on namespaces present on the cluster. Users in the specified group(s) will be able to perform these operations without MTO getting in their way. MTO does not interfere even with the deletion of privilegedNamespaces. + +!!! note + User `kube:admin` is bypassed by default to perform operations as a cluster admin; this includes operations on all the namespaces. + +> ⚠️ If you want to use a more complex regex pattern (for the `privileged.namespaces` or `privileged.serviceAccounts` field), it is recommended that you test the regex pattern first - either locally or using a platform such as [Regex101](https://regex101.com/).
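+ +As a quick sanity check, you can preview which existing namespaces a pattern would treat as privileged before applying it; a minimal sketch, assuming `kubectl` access and a grep with extended-regex support: + +```bash +# Show the namespaces that the pattern ^openshift.* would mark as privileged +kubectl get namespaces -o custom-columns=NAME:.metadata.name --no-headers | grep -E '^openshift.*' +```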
+ +## Metadata + +```yaml +metadata: + groups: + labels: + role: customer-reader + annotations: {} + namespaces: labels: stakater.com/workload-monitoring: "true" annotations: openshift.io/node-selector: node-role.kubernetes.io/worker= - group: - labels: - role: customer-reader - sandbox: + sandboxes: labels: stakater.com/kind: sandbox - clusterAdminGroups: - - cluster-admins - privilegedNamespaces: - - ^default$ - - ^openshift-* - - ^kube-* - privilegedServiceAccounts: - - ^system:serviceaccount:openshift-* - - ^system:serviceaccount:kube-* - namespaceAccessPolicy: - deny: - privilegedNamespaces: - users: - - system:serviceaccount:openshift-argocd:argocd-application-controller - - adam@stakater.com - groups: - - cluster-admins + annotations: {} ``` -### Project, group and sandbox +### Namespaces, groups and sandboxes -We can use the `openshift.project`, `openshift.group` and `openshift.sandbox` fields to automatically add `labels` and `annotations` to the **Projects** and **Groups** managed via MTO. - -```yaml - openshift: - project: - labels: - stakater.com/workload-monitoring: "true" - annotations: - openshift.io/node-selector: node-role.kubernetes.io/worker= - group: - labels: - role: customer-reader - sandbox: - labels: - stakater.com/kind: sandbox -``` +We can use the `metadata.namespaces`, `metadata.groups` and `metadata.sandboxes` fields to automatically add `labels` and `annotations` to the **Namespaces** and **Groups** managed via MTO. If we want to add default *labels/annotations* to sandbox namespaces of tenants, we simply add them in `metadata.sandboxes.labels`/`metadata.sandboxes.annotations` respectively. @@ -246,143 +429,101 @@ users: - andrew@stakater.com ``` -### Cluster Admin Groups +## Integrations -`clusterAdminGroups:` Contains names of the groups that are allowed to perform CRUD operations on namespaces present on the cluster. Users in the specified group(s) will be able to perform these operations without MTO getting in their way. MTO does not interfere even with the deletion of privilegedNamespaces. - -!!! note - User `kube:admin` is bypassed by default to perform operations as a cluster admin, this includes operations on all the namespaces. - -### Privileged Namespaces - -`privilegedNamespaces:` Contains the list of `namespaces` ignored by MTO. MTO will not manage the `namespaces` in this list. Treatment for privileged namespaces does not involve further integrations or finalizers processing as with normal namespaces. Values in this list are regex patterns. - -For example: - -- To ignore the `default` namespace, we can specify `^default$` -- To ignore all namespaces starting with the `openshift-` prefix, we can specify `^openshift-*`. -- To ignore any namespace containing `stakater` in its name, we can specify `stakater`. (A constant word given as a regex pattern will match any namespace containing that word.) - -### Privileged ServiceAccounts - -`privilegedServiceAccounts:` Contains the list of `ServiceAccounts` ignored by MTO. MTO will not manage the `ServiceAccounts` in this list. Values in this list are regex patterns. For example, to ignore all `ServiceAccounts` starting with the `system:serviceaccount:openshift-` prefix, we can use `^system:serviceaccount:openshift-*`; and to ignore the `system:serviceaccount:builder` service account we can use `^system:serviceaccount:builder$.` - -### Namespace Access Policy - -`namespaceAccessPolicy.Deny:` Can be used to restrict privileged *users/groups* CRUD operation over managed namespaces.
+Integrations configure how MTO connects to external tools. Currently, MTO supports the following integrations: ```yaml -namespaceAccessPolicy: - deny: - privilegedNamespaces: - groups: - - cluster-admins - users: - - system:serviceaccount:openshift-argocd:argocd-application-controller - - adam@stakater.com +integrations: + argocd: + enabled: bool + clusterResourceWhitelist: + - group: tronador.stakater.com + kind: EnvironmentProvisioner + namespaceResourceBlacklist: + - group: '' # all groups + kind: ResourceQuota + namespace: openshift-operators + vault: + enabled: true + authMethod: kubernetes #enum: {kubernetes:default, token} + accessInfo: + accessorPath: oidc/ + address: https://vault.apps.prod.abcdefghi.kubeapp.cloud/ + roleName: mto + secretRef: + name: '' + namespace: '' + config: + ssoClient: vault ``` -> ⚠️ If you want to use a more complex regex pattern (for the `openshift.privilegedNamespaces` or `openshift.privilegedServiceAccounts` field), it is recommended that you test the regex pattern first - either locally or using a platform such as . - -## ArgoCD +### ArgoCD -### Namespace +[ArgoCD](https://argoproj.github.io/argo-cd/) is a declarative, GitOps continuous delivery tool for Kubernetes. It follows the GitOps pattern of using Git repositories as the source of truth for defining the desired application state. ArgoCD uses Kubernetes manifests and configures the applications on the cluster. -`argocd.namespace` is an optional field used to specify the namespace where ArgoCD Applications and AppProjects are deployed. The field should be populated when you want to create an ArgoCD AppProject for each tenant. - -### NamespaceResourceBlacklist - -```yaml -argocd: - namespaceResourceBlacklist: - - group: '' # all resource groups - kind: ResourceQuota - - group: '' - kind: LimitRange - - group: '' - kind: NetworkPolicy -``` - -`argocd.namespaceResourceBlacklist` prevents ArgoCD from syncing the listed resources from your GitOps repo. - -### ClusterResourceWhitelist +If ArgoCD is installed on the cluster, then the ArgoCD integration can be enabled. ```yaml argocd: + enabled: bool clusterResourceWhitelist: - - group: tronador.stakater.com - kind: EnvironmentProvisioner -``` - -`argocd.clusterResourceWhitelist` allows ArgoCD to sync the listed cluster scoped resources from your GitOps repo. - -## Provision - -```yaml -provision: - console: true - showback: true + - group: tronador.stakater.com + kind: EnvironmentProvisioner + namespaceResourceBlacklist: + - group: '' # all groups + kind: ResourceQuota + namespace: openshift-operators ``` -`provision.console:` Can be used to enable/disable console GUI for MTO. -`provision.showback:` Can be used to enable/disable showback feature on the console. - -Integration config will be managing the following resources required for console GUI: - -- `Showback` cronjob. -- `Keycloak` deployment. -- `MTO-OpenCost` operator. -- `MTO-Prometheus` operator. -- `MTO-Postgresql` stateful set. +- `argocd.clusterResourceWhitelist` allows ArgoCD to sync the listed cluster-scoped resources from your GitOps repo. +- `argocd.namespaceResourceBlacklist` prevents ArgoCD from syncing the listed resources from your GitOps repo. +- `argocd.namespace` is an optional field used to specify the namespace where ArgoCD Applications and AppProjects are deployed. The field should be populated when you want to create an ArgoCD AppProject for each tenant.
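+ +Once the integration is reconciled, the AppProjects MTO manages can be listed in the configured ArgoCD namespace; an illustrative check, assuming `openshift-operators` as in the example above: + +```bash +# List AppProjects in the ArgoCD namespace +kubectl get appprojects.argoproj.io -n openshift-operators +```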
-Details on console GUI and showback can be found [here](../explanation/console.md) +### Vault -## RHSSO (Red Hat Single Sign-On) - -Red Hat Single Sign-On [RHSSO](https://access.redhat.com/products/red-hat-single-sign-on) is based on the Keycloak project and enables you to secure your web applications by providing Web single sign-on (SSO) capabilities based on popular standards such as SAML 2.0, OpenID Connect and OAuth 2.0. +[Vault](https://www.vaultproject.io/) is used to secure, store and tightly control access to tokens, passwords, certificates, encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API. -If `RHSSO` is configured on a cluster, then RHSSO configuration can be enabled. +If Vault is configured on a cluster, then the Vault integration can be enabled. ```yaml -rhsso: +vault: enabled: true - realm: customer - endpoint: - secretReference: - name: auth-secrets - namespace: openshift-auth - url: https://iam-keycloak-auth.apps.prod.abcdefghi.kubeapp.cloud/ + authMethod: kubernetes #enum: {kubernetes:default, token} + accessInfo: + accessorPath: oidc/ + address: https://vault.apps.prod.abcdefghi.kubeapp.cloud/ + roleName: mto + secretRef: + name: '' + namespace: '' + config: + ssoClient: vault ``` -If enabled, then admins have to provide secret and URL of RHSSO. +If enabled, then admins have to specify the `authMethod` to be used for authentication. MTO supports two authentication methods: -- `secretReference.name:` Will contain the name of the secret. -- `secretReference.namespace:` Will contain the namespace of the secret. -- `realm:` Will contain the realm name which is configured for users. -- `url:` Will contain the URL of RHSSO. +- `kubernetes`: This is the default authentication method. It uses the Kubernetes authentication method to authenticate with Vault. +- `token`: This method uses a Vault token to authenticate with Vault. -## Vault +#### AuthMethod - Kubernetes -[Vault](https://www.vaultproject.io/) is used to secure, store and tightly control access to tokens, passwords, certificates, encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API. +If `authMethod` is set to `kubernetes`, then admins have to specify the following fields: -If `vault` is configured on a cluster, then Vault configuration can be enabled. +- `accessInfo.accessorPath:` Accessor path within Vault to fetch the SSO accessor ID. +- `accessInfo.address:` Valid Vault address reachable within the cluster. +- `accessInfo.roleName:` Vault's Kubernetes authentication role. +- `config.ssoClient:` SSO client name. -```yaml -Vault: - enabled: true - accessorPath: oidc/ - address: 'https://vault.apps.prod.abcdefghi.kubeapp.cloud/' - roleName: mto - sso: - clientName: vault -``` +#### AuthMethod - Token -If enabled, then admins have to provide following details: +If `authMethod` is set to `token`, then admins have to specify the following fields: - `accessorPath:` Accessor Path within Vault to fetch SSO accessorID - `address:` Valid Vault address reachable within cluster. -- `roleName:` Vault's Kubernetes authentication role -- `sso.clientName:` SSO client name. +- `secretRef:` Secret containing the Vault token. + - `name:` Name of the secret containing the Vault token. + - `namespace:` Namespace of the secret containing the Vault token.
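+ +For the `token` method, the referenced secret can be created ahead of time; a minimal sketch, assuming the token is stored under a `token` key (the secret name and key used here are illustrative, not a confirmed MTO contract): + +```bash +kubectl -n multi-tenant-operator create secret generic vault-token \ + --from-literal=token=<vault-token> +```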
For more details around enabling Kubernetes auth in Vault, visit [here](https://developer.hashicorp.com/vault/docs/auth/kubernetes) @@ -423,3 +564,27 @@ path "identity/group/id/*" { capabilities = ["create", "read", "update", "patch", "delete", "list"] } ``` + +### Custom Pricing Model + +You can modify IntegrationConfig to customise the default pricing model. Here is what you need at `IntegrationConfig.Spec.components`: + +```yaml +components: + console: true # should be enabled + showback: true # should be enabled + # add below and override any default value + # you can also remove the ones you do not need + customPricingModel: + CPU: "0.031611" + spotCPU: "0.006655" + RAM: "0.004237" + spotRAM: "0.000892" + GPU: "0.95" + storage: "0.00005479452" + zoneNetworkEgress: "0.01" + regionNetworkEgress: "0.01" + internetNetworkEgress: "0.12" +``` + +After modifying your default IntegrationConfig in `multi-tenant-operator` namespace, a configmap named `opencost-custom-pricing` will be updated. You will be able to see updated pricing info in `mto-console`. diff --git a/content/crds-api-reference/quota.md b/content/crds-api-reference/quota.md index ec19d18b0..34cbdb73a 100644 --- a/content/crds-api-reference/quota.md +++ b/content/crds-api-reference/quota.md @@ -46,16 +46,20 @@ Bill then proceeds to create a tenant for Anna, while also linking the newly cre ```yaml kubectl create -f - << EOF -apiVersion: tenantoperator.stakater.com/v1beta2 +apiVersion: tenantoperator.stakater.com/v1beta3 kind: Tenant metadata: name: bluesky spec: - owners: - users: - - anna@stakater.com quota: small - sandbox: false + accessControl: + owners: + users: + - anna@aurora.org + - anthony@aurora.org + namespaces: + sandboxes: + enabled: true EOF ``` @@ -94,17 +98,20 @@ Once the quota is created, Bill will create the tenant and set the quota field t ```yaml kubectl create -f - << EOF -apiVersion: tenantoperator.stakater.com/v1beta2 +apiVersion: tenantoperator.stakater.com/v1beta3 kind: Tenant metadata: name: bluesky spec: - owners: - users: - - anna@aurora.org - - anthony@aurora.org - quota: medium - sandbox: true + quota: small + accessControl: + owners: + users: + - anna@aurora.org + - anthony@aurora.org + namespaces: + sandboxes: + enabled: true EOF ``` @@ -132,16 +139,20 @@ Once the quota is created, Bill will create the tenant and set the quota field t ```yaml kubectl create -f - << EOF -apiVersion: tenantoperator.stakater.com/v1beta2 +apiVersion: tenantoperator.stakater.com/v1beta3 kind: Tenant metadata: - name: sigma + name: bluesky spec: - owners: - users: - - dave@aurora.org quota: small - sandbox: true + accessControl: + owners: + users: + - anna@aurora.org + - anthony@aurora.org + namespaces: + sandboxes: + enabled: true EOF ``` diff --git a/content/crds-api-reference/tenant.md b/content/crds-api-reference/tenant.md index ddb906dac..c0a309273 100644 --- a/content/crds-api-reference/tenant.md +++ b/content/crds-api-reference/tenant.md @@ -1,11 +1,9 @@ # Tenant -Cluster scoped resource: - -The smallest valid Tenant definition is given below (with just one field in its spec): +A minimal Tenant definition requires only a quota field, essential for limiting resource consumption: ```yaml -apiVersion: tenantoperator.stakater.com/v1beta2 +apiVersion: tenantoperator.stakater.com/v1beta3 kind: Tenant metadata: name: alpha @@ -13,118 +11,113 @@ spec: quota: small ``` -Here is a more detailed Tenant definition, explained below: +For a more comprehensive setup, a detailed Tenant definition includes various configurations: 
```yaml -apiVersion: tenantoperator.stakater.com/v1beta2 +apiVersion: tenantoperator.stakater.com/v1beta3 kind: Tenant metadata: - name: alpha + name: tenant-sample spec: - owners: # optional - users: # optional - - dave@stakater.com - groups: # optional - - alpha - editors: # optional - users: # optional - - jack@stakater.com - viewers: # optional - users: # optional - - james@stakater.com - quota: medium # required - sandboxConfig: # optional - enabled: true # optional - private: true # optional - onDelete: # optional - cleanNamespaces: false # optional - cleanAppProject: true # optional - argocd: # optional - sourceRepos: # required - - https://github.com/stakater/gitops-config - appProject: # optional - clusterResourceWhitelist: # optional - - group: tronador.stakater.com - kind: Environment - namespaceResourceBlacklist: # optional - - group: "" - kind: ConfigMap - hibernation: # optional - sleepSchedule: 23 * * * * # required - wakeSchedule: 26 * * * * # required - namespaces: # optional - withTenantPrefix: # optional + quota: small + accessControl: + owners: + users: + - kubeadmin + groups: + - admin-group + editors: + users: + - devuser1 + - devuser2 + groups: + - dev-group + viewers: + users: + - viewuser + groups: + - view-group + hibernation: + # UTC time + sleepSchedule: "20 * * * *" + wakeSchedule: "40 * * * *" + namespaces: + sandboxes: + enabled: true + private: true + withoutTenantPrefix: + - analytics + - marketing + withTenantPrefix: - dev - - build - withoutTenantPrefix: # optional - - preview - commonMetadata: # optional - labels: # optional - stakater.com/team: alpha - annotations: # optional - openshift.io/node-selector: node-role.kubernetes.io/infra= - specificMetadata: # optional - - annotations: # optional - stakater.com/user: dave - labels: # optional - stakater.com/sandbox: true - namespaces: # optional - - alpha-dave-stakater-sandbox - templateInstances: # optional - - spec: # optional - template: networkpolicy # required - sync: true # optional - parameters: # optional - - name: CIDR_IP - value: "172.17.0.0/16" - selector: # optional - matchLabels: # optional - policy: network-restriction + - staging + onDeletePurgeNamespaces: true + metadata: + common: + labels: + common-label: common-value + annotations: + common-annotation: common-value + sandbox: + labels: + sandbox-label: sandbox-value + annotations: + sandbox-annotation: sandbox-value + specific: + - namespaces: + - tenant-sample-dev + labels: + specific-label: specific-dev-value + annotations: + specific-annotation: specific-dev-value + desc: "This is a sample tenant setup for the v1beta3 version." ``` -* Tenant has 3 kinds of `Members`. Each member type should have different roles assigned to them. These roles are gotten from the [IntegrationConfig's TenantRoles field](integration-config.md#tenantroles). You can customize these roles to your liking, but by default the following configuration applies: - * `Owners:` Users who will be owners of a tenant. They will have OpenShift admin-role assigned to their users, with additional access to create namespaces as well. - * `Editors:` Users who will be editors of a tenant. They will have OpenShift edit-role assigned to their users. - * `Viewers:` Users who will be viewers of a tenant. They will have OpenShift view-role assigned to their users. +## Access Control -* `Users` can be linked to the tenant by specifying there username in `owners.users`, `editors.users` and `viewers.users` respectively. 
+Structured access control is critical for managing roles and permissions within a tenant effectively. It divides users into three categories, each with customizable privileges. This design enables precise role-based access management. -* `Groups` can be linked to the tenant by specifying the group name in `owners.groups`, `editors.groups` and `viewers.groups` respectively. +These roles are obtained from [IntegrationConfig's TenantRoles field](integration-config.md#tenantroles). + +* `Owners`: Have full administrative rights, including resource management and namespace creation. Their roles are crucial for high-level management tasks. +* `Editors`: Granted permissions to modify resources, enabling them to support day-to-day operations without full administrative access. +* `Viewers`: Provide read-only access, suitable for oversight and auditing without the ability to alter resources. -* Tenant will have a `Quota` to limit resource consumption. +Users and groups are linked to these roles by specifying their usernames or group names in the respective fields under `owners`, `editors`, and `viewers`. -* `sandboxConfig` is used to configure the tenant user sandbox feature - * Setting `enabled` to *true* will create *sandbox namespaces* for owners and editors. - * Sandbox will follow the following naming convention **{TenantName}**-**{UserName}**-*sandbox*. - * In case of groups, the sandbox namespaces will be created for each member of the group. - * Setting `private` to *true* will make those sandboxes be only visible to the user they belong to. By default, sandbox namespaces are visible to all tenant members +## Quota + +The `quota` field sets the resource limits for the tenant, such as CPU and memory usage, to prevent any single tenant from consuming a disproportionate amount of resources. This mechanism ensures efficient resource allocation and fosters fair usage practices across all tenants. -* `onDelete` is used to tell Multi Tenant Operator what to do when a Tenant is deleted. - * `cleanNamespaces` if the value is set to **true** *MTO* deletes all *tenant namespaces* when a `Tenant` is deleted. Default value is **false**. - * `cleanAppProject` will keep the generated ArgoCD AppProject if the value is set to **false**. By default, the value is **true**. +For more information on quotas, please refer [here](./quota.md). + +## Namespaces + +Controls the creation and management of namespaces within the tenant: + +* `sandboxes`: + * When enabled, sandbox namespaces are created with the following naming convention - **{TenantName}**-**{UserName}**-*sandbox*. + * In case of groups, the sandbox namespaces will be created for each member of the group. + * Setting `private` to *true* will make the sandboxes visible only to the user they belong to. By default, sandbox namespaces are visible to all tenant members. -* `argocd` is required if you want to create an ArgoCD AppProject for the tenant. - * `sourceRepos` contain a list of repositories that point to your GitOps. - * `appProject` is used to set the `clusterResourceWhitelist` and `namespaceResourceBlacklist` resources. If these are also applied via `IntegrationConfig` then those applied via Tenant CR will have higher precedence for given Tenant. +* `withoutTenantPrefix`: Lists the namespaces to be created without automatically prefixing them with the tenant name, useful for shared or common resources. +* `withTenantPrefix`: Namespaces listed here will be prefixed with the tenant name, ensuring easy identification and isolation. 
+* `onDeletePurgeNamespaces`: Determines whether namespaces associated with the tenant should be deleted upon the tenant's deletion, enabling clean up and resource freeing. +* `metadata`: Configures metadata like labels and annotations that are applied to namespaces managed by the tenant: + * `common`: Applies specified labels and annotations across all namespaces within the tenant, ensuring consistent metadata for resources and workloads. + * `sandbox`: Special metadata for sandbox namespaces, which can include templated annotations or labels for dynamic information. + * We also support the use of a templating mechanism within annotations, specifically allowing the inclusion of the tenant's username through the placeholder `{{ TENANT.USERNAME }}`. This template can be utilized to dynamically insert the tenant's username value into annotations, for example, as `username: {{ TENANT.USERNAME }}`. + * `specific`: Allows applying unique labels and annotations to specified tenant namespaces, enabling custom configurations for particular workloads or environments. -* `hibernation` can be used to create a schedule during which the namespaces belonging to the tenant will be put to sleep. The values of the `sleepSchedule` and `wakeSchedule` fields must be a string in a cron format. +## Hibernation -* Namespaces can also be created via tenant CR by *specifying names* in `namespaces`. - * Multi Tenant Operator will append *tenant name* prefix while creating namespaces if the list of namespaces is under the `withTenantPrefix` field, so the format will be **{TenantName}**-**{Name}**. - * Namespaces listed under the `withoutTenantPrefix` will be created with the given name. Writing down namespaces here that already exist within the cluster are not allowed. - * `stakater.com/kind: {Name}` label will also be added to the namespaces. +`hibernation` allows for the scheduling of inactive periods for namespaces associated with the tenant, effectively putting them into a "sleep" mode. This capability is designed to conserve resources during known periods of inactivity. -* `commonMetadata` can be used to distribute common labels and annotations among tenant namespaces. - * `labels` distributes provided labels among all tenant namespaces - * `annotations` distributes provided annotations among all tenant namespaces +* Configuration for this feature involves two key fields, `sleepSchedule` and `wakeSchedule`, both of which accept strings formatted according to cron syntax. +* These schedules dictate when the namespaces will automatically transition into and out of hibernation, aligning resource usage with actual operational needs. -* `specificMetadata` can be used to distribute specific labels and annotations among specific tenant namespaces. - * `labels` distributes given labels among specific tenant namespaces - * `annotations` distributes given annotations among specific tenant namespaces - * `namespaces` consists a list of specific tenant namespaces across which the labels and annotations will be distributed +## Description -* Tenant automatically deploys `template` resource mentioned in `templateInstances` to matching tenant namespaces. - * `Template` resources are created in those `namespaces` which belong to a `tenant` and contain `matching labels`. - * `Template` resources are created in all `namespaces` of a `tenant` if `selector` field is empty. +`desc` provides a human-readable description of the tenant, aiding in documentation and at-a-glance understanding of the tenant's purpose and configuration. 
-> ⚠️ If same label or annotation key is being applied using different methods provided, then the highest precedence will be given to `specificMetadata` followed by `commonMetadata` and in the end would be the ones applied from `openshift.project.labels`/`openshift.project.annotations` in `IntegrationConfig` +> ⚠️ If the same label or annotation key is applied using more than one of the methods provided, then the highest precedence will be given to `namespaces.metadata.specific`, followed by `namespaces.metadata.common`, and lastly to the ones applied from `metadata.namespaces.labels`/`metadata.namespaces.annotations` in the `IntegrationConfig` diff --git a/content/how-to-guides/configuring-multitenant-network-isolation.md b/content/how-to-guides/configuring-multitenant-network-isolation.md index 613b351a0..a381f7376 100644 --- a/content/how-to-guides/configuring-multitenant-network-isolation.md +++ b/content/how-to-guides/configuring-multitenant-network-isolation.md @@ -70,11 +70,11 @@ spec: privileged: namespaces: - default - - ^openshift-* - - ^kube-* + - ^openshift.* + - ^kube.* serviceAccounts: - - ^system:serviceaccount:openshift-* - - ^system:serviceaccount:kube-* + - ^system:serviceaccount:openshift.* + - ^system:serviceaccount:kube.* ``` Bill has added a new label `tenant-network-policy: "true"` in the project section of the IntegrationConfig; now MTO will add that label to all tenant projects. diff --git a/content/how-to-guides/distributing-secrets-using-sealed-secret-template.md b/content/how-to-guides/distributing-secrets-using-sealed-secret-template.md index d78eada01..b769aa2ae 100644 --- a/content/how-to-guides/distributing-secrets-using-sealed-secret-template.md +++ b/content/how-to-guides/distributing-secrets-using-sealed-secret-template.md @@ -34,39 +34,39 @@ For this, he can use the support for [common](../tutorials/tenant/assigning-meta Bill has to specify a label on namespaces in which he needs the secret. He can add it to all namespaces inside a tenant or some specific namespaces depending on the use case. ```yaml -apiVersion: tenantoperator.stakater.com/v1beta2 +apiVersion: tenantoperator.stakater.com/v1beta3 kind: Tenant metadata: name: bluesky spec: - owners: - users: - - anna@aurora.org - - anthony@aurora.org - editors: - users: - - john@aurora.org - groups: - - alpha quota: small + accessControl: + owners: + users: + - anna@aurora.org + - anthony@aurora.org + editors: + users: + - john@aurora.org + groups: + - alpha namespaces: + sandboxes: + enabled: false withTenantPrefix: - dev - build - prod - - # use this if you want to add label to some specific namespaces - specificMetadata: - - namespaces: - - test-namespace - labels: - distribute-image-pull-secret: true - - # use this if you want to add label to all namespaces under your tenant - commonMetadata: - labels: - distribute-image-pull-secret: true - + withoutTenantPrefix: [] + metadata: + specific: + - namespaces: + - bluesky-test-namespace + labels: + distribute-image-pull-secret: "true" + common: + labels: + distribute-image-pull-secret: "true" ``` Bill has added support for a new label `distribute-image-pull-secret: "true"` for tenant namespaces; MTO will now add that label depending on the field used.
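+ +To confirm which namespaces carry the label (and will therefore receive the secret), a quick check such as the following can be used: + +```bash +# Namespaces labeled for secret distribution +kubectl get namespaces -l distribute-image-pull-secret=true +```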
diff --git a/content/how-to-guides/enabling-openshift-dev-workspace.md b/content/how-to-guides/enabling-openshift-dev-workspace.md index bb2f3c912..e0c566114 100644 --- a/content/how-to-guides/enabling-openshift-dev-workspace.md +++ b/content/how-to-guides/enabling-openshift-dev-workspace.md @@ -19,31 +19,33 @@ DevWorkspaces require specific metadata on a namespace for it to work in it. Wit With Multi Tenant Operator (MTO), you can set `sandboxMetadata` like below to automate metadata for all sandboxes: ```yaml -apiVersion: tenantoperator.stakater.com/v1beta2 +apiVersion: tenantoperator.stakater.com/v1beta3 kind: Tenant metadata: name: bluesky spec: - owners: - users: - - anna@acme.org - editors: - users: - - erik@acme.org - viewers: - users: - - john@acme.org quota: small - sandboxConfig: - enabled: true - private: false - - sandboxMetadata: - labels: - app.kubernetes.io/part-of: che.eclipse.org - app.kubernetes.io/component: workspaces-namespace - annotations: - che.eclipse.org/username: "{{ TENANT.USERNAME }}" + accessControl: + owners: + users: + - anna@acme.org + editors: + users: + - erik@acme.org + viewers: + users: + - john@acme.org + namespaces: + sandboxes: + enabled: true + private: false + metadata: + sandbox: + labels: + app.kubernetes.io/part-of: che.eclipse.org + app.kubernetes.io/component: workspaces-namespace + annotations: + che.eclipse.org/username: "{{ TENANT.USERNAME }}" ``` It will create sandbox namespaces and also apply the `sandboxMetadata` for owners and editors. Notice the template `{{ TENANT.USERNAME }}`, it will resolve the username as value of the corresponding annotation. For more info on templated value, see [here](../explanation/templated-metadata-values.md) @@ -66,11 +68,11 @@ spec: privileged: namespaces: - ^default$ - - ^openshift-* - - ^kube-* + - ^openshift.* + - ^kube.* serviceAccounts: - - ^system:serviceaccount:openshift-* - - ^system:serviceaccount:kube-* + - ^system:serviceaccount:openshift.* + - ^system:serviceaccount:kube.* - ^system:serviceaccount:stakater-actions-runner-controller:actions-runner-controller-runner-deployment$ rbac: tenantRoles: diff --git a/content/how-to-guides/keycloak.md b/content/how-to-guides/keycloak.md index 5a6ada3a0..ea46ab5b3 100644 --- a/content/how-to-guides/keycloak.md +++ b/content/how-to-guides/keycloak.md @@ -39,22 +39,23 @@ Now, at this point, a user will be authenticated to the MTO Console. But in orde * Open Tenant CR: In the OpenShift cluster, locate and open the Tenant Custom Resource (CR) that you wish to give access to. You will see a YAML file similar to the following example: ```yaml -apiVersion: tenantoperator.stakater.com/v1beta2 +apiVersion: tenantoperator.stakater.com/v1beta3 kind: Tenant metadata: name: arsenal spec: quota: small - owners: - users: + accessControl: + owners: + users: - gabriel@arsenal.com - groups: - - arsenal - editors: - users: + groups: + - arsenal + editors: + users: - hakimi@arsenal.com - viewers: - users: + viewers: + users: - neymar@arsenal.com ``` diff --git a/content/how-to-guides/mattermost.md b/content/how-to-guides/mattermost.md index aba11ae58..b83623c26 100644 --- a/content/how-to-guides/mattermost.md +++ b/content/how-to-guides/mattermost.md @@ -12,22 +12,24 @@ Bill wants some tenants to also have their own Mattermost Teams. To make sure th The label will enable the `mto-mattermost-integration-operator` to create and manage Mattermost Teams based on Tenants. 
```yaml -apiVersion: tenantoperator.stakater.com/v1beta2 +apiVersion: tenantoperator.stakater.com/v1beta3 kind: Tenant metadata: name: sigma labels: stakater.com/mattermost: 'true' spec: - owners: - users: - - user - editors: - users: - - user1 quota: medium - sandbox: false + accessControl: + owners: + users: + - user + editors: + users: + - user1 namespaces: + sandboxes: + enabled: false withTenantPrefix: - dev - build diff --git a/content/installation/openshift.md b/content/installation/openshift.md index a671157df..21174f2f6 100644 --- a/content/installation/openshift.md +++ b/content/installation/openshift.md @@ -8,6 +8,8 @@ This document contains instructions on installing, uninstalling and configuring 1. [Enabling Console](#enabling-console) +1. [License configuration](#license-configuration) + 1. [Uninstall](#uninstall-via-operatorhub-ui) ## Requirements @@ -139,6 +141,29 @@ spec: * Now the `InstallPlan` will be approved, and MTO console components will be installed. +## License Configuration + +A free license is included with the installation, allowing you to create a maximum of 2 [Tenants](../tutorials/tenant/create-tenant.md). + +A paid license is offered as well. It requires a ConfigMap named `license` in MTO's namespace (`multi-tenant-operator`). To obtain this ConfigMap, contact [`sales@stakater.com`](mailto:sales@stakater.com). It would look like this (placeholders shown in angle brackets): + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: license + namespace: multi-tenant-operator +data: + payload.json: | + { + "metaData": { + "tier" : "paid", + "company": "<company-name>" + } + } + signature.base64.txt: <base64-encoded-signature> +``` + ## Uninstall via OperatorHub UI You can uninstall MTO by following these steps: diff --git a/content/tutorials/tenant/assigning-metadata.md b/content/tutorials/tenant/assigning-metadata.md index 691b79956..d33ae192a 100644 --- a/content/tutorials/tenant/assigning-metadata.md +++ b/content/tutorials/tenant/assigning-metadata.md @@ -1,112 +1,120 @@ -# Assigning metadata +# Assigning Metadata in Tenant Custom Resources -## Assigning Common/Specific Metadata +In the v1beta3 version of the Tenant Custom Resource (CR), metadata assignment has been refined to offer granular control over labels and annotations across different namespaces associated with a tenant. This functionality enables precise and flexible management of metadata, catering to both general and specific needs. -### Distributing common labels and annotations to tenant namespaces via Tenant Custom Resource +## Distributing Common Labels and Annotations -Bill now wants to add labels/annotations to all the namespaces for a tenant. To create those labels/annotations Bill will just add them into `commonMetadata.labels`/`commonMetadata.annotations` field in the tenant CR. +To apply common labels and annotations across all namespaces within a tenant, the `namespaces.metadata.common` field in the Tenant CR is utilized. This approach ensures that essential metadata is uniformly present across all namespaces, supporting consistent identification, management, and policy enforcement.
```yaml kubectl apply -f - << EOF -apiVersion: tenantoperator.stakater.com/v1beta2 +apiVersion: tenantoperator.stakater.com/v1beta3 kind: Tenant metadata: name: bluesky spec: - owners: - users: - - anna@aurora.org - - anthony@aurora.org - editors: - users: - - john@aurora.org - groups: - - alpha quota: small + accessControl: + owners: + users: + - anna@aurora.org + - anthony@aurora.org + editors: + users: + - john@aurora.org + groups: + - alpha namespaces: withTenantPrefix: - dev - build - prod - commonMetadata: - labels: - app.kubernetes.io/managed-by: tenant-operator - app.kubernetes.io/part-of: tenant-alpha - annotations: - openshift.io/node-selector: node-role.kubernetes.io/infra= + metadata: + common: + labels: + app.kubernetes.io/managed-by: tenant-operator + app.kubernetes.io/part-of: tenant-alpha + annotations: + openshift.io/node-selector: node-role.kubernetes.io/infra= EOF + ``` -With the above configuration all tenant namespaces will now contain the mentioned labels and annotations. +By configuring the `namespaces.metadata.common` field as shown, all namespaces within the tenant will inherit the specified labels and annotations. -### Distributing specific labels and annotations to tenant namespaces via Tenant Custom Resource +## Distributing Specific Labels and Annotations -Bill now wants to add labels/annotations to specific namespaces for a tenant. To create those labels/annotations Bill will just add them into `specificMetadata.labels`/`specificMetadata.annotations` and specific namespaces in `specificMetadata.namespaces` field in the tenant CR. +For scenarios requiring targeted application of labels and annotations to specific namespaces, the Tenant CR's `namespaces.metadata.specific` field is designed. This feature enables the assignment of unique metadata to designated namespaces, accommodating specialized configurations and requirements. ```yaml kubectl apply -f - << EOF -apiVersion: tenantoperator.stakater.com/v1beta2 +apiVersion: tenantoperator.stakater.com/v1beta3 kind: Tenant metadata: name: bluesky spec: - owners: - users: - - anna@aurora.org - - anthony@aurora.org - editors: - users: - - john@aurora.org - groups: - - alpha quota: small - sandboxConfig: - enabled: true + accessControl: + owners: + users: + - anna@aurora.org + - anthony@aurora.org + editors: + users: + - john@aurora.org + groups: + - alpha namespaces: withTenantPrefix: - dev - build - prod - specificMetadata: - - namespaces: - - bluesky-anna-aurora-sandbox - labels: - app.kubernetes.io/is-sandbox: true - annotations: - openshift.io/node-selector: node-role.kubernetes.io/worker= + metadata: + specific: + - namespaces: + - bluesky-dev + labels: + app.kubernetes.io/is-sandbox: "true" + annotations: + openshift.io/node-selector: node-role.kubernetes.io/worker= EOF ``` -With the above configuration all tenant namespaces will now contain the mentioned labels and annotations. +This configuration directs the specific labels and annotations solely to the enumerated namespaces, enabling distinct settings for particular environments. -## Assigning metadata to all sandboxes +## Assigning Metadata to Sandbox Namespaces -Bill can choose to apply metadata to sandbox namespaces only by using `sandboxMetadata` property of Tenant CR like below: +To specifically address *sandbox namespaces* within the tenant, the `namespaces.metadata.sandbox` property of the Tenant CR is employed. 
This section allows for the distinct management of sandbox namespaces, enhancing security and differentiation in development or testing environments. ```yaml -apiVersion: tenantoperator.stakater.com/v1beta2 +apiVersion: tenantoperator.stakater.com/v1beta3 kind: Tenant metadata: name: bluesky spec: - owners: - users: - - anna@aurora.org - - anthony@aurora.org - editors: - users: - - john@aurora.org - groups: - - alpha quota: small - sandboxConfig: - enabled: true - private: true - sandboxMetadata: # metadata for all sandbox namespaces - labels: - app.kubernetes.io/part-of: che.eclipse.org - annotations: - che.eclipse.org/username: "{{ TENANT.USERNAME }}" # templated placeholder + accessControl: + owners: + users: + - anna@aurora.org + - anthony@aurora.org + editors: + users: + - john@aurora.org + groups: + - alpha + namespaces: + sandboxes: + enabled: true + private: true + metadata: + sandbox: + labels: + app.kubernetes.io/part-of: che.eclipse.org + annotations: + che.eclipse.org/username: "{{ TENANT.USERNAME }}" # templated placeholder ``` -We are using a templated annotation here. See more on supported templated values for labels and annotations for specific MTO CRs [here](../../explanation/templated-metadata-values.md) +This setup ensures that all sandbox namespaces receive the designated metadata, with support for templated values, such as **{{ TENANT.USERNAME }}**, allowing dynamic customization based on the tenant or user context. + +These enhancements in metadata management within the `v1beta3` version of the Tenant CR provide comprehensive and flexible tools for labeling and annotating namespaces, supporting a wide range of organizational, security, and operational objectives. diff --git a/content/tutorials/tenant/create-sandbox.md b/content/tutorials/tenant/create-sandbox.md index 40bb8560d..488657837 100644 --- a/content/tutorials/tenant/create-sandbox.md +++ b/content/tutorials/tenant/create-sandbox.md @@ -1,34 +1,38 @@ # Create Sandbox Namespaces for Tenant Users -## Assigning Users Sandbox Namespace +Sandbox namespaces offer a personal development and testing space for users within a tenant. This guide covers how to enable and configure sandbox namespaces for tenant users, along with setting privacy and applying metadata specifically for these sandboxes. -Bill assigned the ownership of `bluesky` to `Anna` and `Anthony`. Now if the users want sandboxes to be made for them, they'll have to ask `Bill` to enable `sandbox` functionality. +## Enabling Sandbox Namespaces -To enable that, Bill will just set `enabled: true` within the `sandboxConfig` field +Bill has assigned the ownership of the tenant bluesky to Anna and Anthony. To provide them with their sandbox namespaces, he must enable the sandbox functionality in the tenant's configuration. 
diff --git a/content/tutorials/tenant/create-sandbox.md b/content/tutorials/tenant/create-sandbox.md
index 40bb8560d..488657837 100644
--- a/content/tutorials/tenant/create-sandbox.md
+++ b/content/tutorials/tenant/create-sandbox.md
@@ -1,34 +1,38 @@
# Create Sandbox Namespaces for Tenant Users

-## Assigning Users Sandbox Namespace
+Sandbox namespaces offer a personal development and testing space for users within a tenant. This guide covers how to enable and configure sandbox namespaces for tenant users, along with setting privacy and applying metadata specifically for these sandboxes.

-Bill assigned the ownership of `bluesky` to `Anna` and `Anthony`. Now if the users want sandboxes to be made for them, they'll have to ask `Bill` to enable `sandbox` functionality.
+## Enabling Sandbox Namespaces

-To enable that, Bill will just set `enabled: true` within the `sandboxConfig` field
+Bill has assigned the ownership of the tenant bluesky to Anna and Anthony. To provide them with their sandbox namespaces, he must enable the sandbox functionality in the tenant's configuration.
+
+To enable sandbox namespaces, Bill updates the Tenant Custom Resource (CR) with `sandboxes.enabled: true`:

```yaml
kubectl apply -f - << EOF
-apiVersion: tenantoperator.stakater.com/v1beta2
+apiVersion: tenantoperator.stakater.com/v1beta3
kind: Tenant
metadata:
  name: bluesky
spec:
-  owners:
-    users:
-      - anna@aurora.org
-      - anthony@aurora.org
-  editors:
-    users:
-      - john@aurora.org
-    groups:
-      - alpha
  quota: small
-  sandboxConfig:
-    enabled: true
+  accessControl:
+    owners:
+      users:
+        - anna@aurora.org
+        - anthony@aurora.org
+    editors:
+      users:
+        - john@aurora.org
+      groups:
+        - alpha
+  namespaces:
+    sandboxes:
+      enabled: true
EOF
```

-With the above configuration `Anna` and `Anthony` will now have new sandboxes created
+This configuration automatically generates sandbox namespaces for Anna, Anthony, and even John (as an editor) with the naming convention `<tenantName>-<userName>-sandbox`.

```bash
kubectl get namespaces
@@ -38,35 +42,37 @@
bluesky-anthony-aurora-sandbox   Active   5d5h
bluesky-john-aurora-sandbox      Active   5d5h
```

-If Bill wants to make sure that only the sandbox owner can view his sandbox namespace, he can achieve this by setting `private: true` within the `sandboxConfig` filed.
+### Creating Private Sandboxes

-## Create Private Sandboxes
-
-Bill assigned the ownership of `bluesky` to `Anna` and `Anthony`. Now if the users want sandboxes to be made for them, they'll have to ask `Bill` to enable `sandbox` functionality. The Users also want to make sure that the sandboxes that are created for them are also only visible to the user they belong to. To enable that, Bill will just set `enabled: true` and `private: true` within the `sandboxConfig` field
+To address privacy concerns where users require their sandbox namespaces to be visible only to themselves, Bill can set `sandboxes.private: true` in the Tenant CR:

```yaml
kubectl apply -f - << EOF
-apiVersion: tenantoperator.stakater.com/v1beta2
+apiVersion: tenantoperator.stakater.com/v1beta3
kind: Tenant
metadata:
  name: bluesky
spec:
-  owners:
-    users:
-      - anna@aurora.org
-      - anthony@aurora.org
-  editors:
-    users:
-      - john@aurora.org
-    groups:
-      - alpha
  quota: small
-  sandboxConfig:
-    enabled: true
-    private: true
+  accessControl:
+    owners:
+      users:
+        - anna@aurora.org
+        - anthony@aurora.org
+    editors:
+      users:
+        - john@aurora.org
+      groups:
+        - alpha
+  namespaces:
+    sandboxes:
+      enabled: true
+      private: true
EOF
```

+With `private: true`, each sandbox namespace is accessible and visible only to its designated user, enhancing privacy and security.
+
With the above configuration `Anna` and `Anthony` will now have new sandboxes created

```bash
@@ -85,34 +91,41 @@
NAME                          STATUS   AGE
bluesky-anna-aurora-sandbox   Active   5d5h
```
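+
+To see the privacy setting in action, a user can attempt to read another user's sandbox. A sketch of such a check, assuming Anna's session and the sandbox names shown above; with `private: true` the request should be denied:
+
+```bash
+# Run as anna@aurora.org: reading Anthony's sandbox is expected to be forbidden
+kubectl get namespace bluesky-anthony-aurora-sandbox
+```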
-## Set metadata on sandbox namespaces
+## Applying Metadata to Sandbox Namespaces

-If you want to have a common metadata on all sandboxes, you can add `sandboxMetadata` to Tenant like below:
+For uniformity or to apply specific policies, Bill might need to add common metadata, such as labels or annotations, to all sandbox namespaces. This is achievable through the `namespaces.metadata.sandbox` configuration:

```yaml
-apiVersion: tenantoperator.stakater.com/v1beta2
+kubectl apply -f - << EOF
+apiVersion: tenantoperator.stakater.com/v1beta3
kind: Tenant
metadata:
  name: bluesky
spec:
-  owners:
-    users:
-      - anna@aurora.org
-      - anthony@aurora.org
-  editors:
-    users:
-      - john@aurora.org
-    groups:
-      - alpha
  quota: small
-  sandboxConfig:
-    enabled: true
-    private: true
-  sandboxMetadata:
-    labels:
-      app.kubernetes.io/part-of: che.eclipse.org
-    annotations:
-      che.eclipse.org/username: "{{ TENANT.USERNAME }}" # templated placeholder
+  accessControl:
+    owners:
+      users:
+        - anna@aurora.org
+        - anthony@aurora.org
+    editors:
+      users:
+        - john@aurora.org
+      groups:
+        - alpha
+  namespaces:
+    sandboxes:
+      enabled: true
+      private: true
+    metadata:
+      sandbox:
+        labels:
+          app.kubernetes.io/part-of: che.eclipse.org
+        annotations:
+          che.eclipse.org/username: "{{ TENANT.USERNAME }}"
+EOF
```

-Note: In above Tenant, we have used a templated annotation value `"{{ TENANT.USERNAME }}"`. It will resolve to user of the respective sandbox namespace. For more info on it, see [here](../../explanation/templated-metadata-values.md)
+The templated annotation `"{{ TENANT.USERNAME }}"` dynamically inserts the username of the sandbox owner, personalizing the sandbox environment. This capability is particularly useful for integrating with other systems or applications that might utilize this metadata for configuration or access control.
+
+Through the examples demonstrated, Bill can efficiently manage sandbox namespaces for tenant users, ensuring they have the necessary resources for development and testing while maintaining privacy and organizational policies.
diff --git a/content/tutorials/tenant/create-tenant.md b/content/tutorials/tenant/create-tenant.md
index a15579ef0..f1de2b7a2 100644
--- a/content/tutorials/tenant/create-tenant.md
+++ b/content/tutorials/tenant/create-tenant.md
@@ -1,31 +1,39 @@
# Creating a Tenant

-Bill is a cluster admin who receives a new request from Aurora Solutions CTO asking for a new tenant for Anna's team.
+Bill, a cluster admin, has been tasked by the CTO of Aurora Solutions to set up a new tenant for Anna's team. Following the request, Bill proceeds to create a new tenant named bluesky in the Kubernetes cluster.

-Bill creates a new tenant called `bluesky` in the cluster:
+## Setting Up the Tenant
+
+To establish the tenant, Bill crafts a Tenant Custom Resource (CR) with the necessary specifications:

```yaml
kubectl create -f - << EOF
-apiVersion: tenantoperator.stakater.com/v1beta2
+apiVersion: tenantoperator.stakater.com/v1beta3
kind: Tenant
metadata:
  name: bluesky
spec:
-  owners:
-    users:
-      - anna@aurora.org
-      - anthony@aurora.org
-  editors:
-    users:
-      - john@aurora.org
-    groups:
-      - alpha
  quota: small
-  sandbox: false
+  accessControl:
+    owners:
+      users:
+        - anna@aurora.org
+    editors:
+      users:
+        - john@aurora.org
+      groups:
+        - alpha
+  namespaces:
+    sandboxes:
+      enabled: false
EOF
```

-Bill checks if the new tenant is created:
+In this configuration, Bill specifies `anna@aurora.org` as the owner, giving her full administrative rights over the tenant. The editor role is assigned to `john@aurora.org` and the group `alpha`, providing them with editing capabilities within the tenant's scope.
+
+## Verifying the Tenant Creation
+
+After creating the tenant, Bill checks its status to confirm it's active and operational:

```bash
kubectl get tenants.tenantoperator.stakater.com bluesky
@@ -33,14 +41,22 @@
NAME      STATE    AGE
bluesky   Active   3m
```

-Anna can now log in to the cluster and check if she can create namespaces
+This output indicates that the tenant bluesky is successfully created and in an active state.
+
+## Checking User Permissions
+
+To ensure the roles and permissions are correctly assigned, Anna logs into the cluster to verify her capabilities:
+
+**Namespace Creation:**

```bash
kubectl auth can-i create namespaces
yes
```

-However, cluster resources are not accessible to Anna
+Anna is confirmed to have the ability to create namespaces within the tenant's scope.
+
+**Cluster Resources Access:**

```bash
kubectl auth can-i get namespaces
@@ -50,9 +66,50 @@
kubectl auth can-i get persistentvolumes
no
```

-Including the `Tenant` resource
+As expected, Anna does not have access to broader cluster resources outside the tenant's confines.
+
+**Tenant Resource Access:**

```bash
kubectl auth can-i get tenants.tenantoperator.stakater.com
no
```
+
+Access to the Tenant resource itself is also restricted, aligning with the principle of least privilege.
+
+## Adding Multiple Owners to a Tenant
+
+Later, if there's a need to grant administrative privileges to another user, such as Anthony, Bill can easily update the tenant's configuration to include multiple owners:
+
+```yaml
+kubectl apply -f - << EOF
+apiVersion: tenantoperator.stakater.com/v1beta3
+kind: Tenant
+metadata:
+  name: bluesky
+spec:
+  quota: small
+  accessControl:
+    owners:
+      users:
+        - anna@aurora.org
+        - anthony@aurora.org
+    editors:
+      users:
+        - john@aurora.org
+      groups:
+        - alpha
+  namespaces:
+    sandboxes:
+      enabled: false
+EOF
+```
+
+With this update, both Anna and Anthony can administer the tenant bluesky, including the creation of namespaces:
+
+```bash
+kubectl auth can-i create namespaces
+yes
+```
+
+This flexible approach allows Bill to manage tenant access control efficiently, ensuring that the team's operational needs are met while maintaining security and governance standards.
diff --git a/content/tutorials/tenant/creating-namespaces.md b/content/tutorials/tenant/creating-namespaces.md
index c045a2986..32b21ffd0 100644
--- a/content/tutorials/tenant/creating-namespaces.md
+++ b/content/tutorials/tenant/creating-namespaces.md
@@ -1,26 +1,31 @@
-# Creating Namespaces
+# Creating Namespaces through Tenant Custom Resource

-## Creating Namespaces via Tenant Custom Resource
+Bill, tasked with structuring namespaces for different environments within a tenant, utilizes the Tenant Custom Resource (CR) to streamline this process efficiently. Here's how Bill can orchestrate the creation of `dev`, `build`, and `production` environments for the tenant members directly through the Tenant CR.

-Bill now wants to create namespaces for `dev`, `build` and `production` environments for the tenant members. To create those namespaces Bill will just add those names within the `namespaces` field in the tenant CR. If Bill wants to append the tenant name as a prefix in namespace name, then he can use `namespaces.withTenantPrefix` field. Else he can use `namespaces.withoutTenantPrefix` for namespaces for which he does not need tenant name as a prefix.
+## Strategy for Namespace Creation
+
+To facilitate the environment setup, Bill decides to categorize the namespaces based on their association with the tenant's name.
+He opts to use the `namespaces.withTenantPrefix` field for namespaces that should carry the tenant name as a prefix, enhancing clarity and organization. For namespaces that do not require a tenant name prefix, Bill employs the `namespaces.withoutTenantPrefix` field.
+
+Here's how Bill configures the Tenant CR to create these namespaces:

```yaml
kubectl apply -f - << EOF
-apiVersion: tenantoperator.stakater.com/v1beta2
+apiVersion: tenantoperator.stakater.com/v1beta3
kind: Tenant
metadata:
  name: bluesky
spec:
-  owners:
-    users:
-      - anna@aurora.org
-      - anthony@aurora.org
-  editors:
-    users:
-      - john@aurora.org
-    groups:
-      - alpha
  quota: small
+  accessControl:
+    owners:
+      users:
+        - anna@aurora.org
+        - anthony@aurora.org
+    editors:
+      users:
+        - john@aurora.org
+      groups:
+        - alpha
  namespaces:
    withTenantPrefix:
      - dev
@@ -30,17 +35,19 @@ spec:
EOF
```

-With the above configuration tenant members will now see new namespaces have been created.
+This configuration ensures the creation of the desired namespaces, directly correlating them with the bluesky tenant.
+
+Upon applying the above configuration, Bill and the tenant members observe the creation of the following namespaces:

```bash
kubectl get namespaces

NAME            STATUS   AGE
-bluesky-dev    Active   5d5h
-bluesky-build  Active   5d5h
-prod           Active   5d5h
+bluesky-dev    Active   5m
+bluesky-build  Active   5m
+prod           Active   5m
```

-Anna as the tenant owner can create new namespaces for her tenant.
+Anna, as a tenant owner, gains the capability to further customize or create new namespaces within her tenant's scope. For example, creating a `bluesky-production` namespace with the necessary tenant label:

```yaml
apiVersion: v1
@@ -51,55 +58,27 @@ metadata:
    stakater.com/tenant: bluesky
```

-> ⚠️ Anna is required to add the tenant label `stakater.com/tenant: bluesky` which contains the name of her tenant `bluesky`, while creating the namespace. If this label is not added or if Anna does not belong to the `bluesky` tenant, then Multi Tenant Operator will not allow the creation of that namespace.
+> ⚠️ It's crucial for Anna to include the tenant label `stakater.com/tenant: bluesky` to ensure the namespace is recognized as part of the bluesky tenant. Failure to do so, or if Anna is not associated with the bluesky tenant, will result in Multi Tenant Operator (MTO) denying the namespace creation.

-When Anna creates the namespace, MTO assigns Anna and other tenant members the roles based on their user types, such as a tenant owner getting the OpenShift `admin` role for that namespace.
+Following the creation, the MTO dynamically assigns roles to Anna and other tenant members according to their designated user types, ensuring proper access control and operational capabilities within these namespaces.

-As a tenant owner, Anna is able to create namespaces.
+## Incorporating Existing Namespaces into the Tenant via ArgoCD

-If you have enabled [ArgoCD Multitenancy](../../how-to-guides/enabling-multi-tenancy-argocd.md), our preferred solution is to create tenant namespaces by using [Tenant](../../crds-api-reference/tenant.md) spec to avoid syncing issues in ArgoCD console during namespace creation.
+For teams practicing GitOps, existing namespaces can be seamlessly integrated into the [Tenant](../../crds-api-reference/tenant.md) structure by appending the tenant label to the namespace's manifest within the GitOps repository. This approach allows for efficient, automated management of namespace affiliations and access controls, ensuring a cohesive tenant ecosystem.
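+
+For instance, a namespace manifest already tracked in the repository only needs the tenant label added. A sketch, reusing the `bluesky-dev` namespace and the `bluesky` tenant from the examples above:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: bluesky-dev
+  labels:
+    stakater.com/tenant: bluesky
+```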
-## Add Existing Namespaces to Tenant via GitOps
+### Add Existing Namespaces to Tenant via GitOps

Using GitOps as your preferred development workflow, you can add existing namespaces for your tenants by including the tenant label.

To add an existing namespace to your tenant via GitOps:

-1. First, migrate your namespace resource to your “watched” git repository
-1. Edit your namespace `yaml` to include the tenant label
-1. Tenant label follows the naming convention `stakater.com/tenant: `
-1. Sync your GitOps repository with your cluster and allow changes to be propagated
-1. Verify that your Tenant users now have access to the namespace
-
-For example, If Anna, a tenant owner, wants to add the namespace `bluesky-dev` to her tenant via GitOps, after migrating her namespace manifest to a “watched repository”
-
-```yaml
-apiVersion: v1
-kind: Namespace
-metadata:
-  name: bluesky-dev
-```
-
-She can then add the tenant label
-
-```yaml
-  ...
-  labels:
-    stakater.com/tenant: bluesky
-```
-
-Now all the users of the `Bluesky` tenant now have access to the existing namespace.
-
-Additionally, to remove namespaces from a tenant, simply remove the tenant label from the namespace resource and sync your changes to your cluster.
-
-## Remove Namespaces from your Cluster via GitOps
+1. Migrate the namespace resource to the GitOps-monitored repository
+1. Amend the namespace manifest to include the tenant label `stakater.com/tenant: <tenant-name>`
+1. Synchronize the GitOps repository with the cluster to propagate the changes
+1. Validate that the tenant users now have appropriate access to the integrated namespace

- GitOps is a quick and efficient way to automate the management of your K8s resources.
+## Removing Namespaces via GitOps

-To remove namespaces from your cluster via GitOps;
+To disassociate or remove namespaces from the cluster through GitOps, the namespace configuration should be eliminated from the GitOps repository. Additionally, detaching the namespace from any ArgoCD-managed applications by removing the `app.kubernetes.io/instance` label ensures a clean removal without residual dependencies.

-- Remove the `yaml` file containing your namespace configurations from your “watched” git repository.
-- ArgoCD automatically sets the `[app.kubernetes.io/instance](http://app.kubernetes.io/instance)` label on resources it manages. It uses this label it to select resources which inform the basis of an app. To remove a namespace from a managed ArgoCD app, remove the ArgoCD label `app.kubernetes.io/instance` from the namespace manifest.
-- You can edit your namespace manifest through the OpenShift Web Console or with the OpenShift command line tool.
-- Now that you have removed your namespace manifest from your watched git repository, and from your managed ArgoCD apps, sync your git repository and allow your changes be propagated.
-- Verify that your namespace has been deleted.
+Synchronizing the repository post-removal finalizes the deletion process, effectively managing the lifecycle of namespaces within a tenant-operated Kubernetes environment.
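+
+Where a quick detachment is needed outside the repository, the ArgoCD instance label can also be dropped imperatively. A hypothetical one-liner (the trailing dash removes the label); the GitOps repository remains the source of truth and should be updated to match:
+
+```bash
+kubectl label namespace bluesky-dev app.kubernetes.io/instance-
+```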
diff --git a/content/tutorials/tenant/deleting-tenant.md b/content/tutorials/tenant/deleting-tenant.md
index e5c0351f8..8bcd40dea 100644
--- a/content/tutorials/tenant/deleting-tenant.md
+++ b/content/tutorials/tenant/deleting-tenant.md
@@ -1,30 +1,49 @@
-# Deleting a Tenant
+# Deleting a Tenant While Preserving Resources

-## Retaining tenant namespaces and AppProject when a tenant is being deleted
+When managing tenant lifecycles within Kubernetes, certain scenarios require the deletion of a tenant without removing associated namespaces or ArgoCD AppProjects. This ensures that resources and configurations tied to the tenant remain intact for archival or transition purposes.

-Bill now wants to delete tenant `bluesky` and wants to retain all namespaces and AppProject of the tenant. To retain the namespaces Bill will set `spec.onDelete.cleanNamespaces`, and `spec.onDelete.cleanAppProjects` to `false`.
+## Configuration for Retaining Resources
+
+Bill decides to decommission the bluesky tenant but needs to preserve all related namespaces for continuity. To achieve this, he adjusts the Tenant Custom Resource (CR) to prevent the automatic cleanup of these resources upon tenant deletion.

```yaml
-apiVersion: tenantoperator.stakater.com/v1beta2
+kubectl apply -f - << EOF
+apiVersion: tenantoperator.stakater.com/v1beta3
kind: Tenant
metadata:
  name: bluesky
spec:
-  owners:
-    users:
-      - anna@aurora.org
-      - anthony@aurora.org
  quota: small
-  sandboxConfig:
-    enabled: true
+  accessControl:
+    owners:
+      users:
+        - anna@aurora.org
+        - anthony@aurora.org
  namespaces:
+    sandboxes:
+      enabled: true
    withTenantPrefix:
      - dev
      - build
      - prod
-  onDelete:
-    cleanNamespaces: false
-    cleanAppProject: false
+  onDeletePurgeNamespaces: false
+EOF
+```
+
+With the `onDeletePurgeNamespaces` field set to `false`, Bill ensures that the deletion of the bluesky tenant does not trigger the removal of its namespaces. This setup is crucial for cases where the retention of environment setups and deployments is necessary post-tenant deletion.
+
+### Default Behavior
+
+It's important to note the default behavior of the Tenant Operator regarding resource cleanup:
+
+- **Namespaces**: By default, `onDeletePurgeNamespaces` is set to `false`, implying that namespaces are not automatically deleted with the tenant unless explicitly configured.
+
+## Deleting the Tenant
+
+Once the Tenant CR is configured as desired, Bill can proceed to delete the bluesky tenant:
+
+```bash
+kubectl delete tenant bluesky
```

-With the above configuration all tenant namespaces and AppProject will not be deleted when tenant `bluesky` is deleted. By default, the value of `spec.onDelete.cleanNamespaces` is also `false` and `spec.onDelete.cleanAppProject` is `true`
+This command removes the tenant resource from the cluster while leaving the specified namespaces untouched, adhering to the configured `onDeletePurgeNamespaces` policy. This approach provides flexibility in managing the lifecycle of tenant resources, catering to various operational strategies and compliance requirements.
diff --git a/content/tutorials/tenant/tenant-hibernation.md b/content/tutorials/tenant/tenant-hibernation.md
index bb0e59cd8..36b0a8593 100644
--- a/content/tutorials/tenant/tenant-hibernation.md
+++ b/content/tutorials/tenant/tenant-hibernation.md
@@ -1,6 +1,8 @@
# Hibernating a Tenant

-## Hibernating Namespaces
+Implementing hibernation for tenants' namespaces efficiently manages cluster resources by temporarily reducing workload activities during off-peak hours. This guide demonstrates how to configure hibernation schedules for tenant namespaces, leveraging the Tenant and ResourceSupervisor resources for precise control.
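+
+Hibernation schedules use standard five-field cron expressions (minute, hour, day of month, month, day of week). A sketch of a schedule pair, using the same weekday pattern as the full example later in this guide:
+
+```yaml
+hibernation:
+  sleepSchedule: "0 20 * * 1-5"  # scale down at 20:00, Monday-Friday
+  wakeSchedule: "0 8 * * 1-5"    # restore at 08:00, Monday-Friday
+```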
+
+## Configuring Hibernation for Tenant Namespaces

You can manage workloads in your cluster with MTO by implementing a hibernation schedule for your tenants. Hibernation downsizes the running Deployments and StatefulSets in a tenant’s namespace according to a defined cron schedule. You can set a hibernation schedule for your tenants by adding the ‘spec.hibernation’ field to the tenant's respective Custom Resource.

@@ -96,26 +98,23 @@ Bill is a cluster administrator who wants to free up unused cluster resources at
First, Bill creates a tenant with the `hibernation` schedules mentioned in the spec, or adds the hibernation field to an existing tenant:

```yaml
-apiVersion: tenantoperator.stakater.com/v1beta2
+apiVersion: tenantoperator.stakater.com/v1beta3
kind: Tenant
metadata:
  name: sigma
spec:
  hibernation:
-    sleepSchedule: 0 20 * * 1-5
-    wakeSchedule: 0 8 * * 1-5
+    sleepSchedule: "0 20 * * 1-5"  # Sleep at 8 PM on weekdays
+    wakeSchedule: "0 8 * * 1-5"    # Wake at 8 AM on weekdays
-  owners:
-    users:
-      - user
-  editors:
-    users:
-      - user1
+  accessControl:
+    owners:
+      users:
+        - user@example.com
  quota: medium
  namespaces:
    withoutTenantPrefix:
-      - build
-      - stage
      - dev
+      - stage
+      - build
```

The schedules above will put all the `Deployments` and `StatefulSets` within the tenant's namespaces to sleep, by reducing their pod count to 0 at 8 PM every weekday. At 8 AM on weekdays, the namespaces will then wake up by restoring their applications' previous pod counts.
diff --git a/theme_override/mkdocs.yml b/theme_override/mkdocs.yml
index 4e56b9e4d..41eb1dfd5 100644
--- a/theme_override/mkdocs.yml
+++ b/theme_override/mkdocs.yml
@@ -55,6 +55,7 @@ nav:
      - explanation/templated-metadata-values.md
      - explanation/multi-tenancy-vault.md
    - CRDs API Reference:
+      - crds-api-reference/extensions.md
      - crds-api-reference/integration-config.md
      - crds-api-reference/quota.md
      - crds-api-reference/tenant.md