diff --git a/docs/deprecated/clusters/palette-virtual-clusters/add-virtual-cluster-to-host-cluster.md b/docs/deprecated/clusters/palette-virtual-clusters/add-virtual-cluster-to-host-cluster.md
index 319b8aea90..bb0ae65ede 100644
--- a/docs/deprecated/clusters/palette-virtual-clusters/add-virtual-cluster-to-host-cluster.md
+++ b/docs/deprecated/clusters/palette-virtual-clusters/add-virtual-cluster-to-host-cluster.md
@@ -100,8 +100,9 @@ These requirements apply to an Ingress endpoint:
- The Host Cluster must specify a Host domain name service (DNS) Pattern, for example: `*.starship.te.spectrocloud.com`
- To create a valid Host DNS Pattern, you must deploy the NGINX Ingress Controller on the Host Cluster with SSL passthrough
- enabled. This allows transport layer security (TLS) termination to occur at the virtual cluster's Kubernetes API server.
+ To create a valid Host DNS Pattern, you must deploy the NGINX Ingress Controller on the Host Cluster with SSL
+ passthrough enabled. This allows transport layer security (TLS) termination to occur at the virtual cluster's
+ Kubernetes API server.
- A wildcard DNS record must be configured, which maps the Host DNS Pattern to the load balancer associated with the
NGINX Ingress Controller.
diff --git a/docs/docs-content/clusters/cluster-management/node-labels.md b/docs/docs-content/clusters/cluster-management/node-labels.md
new file mode 100644
index 0000000000..97530f9cac
--- /dev/null
+++ b/docs/docs-content/clusters/cluster-management/node-labels.md
@@ -0,0 +1,145 @@
+---
+sidebar_label: "Node Labels"
+title: "Node Labels"
+description: "Learn how to apply node labels to Palette clusters."
+hide_table_of_contents: false
+sidebar_position: 95
+tags: ["clusters", "cluster management"]
+---
+
+Node labels, combined with node selectors, give you the ability to specify which nodes your pods should be scheduled
+on. This is useful in scenarios where pods should be co-located or executed on dedicated nodes. A common use case for
+node labels is to ensure that certain workloads only execute on specific hardware configurations. Labels are optional,
+as the scheduler automatically places pods across nodes when no constraints are specified.
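+
+For reference, outside of Palette you could attach a label to a node directly with kubectl. Palette manages this for
+you at the node pool level instead. In the following sketch, the node name and label are illustrative placeholders.
+
+```shell
+# Attach the label key1=value1 to a node (illustrative node name).
+kubectl label nodes node-1 key1=value1
+
+# Remove the label by appending a dash to the key.
+kubectl label nodes node-1 key1-
+```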
+
+:::tip
+
+You can think of node labels as having the opposite effect to [Taints and Tolerations](./taints.md). Taints allow you to
+mark nodes as not accepting certain pods, while node labels allow you to specify that your pods should only be scheduled
+on certain nodes.
+
+:::
+
+Palette allows you to apply node labels during cluster provisioning. Once the cluster is in a healthy state, labels can
+be modified on the **Nodes** tab of the cluster details page.
+
+This guide covers the Palette UI flow.
+
+:::info
+
+Node labels can also be applied to node pools using the Spectro Cloud
+[Terraform provider](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs).
+
+:::
+
+## Prerequisites
+
+- A [Palette](https://console.spectrocloud.com) account with the permissions to create cluster profiles and manage
+ clusters. Refer to the [Roles and Permissions](../../user-management/palette-rbac/project-scope-roles-permissions.md)
+ guide for more information.
+- [kubectl](https://kubernetes.io/docs/reference/kubectl/) or [K9s](https://k9scli.io/) installed locally.
+
+## Enablement
+
+1. Log in to [Palette](https://console.spectrocloud.com).
+
+2. Navigate to the left **Main Menu** and select **Profiles**.
+
+3. Create a cluster profile to deploy to your environment. Refer to the
+ [Create a Full Profile](../../profiles/cluster-profiles/create-cluster-profiles/create-full-profile.md) guide for
+ more information.
+
+4. Add a manifest to your cluster profile with a custom workload of your choice. Refer to the
+   [Add a Manifest](../../profiles/cluster-profiles/create-cluster-profiles/create-addon-profile/create-manifest-addon.md)
+   guide for additional guidance.
+
+5. Add a node selector to the pod specification of your manifest. Refer to the
+ [Assign Pods to Nodes](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) official
+ documentation page for more details.
+
+ ```yaml
+ nodeSelector:
+ key1: value1
+ ```
+
+ :::info
+
+   You can also specify a node by name with the `nodeName: name` option in your pod specification. We recommend using
+   a node selector, as it provides a more scalable and robust solution.
+
+ When using packs or Helm charts, the `nodeSelector` or `nodeName` options can only be specified if they are exposed
+ for configuration in the `values.yaml` file.
+
+ :::
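+
+   For reference, a complete pod specification that uses this node selector might look like the following minimal
+   sketch. The pod name and `nginx` image are illustrative placeholders for your workload.
+
+   ```yaml
+   apiVersion: v1
+   kind: Pod
+   metadata:
+     name: nginx-labeled
+   spec:
+     containers:
+       - name: nginx
+         image: nginx:1.25
+     # Schedule this pod only on nodes that carry the label key1=value1.
+     nodeSelector:
+       key1: value1
+   ```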
+
+6. Save the changes made to your cluster profile.
+
+7. Navigate to the left **Main Menu** and select **Clusters**.
+
+8. Click on **Add New Cluster**.
+
+9. Fill in the **Basic Information** for your cluster and click **Next**.
+
+10. On the **Cluster Profile** tab, select the cluster profile you previously created. Click **Next**.
+
+11. Select a **Subscription**, **Region**, and **SSH Key** on the **Cluster Config** tab. Click **Next**.
+
+12. On the **Nodes Config** tab, configure your control plane pool and worker pools by providing the instance type,
+    availability zones, and disk size.
+
+13. Both the control plane pool and worker pool provide an **Additional Labels (Optional)** section. Palette accepts
+    labels in the `key:value` format. Fill in the labels corresponding to the values provided in your pod specification
+    node selector. Click on **Next**.
+
+ ![Screenshot of adding node labels during cluster creation](/clusters_cluster-management_node-labels_cluster-creation-labels.webp)
+
+ :::info
+
+ Node labels can also be updated on a deployed cluster by editing a worker node pool from the **Nodes** tab of the
+ cluster details page.
+
+ :::
+
+14. Accept the default settings on the **Cluster Settings** tab and click on **Validate**.
+
+15. Click on **Finish Configuration** and deploy your cluster.
+
+ :::further
+
+ Refer to our [Deploy a Cluster](../../tutorials/cluster-deployment/public-cloud/deploy-k8s-cluster.md) tutorial for
+ detailed guidance on how to deploy a cluster with Palette using Amazon Web Services (AWS), Microsoft Azure, or
+ Google Cloud Platform (GCP) cloud providers.
+
+ :::
+
+## Validate
+
+You can follow these steps to validate that your node labels are applied successfully.
+
+1. Log in to [Palette](https://console.spectrocloud.com).
+
+2. Navigate to the left **Main Menu** and select **Clusters**.
+
+3. Select the cluster you deployed, and download the [kubeconfig](./kubeconfig.md) file.
+
+ ![Screenshot of kubeconfig file download](/clusters_cluster-management_node-labels_kubeconfig-download.webp)
+
+4. Open a terminal window and set the environment variable `KUBECONFIG` to point to the kubeconfig file you downloaded.
+
+   ```shell
+ export KUBECONFIG=~/Downloads/admin.azure-cluster.kubeconfig
+ ```
+
+5. Confirm the cluster deployment process has scheduled your pods as expected. Remember that pods are only scheduled on
+   nodes with labels that match their node selectors.
+
+   ```shell
+ kubectl get pods --all-namespaces --output wide --watch
+ ```
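+
+   You can also confirm that the labels are present on the nodes themselves. For example, assuming the `key1=value1`
+   label used earlier in this guide:
+
+   ```shell
+   # List all nodes together with the labels applied to them.
+   kubectl get nodes --show-labels
+
+   # Alternatively, list only the nodes that carry a specific label.
+   kubectl get nodes --selector key1=value1
+   ```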
+
+ :::tip
+
+ For a more user-friendly experience, consider using [K9s](https://k9scli.io/) or a similar tool to explore your
+ cluster workloads.
+
+ :::
diff --git a/docs/docs-content/clusters/cluster-management/node-pool.md b/docs/docs-content/clusters/cluster-management/node-pool.md
index 714a06f984..8ecb369c9d 100644
--- a/docs/docs-content/clusters/cluster-management/node-pool.md
+++ b/docs/docs-content/clusters/cluster-management/node-pool.md
@@ -59,8 +59,8 @@ settings may not be available.
| **Node pool name** | A descriptive name for the node pool. |
| **Number of nodes in the pool** | Number of nodes to be provisioned for the node pool. For the control plane pool, this number can be 1, 3, or 5. |
| **Allow worker capability** | Select this option to allow workloads to be provisioned on control plane nodes. |
-| **Additional Labels** | Optional labels apply placement constraints on a pod. For example, you can add a label to make a node eligible to receive the workload. To learn more, refer to the [Overview on Labels](taints.md#labels). |
-| **Taints** | Sets toleration to pods and allows (but does not require) the pods to schedule onto nodes with matching taints. To learn more, refer to the [Overview on Taints](taints.md#taints). |
+| **Additional Labels** | Optional labels apply placement constraints on a pod. For example, you can add a label to make a node eligible to receive the workload. To learn more, refer to the [Node Labels](./node-labels.md) guide. |
+| **Taints** | Sets tolerations on pods, allowing (but not requiring) them to schedule onto nodes with matching taints. To learn more, refer to the [Taints and Tolerations](./taints.md) guide. |
| **Availability Zones** | The Availability Zones from which to select available servers for deployment. If you select multiple zones, Palette will deploy servers evenly across them as long as sufficient servers are available to do so. |
| **Disk Size** | Give the required storage size. |
@@ -71,8 +71,8 @@ settings may not be available.
| **Node pool name** | A descriptive name for the worker pool. |
| **Number of nodes in the pool** | Number of nodes to be provisioned for the node pool. |
| **Node repave interval** | The time interval in seconds between repaves. The default value is 0 seconds. |
-| **Additional Labels** | Optional labels apply placement constraints on a pod. For example, you can add a label to make a node eligible to receive the workload. To learn more, refer to the [Overview on Labels](taints.md#labels). |
-| **Taints** | Sets toleration to pods and allows (but does not require) the pods to schedule onto nodes with matching taints. To learn more, refer to the [Overview on Taints](taints.md#apply-taints-to-nodes). |
+| **Additional Labels** | Optional labels apply placement constraints on a pod. For example, you can add a label to make a node eligible to receive the workload. To learn more, refer to the [Node Labels](./node-labels.md) guide. |
+| **Taints** | Sets tolerations on pods, allowing (but not requiring) them to schedule onto nodes with matching taints. To learn more, refer to the [Taints and Tolerations](./taints.md) guide. |
| **Rolling update** | Apply the update policy. **Expand first** launches new nodes and then terminates old notes. **Contract first** terminates old nodes and then launches new ones. |
| **Instance Option** | AWS options for compute capacity. **On Demand** gives you full control over the instance lifecycle without long-term commitment. **Spot** allows the use of spare EC2 capacity at a discount but which can be reclaimed if needed. |
| **Instance Type** | The compute size. |
diff --git a/docs/docs-content/clusters/cluster-management/taints.md b/docs/docs-content/clusters/cluster-management/taints.md
index e2b4e1a2f4..f1228a786f 100644
--- a/docs/docs-content/clusters/cluster-management/taints.md
+++ b/docs/docs-content/clusters/cluster-management/taints.md
@@ -1,64 +1,155 @@
---
-sidebar_label: "Node Labels and Taints"
-title: "Node Labels and Taints"
-description:
- "Learn how to apply labels and taints to nodes in a cluster, and how to specify Namespace labels and annotations to
- Add-on packs and packs for Container Storage Interface (CSI) and Container Network Interface (CNI) drivers."
+sidebar_label: "Taints and Tolerations"
+title: "Taints and Tolerations"
+description: "Learn how to apply taints and tolerations to Palette clusters."
hide_table_of_contents: false
sidebar_position: 100
tags: ["clusters", "cluster management"]
---
-## Taints
+Taints give nodes the ability to repel a set of pods, allowing you to mark nodes as unavailable for certain pods. A
+common use case for taints is to prevent pods from being scheduled on nodes undergoing maintenance. Tolerations are
+applied to pods and allow them to schedule onto nodes with matching taints. Once a taint is configured, a node does not
+accept any pods that do not tolerate it.
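+
+For reference, outside of Palette you could taint a node directly with kubectl. Palette manages this for you at the
+node pool level instead. In the following sketch, the node name, key, and value are illustrative placeholders.
+
+```shell
+# Taint a node so that pods without a matching toleration are not scheduled onto it.
+kubectl taint nodes node-1 key1=value1:NoSchedule
+
+# Remove the taint by appending a dash to the effect.
+kubectl taint nodes node-1 key1=value1:NoSchedule-
+```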
-Node affinity is a property of Pods that attracts them to a set of nodes (either as a preference or a hard requirement.
-Taints are the opposite -- they allow a node to repel a set of pods.
+:::tip
-Tolerations are applied to pods and allow (but do not require) the pods to schedule onto nodes with matching taints.
+You can think of taints as having the opposite effect to [Node Labels](./node-labels.md). Taints allow you to mark nodes
+as not accepting certain pods, while node labels allow you to specify that your pods should only be scheduled on certain
+nodes.
-Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints
-are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints.
+:::
-Palette enables Taints to be applied to a node pool to restrict a set of intolerant pods getting scheduled to a Palette
-node pool. Taints can be applied during initial provisioning of the cluster and modified later.
+Palette allows you to apply taints during cluster provisioning. Once the cluster is in a healthy state, taints can be
+modified on the **Nodes** tab of the cluster details page.
-### Apply Taints to Nodes
+This guide covers the Palette UI flow.
-Taints can be applied to worker pools while creating a new cluster from the node pool configuration page as follows:
+:::info
-- Enable the “Taint” select button.
-- To apply the Taint, set the following parameters:
- - Key: Custom key for the Taint
- - Value: Custom value for the Taint key
- - Effect: The effects define what will happen to the pods that do not tolerate a Taint. There are 3 Taint effects:
- - NoSchedule: A pod that cannot tolerate the node Taint, should not be scheduled to the node.
- - PreferNoSchedule: The system will avoid placing a non-tolerant pod to the tainted node but is not guaranteed.
- - NoExecute: New pods will not be scheduled on the node, and existing pods on the node, if any will be evicted if
- they do not tolerate the Taint.
+Taints can also be applied to node pools using the Spectro Cloud
+[Terraform provider](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs).
-Eg: Key = key1; Value = value1; Effect = NoSchedule
+:::
-Taints can also be updated on a running cluster by editing a worker node pool from the 'Nodes' tab of the cluster
-details page.
+## Prerequisites
-## Labels
+- A [Palette](https://console.spectrocloud.com) account with the permissions to create cluster profiles and manage
+ clusters. Refer to the [Roles and Permissions](../../user-management/palette-rbac/project-scope-roles-permissions.md)
+ guide for more information.
+- [kubectl](https://kubernetes.io/docs/reference/kubectl/) or [K9s](https://k9scli.io/) installed locally.
-You can constrain a Pod to only run on a particular set of Node(s). There are several ways to do this and the
-recommended approaches such as, nodeSelector, node affinity, etc all use label selectors to facilitate the selection.
-Generally, such constraints are unnecessary, as the scheduler will automatically do a reasonable placement (e.g. spread
-your pods across nodes so as not place the pod on a node with insufficient free resources, etc.) but there are some
-circumstances where you may want to control which node the pod deploys to - for example to ensure that a pod ends up on
-a machine with an SSD attached to it, or to co-locate pods from two different services that communicate a lot into the
-same availability zone.
+## Enablement
-Palette enables our users to Label the nodes of a control plane and worker pool by using key/value pairs. These labels
-do not directly imply anything to the semantics of the core system but are intended to be used by users to drive use
-cases where pod affinity to specific nodes is desired. Labels can be attached to node pools in a cluster during creation
-and can be subsequently added and modified at any time. Each node pool can have a set of key/value labels defined. The
-key must be unique across all node pools for a given cluster.
+1. Log in to [Palette](https://console.spectrocloud.com).
-### Apply Labels to Nodes
+2. Navigate to the left **Main Menu** and select **Profiles**.
-Labels are optional and can be specified in the **Additional Labels** field of the node pool configuration form. Specify
-one or more values as 'key:value'. You can specify labels initially during cluster provisioning and update them any time
-by editing a node pool from the **Nodes** tab of the cluster details page.
+3. Create a cluster profile to deploy to your environment. Refer to the
+ [Create a Full Profile](../../profiles/cluster-profiles/create-cluster-profiles/create-full-profile.md) guide for
+ more information.
+
+4. Add a manifest to your cluster profile with a custom workload of your choice. Refer to the
+   [Add a Manifest](../../profiles/cluster-profiles/create-cluster-profiles/create-addon-profile/create-manifest-addon.md)
+   guide for additional guidance.
+
+5. Add pod tolerations to the pod specification of your manifest. Refer to the
+ [Taints and Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) official
+ documentation page for more details.
+
+ - Specify a custom **key** and custom **value**.
+ - Palette supports the `Equal` **operator**.
+   - The **effect** defines what happens to pods that do not tolerate a taint. Kubernetes provides three taint
+     effects.
+
+     | **Effect**         | **Description**                                                                                                               |
+     | ------------------ | ----------------------------------------------------------------------------------------------------------------------------- |
+     | `NoSchedule`       | Pods that cannot tolerate the node taint are not scheduled onto the node.                                                       |
+     | `PreferNoSchedule` | The system avoids placing a non-tolerant pod on the tainted node, but this is not guaranteed.                                   |
+     | `NoExecute`        | New pods are not scheduled on the node, and existing pods on the node, if any, are evicted if they do not tolerate the taint.   |
+
+ ```yaml
+ tolerations:
+ - key: "key1"
+ operator: "Equal"
+ value: "value1"
+ effect: "NoExecute"
+ ```
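+
+   For reference, a complete pod specification that carries this toleration might look like the following minimal
+   sketch. The pod name and `nginx` image are illustrative placeholders for your workload.
+
+   ```yaml
+   apiVersion: v1
+   kind: Pod
+   metadata:
+     name: nginx-tolerant
+   spec:
+     containers:
+       - name: nginx
+         image: nginx:1.25
+     # Allow this pod to schedule onto nodes tainted with key1=value1:NoExecute.
+     tolerations:
+       - key: "key1"
+         operator: "Equal"
+         value: "value1"
+         effect: "NoExecute"
+   ```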
+
+ :::info
+
+ When using packs or Helm charts, tolerations can only be specified if they are exposed for configuration in the
+ `values.yaml` file.
+
+ :::
+
+6. Save the changes made to your cluster profile.
+
+7. Navigate to the left **Main Menu** and select **Clusters**.
+
+8. Click on **Add New Cluster**.
+
+9. Fill in the **Basic Information** for your cluster and click **Next**.
+
+10. On the **Cluster Profile** tab, select the cluster profile you previously created. Click **Next**.
+
+11. Select a **Subscription**, **Region**, and **SSH Key** on the **Cluster Config** tab. Click **Next**.
+
+12. On the **Nodes Config** tab, configure your control plane pool and worker pools by providing the instance type,
+    availability zones, and disk size.
+
+13. Both the control plane pool and worker pool provide a **Taints (Optional)** section. Click on **Add New Taint** and
+    fill in the toleration values specified in your cluster profile. Click on **Next**.
+
+ ![Screenshot of adding taints during cluster creation](/clusters_cluster-management_taints_cluster-creation-taints.webp)
+
+ :::info
+
+ Taints can also be updated on a deployed cluster by editing a worker node pool from the **Nodes** tab of the cluster
+ details page.
+
+ :::
+
+14. Accept the default settings on the **Cluster Settings** tab and click on **Validate**.
+
+15. Click on **Finish Configuration** and deploy your cluster.
+
+ :::further
+
+ Refer to our [Deploy a Cluster](../../tutorials/cluster-deployment/public-cloud/deploy-k8s-cluster.md) tutorial for
+ detailed guidance on how to deploy a cluster with Palette using Amazon Web Services (AWS), Microsoft Azure, or
+ Google Cloud Platform (GCP) cloud providers.
+
+ :::
+
+## Validate
+
+You can follow these steps to validate that your taints and tolerations are applied successfully.
+
+1. Log in to [Palette](https://console.spectrocloud.com).
+
+2. Navigate to the left **Main Menu** and select **Clusters**.
+
+3. Select the cluster you deployed, and download the [kubeconfig](./kubeconfig.md) file.
+
+ ![Screenshot of kubeconfig file download](/clusters_cluster-management_taints_kubeconfig-download.webp)
+
+4. Open a terminal window and set the environment variable `KUBECONFIG` to point to the kubeconfig file you downloaded.
+
+   ```shell
+ export KUBECONFIG=~/Downloads/admin.azure-cluster.kubeconfig
+ ```
+
+5. Confirm the cluster deployment process has scheduled your pods as expected. Remember that only pods with matching
+ tolerations can be scheduled on nodes with configured taints.
+
+   ```shell
+ kubectl get pods --all-namespaces --output wide --watch
+ ```
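+
+   You can also confirm that the taints are present on the nodes. For example:
+
+   ```shell
+   # Display the taints configured on each node.
+   kubectl get nodes --output custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
+
+   # Or inspect a single node in detail (illustrative node name).
+   kubectl describe node node-1
+   ```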
+
+ :::tip
+
+ For a more user-friendly experience, consider using [K9s](https://k9scli.io/) or a similar tool to explore your
+ cluster workloads.
+
+ :::
diff --git a/docs/docs-content/clusters/data-center/openstack.md b/docs/docs-content/clusters/data-center/openstack.md
index ebf4e6385d..e0cdeb17c5 100644
--- a/docs/docs-content/clusters/data-center/openstack.md
+++ b/docs/docs-content/clusters/data-center/openstack.md
@@ -382,31 +382,31 @@ The following steps need to be performed to provision a new OpenStack cluster:
### Control Plane Pool
-| **Parameter** | **Description** |
-| ---------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| **Name** | A descriptive name for the node pool. |
-| **Size** | Number of VMs to be provisioned for the node pool. For the control plane pool, this number can be 1, 3, or 5. |
-| **Allow worker capability** | Select this option for allowing workloads to be provisioned on control plane nodes. |
-| **[Labels](../cluster-management/taints.md#labels)** | Add a label to apply placement constraints on a pod, such as a node eligible for receiving the workload. |
-| **[Taints](../cluster-management/taints.md#taints)** | To set toleration to pods and allow (but do not require) the pods to schedule onto nodes with matching taints. |
-| **Instance type** | Select the compute instance type to be used for all nodes in the node pool. |
-| **Availability Zones** | Choose one or more availability zones. Palette provides fault tolerance to guard against hardware failures, network failures, etc., by provisioning nodes across availability zones if multiple zones are selected. |
-| **Disk Size** | Give the required storage size |
+| **Parameter** | **Description** |
+| -------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| **Name** | A descriptive name for the node pool. |
+| **Size** | Number of VMs to be provisioned for the node pool. For the control plane pool, this number can be 1, 3, or 5. |
+| **Allow worker capability** | Select this option for allowing workloads to be provisioned on control plane nodes. |
+| **[Labels](../cluster-management/node-labels.md)** | Add a label to apply placement constraints to a pod, such as marking a node eligible to receive the workload.                                                                                                         |
+| **[Taints](../cluster-management/taints.md)**      | Sets tolerations on pods, allowing (but not requiring) them to schedule onto nodes with matching taints.                                                                                                               |
+| **Instance type** | Select the compute instance type to be used for all nodes in the node pool. |
+| **Availability Zones** | Choose one or more availability zones. Palette provides fault tolerance to guard against hardware failures, network failures, etc., by provisioning nodes across availability zones if multiple zones are selected. |
+| **Disk Size** | Give the required storage size |
### Worker Pool
-| **Parameter** | **Description** |
-| ---------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| **Name** | A descriptive name for the node pool. |
-| **Enable Autoscaler** | You can enable the autoscaler, by toggling the **Enable Autoscaler** button. Autoscaler scales up and down resources between the defined minimum and the maximum number of nodes to optimize resource utilization. |
-| | Set the scaling limit by setting the **Minimum Size** and **Maximum Size**, as per the workload the number of nods will scale up from minimum set value to maximum set value and the scale down from maximum set value to minimum set value |
-| **Size** | Number of VMs to be provisioned for the node pool. |
-| **Rolling Update** | Rolling update has two available options. The expand option launches a new node first, then shuts down old one. The contract option shuts down a old one first, then launches new one. |
-| **[Labels](../cluster-management/taints.md#labels)** | Add a label to apply placement constraints on a pod, such as a node eligible for receiving the workload. |
-| **[Taints](../cluster-management/taints.md#taints)** | To set toleration to pods and allow (but do not require) the pods to schedule onto nodes with matching taints. |
-| **Instance type** | Select the compute instance type to be used for all nodes in the node pool. |
-| **Availability Zones** | Choose one or more availability zones. Palette provides fault tolerance to guard against hardware failures, network failures, etc., by provisioning nodes across availability zones if multiple zones are selected. |
-| **Disk Size** | Provide the required storage size |
+| **Parameter** | **Description** |
+| -------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| **Name** | A descriptive name for the node pool. |
+| **Enable Autoscaler**                              | You can enable the autoscaler by toggling the **Enable Autoscaler** button. The autoscaler scales resources up and down between the defined minimum and maximum number of nodes to optimize resource utilization.                            |
+|                                                    | Set the scaling limit by setting the **Minimum Size** and **Maximum Size**. Depending on the workload, the number of nodes scales up from the minimum set value to the maximum set value, and back down from the maximum to the minimum.     |
+| **Size** | Number of VMs to be provisioned for the node pool. |
+| **Rolling Update**                                 | Rolling update has two available options. The expand option launches a new node first, then shuts down the old one. The contract option shuts down an old node first, then launches a new one.                                              |
+| **[Labels](../cluster-management/node-labels.md)** | Add a label to apply placement constraints to a pod, such as marking a node eligible to receive the workload.                                                                                                                                |
+| **[Taints](../cluster-management/taints.md)**      | Sets tolerations on pods, allowing (but not requiring) them to schedule onto nodes with matching taints.                                                                                                                                     |
+| **Instance type** | Select the compute instance type to be used for all nodes in the node pool. |
+| **Availability Zones** | Choose one or more availability zones. Palette provides fault tolerance to guard against hardware failures, network failures, etc., by provisioning nodes across availability zones if multiple zones are selected. |
+| **Disk Size** | Provide the required storage size |
6. Configure the cluster policies/features.
diff --git a/docs/docs-content/clusters/public-cloud/aws/eks.md b/docs/docs-content/clusters/public-cloud/aws/eks.md
index 12c401daf5..dec6f1cf14 100644
--- a/docs/docs-content/clusters/public-cloud/aws/eks.md
+++ b/docs/docs-content/clusters/public-cloud/aws/eks.md
@@ -123,12 +123,12 @@ Use the following steps to deploy an EKS cluster on AWS.
#### Node Configuration Settings
- | **Parameter** | **Description** |
- | ------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
- | **Node pool name** | A descriptive name for the node pool. |
- | **Number of nodes in the pool** | Specify the number of nodes in the worker pool. |
- | **Additional Labels** | You can add optional labels to nodes in key-value format. For more information about applying labels, review [Apply Labels to Nodes](../../cluster-management/taints.md#apply-labels-to-nodes). Example: `"environment": "production"` |
- | **Taints** | You can apply optional taint labels to a node pool during cluster creation or edit taint labels on an existing cluster. Review the [Node Pool](../../cluster-management/node-pool.md) management page and [Apply Taints to Nodes](../../cluster-management/taints.md#apply-taints-to-nodes) page to learn more. Toggle the **Taint** button to create a taint label. When tainting is enabled, you need to provide a custom key-value pair. Use the **drop-down Menu** to choose one of the following **Effect** options: <br> **NoSchedule** - Pods are not scheduled onto nodes with this taint. <br> **PreferNoSchedule** - Kubernetes attempts to avoid scheduling pods onto nodes with this taint, but scheduling is not prohibited. <br> **NoExecute** - Existing pods on nodes with this taint are evicted. |
+ | **Parameter** | **Description** |
+ | ------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+ | **Node pool name** | A descriptive name for the node pool. |
+ | **Number of nodes in the pool** | Specify the number of nodes in the worker pool. |
+ | **Additional Labels** | You can add optional labels to nodes in key-value format. For more information about applying labels, review the [Node Labels](../../cluster-management/node-labels.md) guide. Example: `"environment": "production"` |
+ | **Taints** | You can apply optional taint labels to a node pool during cluster creation or edit taint labels on an existing cluster. Review the [Node Pool](../../cluster-management/node-pool.md) management page and [Taints and Tolerations](../../cluster-management/taints.md) guide to learn more. Toggle the **Taint** button to create a taint label. When tainting is enabled, you need to provide a custom key-value pair. Use the **drop-down Menu** to choose one of the following **Effect** options: <br> **NoSchedule** - Pods are not scheduled onto nodes with this taint. <br> **PreferNoSchedule** - Kubernetes attempts to avoid scheduling pods onto nodes with this taint, but scheduling is not prohibited. <br> **NoExecute** - Existing pods on nodes with this taint are evicted. |
#### Cloud Configuration settings
diff --git a/docs/docs-content/clusters/public-cloud/azure/aks.md b/docs/docs-content/clusters/public-cloud/azure/aks.md
index 9505baa92d..117892c679 100644
--- a/docs/docs-content/clusters/public-cloud/azure/aks.md
+++ b/docs/docs-content/clusters/public-cloud/azure/aks.md
@@ -215,7 +215,7 @@ explains how you can create an Azure AKS cluster managed by Palette.
| **Enable Autoscaler** | Whether Palette should scale the pool horizontally based on its per-node workload counts. If enabled, instead of the **Number of nodes in the pool** parameter, you will have to configure the **Minimum size** and **Maximum size** parameters, which will allow AKS to adjust the node pool size based on the workload. You can set the node count to a minimum of zero and a maximum of 1000. Setting both parameters to the same value results in a static node count. |
| **System Node Pool** | Sets the pool to be a system node pool. |
| **Number of nodes in the pool** | A statically defined number of nodes in the system pool. |
- | **Additional Labels** | Optional node labels in the key-value format. To learn more, review [Apply Labels to Nodes](../../cluster-management/taints.md#labels). Example: `environment:production`. |
+ | **Additional Labels** | Optional node labels in the key-value format. To learn more, review the [Node Labels](../../cluster-management/node-labels.md) guide. Example: `environment:production`. |
#### System Node Pool Cloud Configuration
@@ -231,14 +231,14 @@ explains how you can create an Azure AKS cluster managed by Palette.
The following table describes how to configure a worker node pool.
- | **Parameter** | **Description** |
- | ------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
- | **Node pool name** | A descriptive name for the node pool. |
- | **Enable Autoscaler** | Whether Palette should scale the pool horizontally based on its per-node workload counts. If enabled, instead of the **Number of nodes in the pool** parameter, you will have to configure the **Minimum size** and **Maximum size** parameters, which will allow AKS to adjust the node pool size based on the workload. You can set the node count to a minimum of zero and a maximum of 1000. Setting both parameters to the same value results in a static node count. |
- | **System Node Pool** | Sets the pool to be a system node pool. |
- | **Number of nodes in the pool** | A statically defined number of nodes in the system pool. |
- | **Additional Labels** | Optional node labels in the key-value format. To learn more, review [Apply Labels to Nodes](../../cluster-management/taints.md#labels). Example: `environment:production`. |
- | **Taints** | You can apply optional taint labels to a worker node pool. Review the [Node Pool](../../cluster-management/node-pool.md) and [Apply Taints to Nodes](../../cluster-management/taints.md#apply-taints-to-nodes) guides to learn more. <br> Toggle the **Taint** button to create a taint label. When tainting is enabled, you need to provide a custom key-value pair. Use the **drop-down Menu** to choose one of the following **Effect** options: <br> - **NoSchedule**—Pods are not scheduled onto nodes with this taint. <br> - **PreferNoSchedule**—Kubernetes attempts to avoid scheduling pods onto nodes with this taint, but scheduling is not prohibited. <br> - **NoExecute**—Existing pods on nodes with this taint are evicted. |
+ | **Parameter** | **Description** |
+ | ------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+ | **Node pool name** | A descriptive name for the node pool. |
+ | **Enable Autoscaler** | Whether Palette should scale the pool horizontally based on its per-node workload counts. If enabled, instead of the **Number of nodes in the pool** parameter, you will have to configure the **Minimum size** and **Maximum size** parameters, which will allow AKS to adjust the node pool size based on the workload. You can set the node count to a minimum of zero and a maximum of 1000. Setting both parameters to the same value results in a static node count. |
+ | **System Node Pool** | Sets the pool to be a system node pool. |
+ | **Number of nodes in the pool** | A statically defined number of nodes in the system pool. |
+ | **Additional Labels** | Optional node labels in the key-value format. To learn more, review the [Node Labels](../../cluster-management/node-labels.md) guide. Example: `environment:production`. |
+ | **Taints** | You can apply optional taint labels to a worker node pool. Review the [Node Pool](../../cluster-management/node-pool.md) and [Taints and Tolerations](../../cluster-management/taints.md) guides to learn more. <br> Toggle the **Taint** button to create a taint label. When tainting is enabled, you need to provide a custom key-value pair. Use the **drop-down Menu** to choose one of the following **Effect** options: <br> - **NoSchedule**—Pods are not scheduled onto nodes with this taint. <br> - **PreferNoSchedule**—Kubernetes attempts to avoid scheduling pods onto nodes with this taint, but scheduling is not prohibited. <br> - **NoExecute**—Existing pods on nodes with this taint are evicted. |
#### Worker Node Pool Cloud Configuration
diff --git a/docs/docs-content/clusters/public-cloud/azure/create-azure-cluster.md b/docs/docs-content/clusters/public-cloud/azure/create-azure-cluster.md
index 1f02d93570..e6181d03d2 100644
--- a/docs/docs-content/clusters/public-cloud/azure/create-azure-cluster.md
+++ b/docs/docs-content/clusters/public-cloud/azure/create-azure-cluster.md
@@ -145,13 +145,13 @@ Standard_NC12s_v3 can be configured for Graphics Processing Unit (GPU) workloads
#### Control Plane Pool Configuration Settings
-| **Parameter** | **Description** |
-| ------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| **Node pool name** | A descriptive name for the node pool. |
-| **Number of nodes in the pool** | Specify the number of nodes in the control plane pool. |
-| **Allow worker capability** | Select this option to allow workloads to be provisioned on control plane nodes. |
-| **Additional Labels** | You can add optional labels to nodes in key-value format. To learn more, review [Apply Labels to Nodes](../../cluster-management/taints.md#labels). Example: `environment:production`. |
-| **Taints** | You can apply optional taint labels to a node pool during cluster creation or edit taint labels on an existing cluster. Review the [Node Pool](../../cluster-management/node-pool.md) management page and [Apply Taints to Nodes](../../cluster-management/taints.md#apply-taints-to-nodes) page to learn more. Toggle the **Taint** button to create a taint label. When tainting is enabled, you need to provide a custom key-value pair. Use the **drop-down Menu** to choose one of the following **Effect** options: <br> **NoSchedule** - Pods are not scheduled onto nodes with this taint. <br> **PreferNoSchedule** - Kubernetes attempts to avoid scheduling pods onto nodes with this taint, but scheduling is not prohibited. <br> **NoExecute** - Existing pods on nodes with this taint are evicted. |
+| **Parameter** | **Description** |
+| ------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| **Node pool name** | A descriptive name for the node pool. |
+| **Number of nodes in the pool** | Specify the number of nodes in the control plane pool. |
+| **Allow worker capability** | Select this option to allow workloads to be provisioned on control plane nodes. |
+| **Additional Labels** | You can add optional labels to nodes in key-value format. To learn more, review the [Node Labels](../../cluster-management/node-labels.md) guide. Example: `environment:production`. |
+| **Taints** | You can apply optional taint labels to a node pool during cluster creation or edit taint labels on an existing cluster. Review the [Node Pool](../../cluster-management/node-pool.md) management page and [Taints and Tolerations](../../cluster-management/taints.md) guide to learn more. Toggle the **Taint** button to create a taint label. When tainting is enabled, you need to provide a custom key-value pair. Use the **drop-down Menu** to choose one of the following **Effect** options: <br> **NoSchedule** - Pods are not scheduled onto nodes with this taint. <br> **PreferNoSchedule** - Kubernetes attempts to avoid scheduling pods onto nodes with this taint, but scheduling is not prohibited. <br> **NoExecute** - Existing pods on nodes with this taint are evicted. |
#### Cloud Configuration Settings for Control Plane Pool
@@ -171,8 +171,8 @@ You can select **Remove** at right to remove the worker node if all you want is
|**Number of nodes in the pool** | Specify the number of nodes in the worker pool.|
|**Node repave interval** | Optionally, you can specify the preferred time interval for Palette to perform a rolling upgrade on nodes when it detects a change in the Kubeadm configuration file. |
|**Rolling update** | These options allow you to control the sequence of operations during a node pool update. Choose the **Expand first** option to add new nodes with updated configurations to the node pool before the existing nodes are removed. Choose **Contract first** to remove existing nodes from the node pool before the new nodes with updated configurations are added. |
- |**Additional Labels** | You can add optional labels to nodes in key-value format. For more information about applying labels, review [Apply Labels to Nodes](../../cluster-management/taints.md#apply-labels-to-nodes). Example: `environment:production`. |
- |**Taints** | You can apply optional taint labels to a node pool during cluster creation or edit taint labels on an existing cluster. To learn more, review the [Node Pool](../../cluster-management/node-pool.md) management page and [Apply Taints to Nodes](../../cluster-management/taints.md#apply-taints-to-nodes) page. Toggle the **Taint** button to create a taint label. When tainting is enabled, you need to provide a custom key-value pair. Use the **drop-down Menu** to choose one of the following **Effect** options: <br> **NoSchedule** - Pods are not scheduled onto nodes with this taint. <br> **PreferNoSchedule** - Kubernetes attempts to avoid scheduling pods onto nodes with this taint, but scheduling is not prohibited. <br> **NoExecute** - Existing pods on nodes with this taint are evicted.|
+ |**Additional Labels** | You can add optional labels to nodes in key-value format. For more information about applying labels, review the [Node Labels](../../cluster-management/node-labels.md) guide. Example: `environment:production`. |
+ |**Taints** | You can apply optional taint labels to a node pool during cluster creation or edit taint labels on an existing cluster. To learn more, review the [Node Pool](../../cluster-management/node-pool.md) management page and [Taints and Tolerations](../../cluster-management/taints.md) guide. Toggle the **Taint** button to create a taint label. When tainting is enabled, you need to provide a custom key-value pair. Use the **drop-down Menu** to choose one of the following **Effect** options: <br> **NoSchedule** - Pods are not scheduled onto nodes with this taint. <br> **PreferNoSchedule** - Kubernetes attempts to avoid scheduling pods onto nodes with this taint, but scheduling is not prohibited. <br> **NoExecute** - Existing pods on nodes with this taint are evicted.|
#### Cloud Configuration Settings for Worker Pool
diff --git a/docs/docs-content/tutorials/edge/deploy-cluster.md b/docs/docs-content/tutorials/edge/deploy-cluster.md
index 4091430eab..958ec3861a 100644
--- a/docs/docs-content/tutorials/edge/deploy-cluster.md
+++ b/docs/docs-content/tutorials/edge/deploy-cluster.md
@@ -997,13 +997,13 @@ and the set of worker nodes is the worker pool.
Provide the following details for the control plane pool.
-| **Field** | **Value for the control-plane-pool** |
-| ------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------- |
-| Node pool name | control-plane-pool |
-| Allow worker capability | Checked |
-| Additional Labels (Optional) | None |
-| [Taints](../../clusters/cluster-management/taints.md#taints) | Off |
-| Pool Configuration > Edge Hosts                               | Choose one of the registered Edge hosts. <br> Palette will automatically display the Nic Name for the selected host. |
+| **Field** | **Value for the control-plane-pool** |
+| ----------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------- |
+| Node pool name | control-plane-pool |
+| Allow worker capability | Checked |
+| Additional Labels (Optional) | None |
+| [Taints](../../clusters/cluster-management/taints.md) | Off |
+| Pool Configuration > Edge Hosts                       | Choose one of the registered Edge hosts. <br> Palette will automatically display the Nic Name for the selected host. |
The screenshot below shows an Edge host added to the control plane pool.
diff --git a/static/assets/docs/images/clusters_cluster-management_node-labels_cluster-creation-labels.webp b/static/assets/docs/images/clusters_cluster-management_node-labels_cluster-creation-labels.webp
new file mode 100644
index 0000000000..97e2d07d3f
Binary files /dev/null and b/static/assets/docs/images/clusters_cluster-management_node-labels_cluster-creation-labels.webp differ
diff --git a/static/assets/docs/images/clusters_cluster-management_node-labels_kubeconfig-download.webp b/static/assets/docs/images/clusters_cluster-management_node-labels_kubeconfig-download.webp
new file mode 100644
index 0000000000..fb94321a4c
Binary files /dev/null and b/static/assets/docs/images/clusters_cluster-management_node-labels_kubeconfig-download.webp differ
diff --git a/static/assets/docs/images/clusters_cluster-management_taints_cluster-creation-taints.webp b/static/assets/docs/images/clusters_cluster-management_taints_cluster-creation-taints.webp
new file mode 100644
index 0000000000..60c2f1a733
Binary files /dev/null and b/static/assets/docs/images/clusters_cluster-management_taints_cluster-creation-taints.webp differ
diff --git a/static/assets/docs/images/clusters_cluster-management_taints_kubeconfig-download.webp b/static/assets/docs/images/clusters_cluster-management_taints_kubeconfig-download.webp
new file mode 100644
index 0000000000..fb94321a4c
Binary files /dev/null and b/static/assets/docs/images/clusters_cluster-management_taints_kubeconfig-download.webp differ