diff --git a/docs/docs-content/audit-logs/kube-api-audit-logging.md b/docs/docs-content/audit-logs/kube-api-audit-logging.md index 013780ab1f..2989afc7e0 100644 --- a/docs/docs-content/audit-logs/kube-api-audit-logging.md +++ b/docs/docs-content/audit-logs/kube-api-audit-logging.md @@ -37,7 +37,7 @@ request to facilitate auditing. Memory consumption depends on the audit logging - Write access to the file system. -- Remote access to the Kubernetes cluster master nodes. +- Remote access to the Kubernetes cluster control plane nodes. ## Enable Auditing @@ -50,7 +50,7 @@ levels, visit the Kubernetes API 2. Identify one of your cluster contro-plane nodes. You can find a cluster node by navigating to the left **Main Menu** and selecting **Clusters**. Click on your cluster to access the details pages and click on the **Nodes** tab. The tab - contains information about each pool, select a node from the **Master Pool** to view its IP address. + contains information about each pool, select a node from the **Control Plane Pool** to view its IP address. 3. SSH into one of your control-plane nodes using its IP address and the SSH key you specified during the cluster creation process. @@ -175,7 +175,7 @@ parameter. 2. Identify one of your cluster contro-plane nodes. You find a cluster node by navigating to the left **Main Menu** and selecting **Clusters**. Click on your cluster to access the details pages and click on the **Nodes** tab. The tab - contains information about each pool, select a node from the **Master Pool** to view its IP address. + contains information about each pool, select a node from the **Control Plane Pool** to view its IP address. 3. SSH into one of your control-plane nodes using its IP address and the SSH key you specified during the cluster creation process. diff --git a/docs/docs-content/clusters/cluster-management/cloud-cost.md b/docs/docs-content/clusters/cluster-management/cloud-cost.md index 1e8e01944e..ff153b2985 100644 --- a/docs/docs-content/clusters/cluster-management/cloud-cost.md +++ b/docs/docs-content/clusters/cluster-management/cloud-cost.md @@ -18,22 +18,22 @@ done based on the instance type and storage type selected for each machine pool. 
| | **FORMULAS FOR CALCULATION** | | --- | ------------------------------------------------------------------------------------------- | | | Machine Pool Cost = ( Number of Nodes X Instance Price ) + ( Storage Size X Storage Price ) | -| | Cluster Cloud Cost = Master Pool Cost + Worker Pool Cost | +| | Cluster Cloud Cost = control plane pool cost + worker pool cost | **Example 1:** Let's assume that a cluster ‘demo’ is launched with two machine pools with the following configuration: -| MACHINE POOL | SIZE | INSTANCE TYPE WITH COST | ROOT DISK WITH COST | -| ------------ | ---- | --------------------------- | ---------------------------- | -| MASTER POOL | 3 | AWS t2.medium($0.0496/hour) | 60GB - gp2($0.00014/GB/hour) | -| WORKER POOL | 3 | AWS t2.large($0.0992/hour) | 60GB - gp2($0.00014/GB/hour) | +| MACHINE POOL | SIZE | INSTANCE TYPE WITH COST | ROOT DISK WITH COST | +| ------------- | ---- | --------------------------- | ---------------------------- | +| Control Plane | 3 | AWS t2.medium($0.0496/hour) | 60GB - gp2($0.00014/GB/hour) | +| Worker Pool | 3 | AWS t2.large($0.0992/hour) | 60GB - gp2($0.00014/GB/hour) | -| Calculation for the above scenario | -| --------------------------------------------------------------------- | -| master-pool cost = ( 3 X $0.0496 ) + ( 60 X $0.00014 ) = $0.1572/hour | -| worker-pool cost = ( 3 X $0.0992 ) + ( 60 X $0.00014 ) = $0.306/hour | -| Cluster Cloud Cost = $0.1572 + $0.306 = $0.4632/hour | +| Calculation for the above scenario | +| ---------------------------------------------------------------------------- | +| control-plane-pool cost = ( 3 X $0.0496 ) + ( 60 X $0.00014 ) = $0.1572/hour | +| worker-pool cost = ( 3 X $0.0992 ) + ( 60 X $0.00014 ) = $0.306/hour | +| Cluster Cloud Cost = $0.1572 + $0.306 = $0.4632/hour | :::info @@ -63,7 +63,7 @@ category. **Example 2** -For the cluster configuration of master-pool & worker-pool considers in example 1, +For the cluster configuration of control-plane-pool and worker-pool considered in Example 1, | Calculation for the example scenario | | ------------------------------------------------------------------------------- | diff --git a/docs/docs-content/clusters/cluster-management/compliance-scan.md b/docs/docs-content/clusters/cluster-management/compliance-scan.md index edf5eaf07c..e79dfff5de 100644 --- a/docs/docs-content/clusters/cluster-management/compliance-scan.md +++ b/docs/docs-content/clusters/cluster-management/compliance-scan.md @@ -30,9 +30,9 @@ This scan examines the compliance of deployed Kubernetes security features again Kubernetes Benchmarks are consensus-driven security guidelines for the Kubernetes. Different releases of the CIS benchmark cover different releases of Kubernetes. By default, Kubernetes configuration security will determine the test set based on the Kubernetes version running on the cluster being scanned. Internally, Palette leverages an open-source -tool called KubeBench from Aqua Security to perform this scan. Scans are run against master and worker nodes of the -Kubernetes cluster, and a combined report is made available on the UI. Users can filter the report to view only the -master or worker results if required. +tool called KubeBench from Aqua Security to perform this scan. Scans are run against control plane and worker nodes of +the Kubernetes cluster, and a combined report is made available on the UI. Users can filter the report to view only the +control plane or worker results if required. All the tests in the report are marked as Scored or Not Scored.
The ones marked Not Scored cannot be automatically run, and it is suggested to be tested manually. diff --git a/docs/docs-content/clusters/cluster-management/node-pool.md b/docs/docs-content/clusters/cluster-management/node-pool.md index 76a76a8107..93a9d2c178 100644 --- a/docs/docs-content/clusters/cluster-management/node-pool.md +++ b/docs/docs-content/clusters/cluster-management/node-pool.md @@ -40,9 +40,10 @@ Different types of repaving operations may occur, depending on what causes them: Kubernetes layer impact all nodes, such as when upgrading to a different Kubernetes version. All nodes across all pools are sequentially repaved starting with the control plane. -You can customize the repave time interval for all node pools except the master pool. The default repave time interval -is 0 seconds. You can adjust the node repave time interval during or after cluster creation. If you need to modify the -repave time interval post-cluster creation, follow the [Change a Node Pool](#change-a-node-pool) instructions below. +You can customize the repave time interval for all node pools except the control plane pool. The default repave time +interval is 0 seconds. You can adjust the node repave time interval during or after cluster creation. If you need to +modify the repave time interval post-cluster creation, follow the [Change a Node Pool](#change-a-node-pool) instructions +below. ## Node Pool Configuration Settings @@ -51,13 +52,13 @@ settings may not be available.
-### Master Node Pool +### Control Plane Node Pool | **Property** | **Description** | | ------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Node pool name** | A descriptive name for the node pool. | -| **Number of nodes in the pool** | Number of nodes to be provisioned for the node pool. For the master pool, this number can be 1, 3, or 5. | -| **Allow worker capability** | Select this option to allow workloads to be provisioned on master nodes. | +| **Number of nodes in the pool** | Number of nodes to be provisioned for the node pool. For the control plane pool, this number can be 1, 3, or 5. | +| **Allow worker capability** | Select this option to allow workloads to be provisioned on control plane nodes. | | **Additional Labels** | Optional labels apply placement constraints on a pod. For example, you can add a label to make a node eligible to receive the workload. To learn more, refer to the [Overview on Labels](taints.md#labels). | | **Taints** | Sets toleration to pods and allows (but does not require) the pods to schedule onto nodes with matching taints. To learn more, refer to the [Overview on Taints](taints.md#taints). | | **Availability Zones** | The Availability Zones from which to select available servers for deployment. If you select multiple zones, Palette will deploy servers evenly across them as long as sufficient servers are available to do so. | diff --git a/docs/docs-content/clusters/cluster-management/reconfigure.md b/docs/docs-content/clusters/cluster-management/reconfigure.md index 59a3d4b294..32f9862f43 100644 --- a/docs/docs-content/clusters/cluster-management/reconfigure.md +++ b/docs/docs-content/clusters/cluster-management/reconfigure.md @@ -17,8 +17,8 @@ cluster: :::info -The master node pool is scaled from 1 to 3 or 3 to 5 nodes, etc. However, the scale-down operation is not supported for -master nodes. +The control plane node pool is scaled from 1 to 3 or 3 to 5 nodes, etc. However, the scale-down operation is not +supported for control plane nodes. ::: diff --git a/docs/docs-content/clusters/cluster-management/taints.md b/docs/docs-content/clusters/cluster-management/taints.md index 7e2fe8556a..e2b4e1a2f4 100644 --- a/docs/docs-content/clusters/cluster-management/taints.md +++ b/docs/docs-content/clusters/cluster-management/taints.md @@ -51,11 +51,11 @@ circumstances where you may want to control which node the pod deploys to - for a machine with an SSD attached to it, or to co-locate pods from two different services that communicate a lot into the same availability zone. -Palette enables our users to Label the nodes of a master and worker pool by using key/value pairs. These labels do not -directly imply anything to the semantics of the core system but are intended to be used by users to drive use cases -where pod affinity to specific nodes is desired. Labels can be attached to node pools in a cluster during creation and -can be subsequently added and modified at any time. Each node pool can have a set of key/value labels defined. The key -must be unique across all node pools for a given cluster. +Palette enables our users to Label the nodes of a control plane and worker pool by using key/value pairs. 
These labels +do not directly imply anything to the semantics of the core system but are intended to be used by users to drive use +cases where pod affinity to specific nodes is desired. Labels can be attached to node pools in a cluster during creation +and can be subsequently added and modified at any time. Each node pool can have a set of key/value labels defined. The +key must be unique across all node pools for a given cluster. ### Apply Labels to Nodes diff --git a/docs/docs-content/clusters/cluster-management/update-k8s-cluster.md b/docs/docs-content/clusters/cluster-management/update-k8s-cluster.md index fe8c10a7dc..b04b773051 100644 --- a/docs/docs-content/clusters/cluster-management/update-k8s-cluster.md +++ b/docs/docs-content/clusters/cluster-management/update-k8s-cluster.md @@ -99,7 +99,7 @@ cd tutorials/ Check out the following git tag. ```shell -git checkout v1.1.0 +git checkout v1.1.2 ``` Change the directory to the tutorial code. @@ -736,13 +736,13 @@ resource "spectrocloud_cluster_azure" "azure-cluster" { machine_pool { control_plane = true control_plane_as_worker = true - name = "master-pool" - count = var.azure_master_nodes.count - instance_type = var.azure_master_nodes.instance_type - azs = var.azure-use-azs ? var.azure_master_nodes.azs : [""] - is_system_node_pool = var.azure_master_nodes.is_system_node_pool + name = "control-plane-pool" + count = var.azure_control_plane_nodes.count + instance_type = var.azure_control_plane_nodes.instance_type + azs = var.azure-use-azs ? var.azure_control_plane_nodes.azs : [""] + is_system_node_pool = var.azure_control_plane_nodes.is_system_node_pool disk { - size_gb = var.azure_master_nodes.disk_size_gb + size_gb = var.azure_control_plane_nodes.disk_size_gb type = "Standard_LRS" } } diff --git a/docs/docs-content/clusters/clusters.md b/docs/docs-content/clusters/clusters.md index d16584b3af..8eea16aedc 100644 --- a/docs/docs-content/clusters/clusters.md +++ b/docs/docs-content/clusters/clusters.md @@ -65,14 +65,14 @@ available to the users to apply to their existing clusters at a time convenient ### Kubernetes Kubernetes components and configuration are hardened in accordance with the Kubernetes CIS Benchmark. Palette executes -Kubebench, a CIS Benchmark scanner by Aqua Security, for every Kubernetes pack to ensure the master and worker nodes are -configured securely. +Kubebench, a CIS Benchmark scanner by Aqua Security, for every Kubernetes pack to ensure the control plane and worker +nodes are configured securely. ### Cloud Infrastructure Palette follows security best practices recommended by the various cloud providers when provisioning and configuring the computing, network, and storage infrastructure for the Kubernetes clusters. These include practices such as isolating -master and worker nodes in dedicated network domains and limiting access through the use constructs like security +control plane and worker nodes in dedicated network domains and limiting access through the use of constructs like security groups. :::info diff --git a/docs/docs-content/clusters/data-center/maas/create-manage-maas-clusters.md b/docs/docs-content/clusters/data-center/maas/create-manage-maas-clusters.md index 39e4cb58bf..3975946057 100644 --- a/docs/docs-content/clusters/data-center/maas/create-manage-maas-clusters.md +++ b/docs/docs-content/clusters/data-center/maas/create-manage-maas-clusters.md @@ -52,11 +52,11 @@ To deploy a new MAAS cluster: 9. Select a domain from the **Domain drop-down Menu** and click **Next**. -10.
Configure the master and worker node pools. The following input fields apply to MAAS master and worker node pools. - For a description of input fields that are common across target platforms refer to the +10. Configure the control plane and worker node pools. The following input fields apply to MAAS control plane and worker + node pools. For a description of input fields that are common across target platforms refer to the [Node Pools](../../cluster-management/node-pool.md) management page. Click **Next** when you are done. - #### Master Pool configuration + #### Control Plane Pool configuration - Cloud configuration: diff --git a/docs/docs-content/clusters/data-center/nutanix/create-manage-nutanix-cluster.md b/docs/docs-content/clusters/data-center/nutanix/create-manage-nutanix-cluster.md index 418b903998..7b2ce15d2f 100644 --- a/docs/docs-content/clusters/data-center/nutanix/create-manage-nutanix-cluster.md +++ b/docs/docs-content/clusters/data-center/nutanix/create-manage-nutanix-cluster.md @@ -87,12 +87,12 @@ The following applies when replacing variables within curly braces in the YAML c ::: -9. In the Node pool configuration YAML files for the master and worker pools, edit the files to replace each occurrence - of the variables within curly braces listed in the tables below with values that apply to your Nutanix cloud - environment. You can configure scaling in the Palette UI by specifying the number of nodes in the pool. This +9. In the Node pool configuration YAML files for the control plane and worker pools, edit the files to replace each + occurrence of the variables within curly braces listed in the tables below with values that apply to your Nutanix + cloud environment. You can configure scaling in the Palette UI by specifying the number of nodes in the pool. This corresponds to `replicas` in the YAML file. -#### Master Pool +#### Control Plane Pool | **Variable** | **Description** | | ---------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- | diff --git a/docs/docs-content/clusters/data-center/openstack.md b/docs/docs-content/clusters/data-center/openstack.md index f98c8c9ec0..65dd9777d3 100644 --- a/docs/docs-content/clusters/data-center/openstack.md +++ b/docs/docs-content/clusters/data-center/openstack.md @@ -619,16 +619,16 @@ The following steps need to be performed to provision a new OpenStack cluster: - Subnet CIDR - DNS Name Server -5. Configure the master and worker node pools. Fill out the input fields in the **Add node pool** page. The following - table contains an explanation of the available input parameters. +5. Configure the control plane and worker node pools. Fill out the input fields in the **Add node pool** page. The + following table contains an explanation of the available input parameters. -### Master Pool +### Control Plane Pool | **Parameter** | **Description** | | ---------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Name** | A descriptive name for the node pool. | -| **Size** | Number of VMs to be provisioned for the node pool. For the master pool, this number can be 1, 3, or 5. | -| **Allow worker capability** | Select this option for allowing workloads to be provisioned on master nodes. 
| +| **Size** | Number of VMs to be provisioned for the node pool. For the control plane pool, this number can be 1, 3, or 5. | +| **Allow worker capability** | Select this option for allowing workloads to be provisioned on control plane nodes. | | **[Labels](../cluster-management/taints.md#labels)** | Add a label to apply placement constraints on a pod, such as a node eligible for receiving the workload. | | **[Taints](../cluster-management/taints.md#taints)** | To set toleration to pods and allow (but do not require) the pods to schedule onto nodes with matching taints. | | **Instance type** | Select the compute instance type to be used for all nodes in the node pool. | diff --git a/docs/docs-content/clusters/data-center/vmware.md b/docs/docs-content/clusters/data-center/vmware.md index 7616d7dbdf..d7ad95d959 100644 --- a/docs/docs-content/clusters/data-center/vmware.md +++ b/docs/docs-content/clusters/data-center/vmware.md @@ -724,16 +724,16 @@ Use the following steps to provision a new VMware cluster. | **NTP Server (Optional)** | Setup time synchronization for all the running nodes. | | **IP Allocation strategy** | DHCP or Static IP | -5. Configure the master and worker node pools. Fill out the input fields in the **Add node pool** page. The following - table contains an explanation of the available input parameters. +5. Configure the control plane and worker node pools. Fill out the input fields in the **Add node pool** page. The + following table contains an explanation of the available input parameters. -### Master Pool +### Control Plane Pool | **Parameter** | **Description** | | ---------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Name** | A descriptive name for the node pool. | -| **Size** | Number of VMs to be provisioned for the node pool. For the master pool, this number can be 1, 3, or 5. | -| **Allow worker capability** | Select this option for allowing workloads to be provisioned on master nodes. | +| **Size** | Number of VMs to be provisioned for the node pool. For the control plane pool, this number can be 1, 3, or 5. | +| **Allow worker capability** | Select this option for allowing workloads to be provisioned on control plane nodes. | | **[Labels](../cluster-management/taints.md#labels)** | Add a label to apply placement constraints on a pod, such as a node eligible for receiving the workload. | | **[Taints](../cluster-management/taints.md#taints)** | To set toleration to pods and allow (but do not require) the pods to schedule onto nodes with matching taints. | | **Instance type** | Select the compute instance type to be used for all nodes in the node pool. | diff --git a/docs/docs-content/clusters/edge/site-deployment/deploy-cluster.md b/docs/docs-content/clusters/edge/site-deployment/deploy-cluster.md index 9da345570c..5bde78ee0a 100644 --- a/docs/docs-content/clusters/edge/site-deployment/deploy-cluster.md +++ b/docs/docs-content/clusters/edge/site-deployment/deploy-cluster.md @@ -884,22 +884,22 @@ Click **Next** to continue. ### Nodes configuration In this section, you will use the Edge hosts to create the cluster nodes. Use one of the Edge hosts as the control plane -node and the remaining two as worker nodes. In this example, the control plane node is called the master pool, and the -set of worker nodes is the worker pool. 
+node and the remaining two as worker nodes. In this example, the control plane node is called the control plane pool, +and the set of worker nodes is the worker pool. -Provide the following details for the master pool. +Provide the following details for the control plane pool. -| **Field** | **Value for the master-pool** | +| **Field** | **Value for the control-plane-pool** | | --------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------- | -| Node pool name | master-pool | +| Node pool name | control-plane-pool | | Allow worker capability | Checked | | Additional Labels (Optional) | None | | [Taints](../../cluster-management/taints.md#taints) | Off | | Pool Configuration > Edge Hosts | Choose one of the registered Edge hosts.
Palette will automatically display the Nic Name for the selected host. | -The screenshot below shows an Edge host added to the master pool. +The screenshot below shows an Edge host added to the control plane pool. -![Screenshot of an Edge host added to the master pool.](/tutorials/edge/clusters_edge_deploy-cluster_add-master-node.png) +![Screenshot of an Edge host added to the control plane pool.](/tutorials/edge/clusters_edge_deploy-cluster_add-master-node.png) Similarly, provide details for the worker pool, and add the remaining two Edge hosts to the worker pool. diff --git a/docs/docs-content/clusters/edge/site-deployment/site-installation/cluster-deployment.md b/docs/docs-content/clusters/edge/site-deployment/site-installation/cluster-deployment.md index b5ec42b933..1757eab63b 100644 --- a/docs/docs-content/clusters/edge/site-deployment/site-installation/cluster-deployment.md +++ b/docs/docs-content/clusters/edge/site-deployment/site-installation/cluster-deployment.md @@ -55,7 +55,8 @@ You can also select any SSH keys in case you need to remote into the host cluste Network Time Protocol (NTP) servers. Click on **Next**. 9. The node configuration page is where you can specify what Edge hosts make up the host cluster. Assign Edge hosts to - the **master-pool** and the **worker-pool**. When you have completed configuring the node pools, click on **Next**. + the **control-plane-pool** and the **worker-pool**. When you have completed configuring the node pools, click on + **Next**. 10. (Optional) When you assign Edge hosts to node pools, you can optionally specify a static IP address for each Edge host. If you want to specify a static IP, toggle on **Static IP** and provide the following information: @@ -77,13 +78,13 @@ If the NIC is configured on the Edge host network, an IP address is displayed ne not configured on the Edge host network, you can specify its IP address, default gateway, subnet mask, as well as DNS server to configure it. -If you choose to change the default NIC used by your nodes in the master node pool, you need to make sure all the NICs -in the master node pool share the same name. You also must make corresponding changes in the Kubernetes layer and the -Container Network Interface (CNI) layer. +If you choose to change the default NIC used by your nodes in the control plane node pool, you need to make sure all the +NICs in the control plane node pool share the same name. You also must make corresponding changes in the Kubernetes +layer and the Container Network Interface (CNI) layer. In the Kubernetes layer, enter a new parameter `cluster.kubevipArgs.vip_interface` and set its value to the name of the -NIC used by your master nodes. For example, if the NIC used by the nodes in your master pool is named `ens32`, add the -following two lines. +NIC used by your control plane nodes. For example, if the NIC used by the nodes in your control plane pool is named +`ens32`, add the following two lines. ```yaml {3} cluster: @@ -97,7 +98,7 @@ following locations. - In the Calico pack YAML file default template, uncomment `manifests.calico.env.calicoNode.IP_AUTODETECTION_METHOD` and set its value to `interface=INTERFACE_NAME`. Replace `INTERFACE_NAME` with the name of the NIC in your master node pool. For example, set `IP_AUTODETECTION_METHOD` to `"interface=eno32"` if the NIC name of the nodes in your master pool is `eno32`. 
+ In the Calico pack YAML file default template, uncomment `manifests.calico.env.calicoNode.IP_AUTODETECTION_METHOD` and set its value to `interface=INTERFACE_NAME`. Replace `INTERFACE_NAME` with the name of the NIC in your control plane node pool. For example, set `IP_AUTODETECTION_METHOD` to `"interface=eno32"` if the NIC name of the nodes in your control plane pool is `eno32`. ```yaml {11} manifests: @@ -117,7 +118,7 @@ following locations. In the Flannel pack YAML file, add a line `- "--iface=INTERFACE_NAME"` in the default template under `charts.flannel.args`. Replace `INTERFACE_NAME` with the name of the NIC. For example, add the line `- "--iface=eno32` -if the NIC name of your master nodes is `eno32`. +if the NIC name of your control plane nodes is `eno32`. ```yaml {8} charts: @@ -233,12 +234,12 @@ If the NIC is configured on the Edge host network, an IP address is displayed ne not configured on the Edge host network, you can specify its IP address, default gateway, subnet mask, as well as DNS server to configure it. -If you choose to change the default NIC used by your nodes, you need to make sure all the NICs in the master node pool -share the same name. You also must make corresponding changes in the Kubernetes layer and the CNI layer. +If you choose to change the default NIC used by your nodes, you need to make sure all the NICs in the control plane node +pool share the same name. You also must make corresponding changes in the Kubernetes layer and the CNI layer. In the Kubernetes layer, enter a new parameter `cluster.kubevipArgs.vip_interface` and set its value to the name of the -NIC used by your master nodes. For example, if the NIC used by the nodes in your master pool is named `ens32`, add the -following two lines. +NIC used by your control plane nodes. For example, if the NIC used by the nodes in your control plane pool is named +`ens32`, add the following two lines. ```yaml {2-3} cluster: @@ -252,7 +253,7 @@ following locations. - In the Calico pack YAML file default template, uncomment `manifests.calico.env.calicoNode.IP_AUTODETECTION_METHOD` and set its value to `interface=INTERFACE_NAME`. Replace `INTERFACE_NAME` with the name of the NIC in your master node pool. For example, set `IP_AUTODETECTION_METHOD` to `"interface=eno32"` if the NIC name of the nodes in your master pool is `eno32`. + In the Calico pack YAML file default template, uncomment `manifests.calico.env.calicoNode.IP_AUTODETECTION_METHOD` and set its value to `interface=INTERFACE_NAME`. Replace `INTERFACE_NAME` with the name of the NIC in your control plane node pool. For example, set `IP_AUTODETECTION_METHOD` to `"interface=eno32"` if the NIC name of the nodes in your control plane pool is `eno32`. ```yaml {11} manifests: @@ -272,7 +273,7 @@ following locations. In the Flannel pack YAML file, add a line `- "--iface=INTERFACE_NAME"` in the default template under `charts.flannel.args`. Replace `INTERFACE_NAME` with the name of the NIC. For example, add the line `- "--iface=eno32` -if the NIC name of your master nodes is `eno32`. +if the NIC name of your control plane nodes is `eno32`. 
```yaml {8} charts: diff --git a/docs/docs-content/clusters/public-cloud/aws/create-cluster.md b/docs/docs-content/clusters/public-cloud/aws/create-cluster.md index 6d99bae8b8..d032cc1e0d 100644 --- a/docs/docs-content/clusters/public-cloud/aws/create-cluster.md +++ b/docs/docs-content/clusters/public-cloud/aws/create-cluster.md @@ -95,12 +95,12 @@ Use the following steps to provision a new AWS cluster: | **Control plane subnet**: Select the control plane network from the **drop-down Menu**. | | **Worker Network**: Select the worker network from the **drop-down Menu**. | -10. Configure the master and worker node pools. A master and a worker node pool are configured by default. This is the - section where you can specify the availability zones (AZ), instance types, +10. Configure the control plane and worker node pools. A control plane and a worker node pool are configured by default. + This is the section where you can specify the availability zones (AZ), instance types, [instance cost type](architecture.md#spot-instances), disk size, and the number of nodes. Click on **Next** after you have completed configuring the node pool. The minimum number of CPUs and amount of memory depend on your cluster - profile, but in general you need at least 4 CPUs and 4 GB of memory both in the master pool and across all worker - pools. + profile, but in general you need at least 4 CPUs and 4 GB of memory both in the control plane pool and across all + worker pools.
diff --git a/docs/docs-content/clusters/public-cloud/azure/create-azure-cluster.md b/docs/docs-content/clusters/public-cloud/azure/create-azure-cluster.md index 58208dd7a7..3c643ca276 100644 --- a/docs/docs-content/clusters/public-cloud/azure/create-azure-cluster.md +++ b/docs/docs-content/clusters/public-cloud/azure/create-azure-cluster.md @@ -136,27 +136,27 @@ Each subnet allows you to specify the CIDR range and a security group. :::info -By default, a master pool and one worker node pool are configured. You can add new worker pools to customize certain -worker nodes for specialized workloads. For example, the default worker pool can be configured with the Standard_D2_v2 -instance types for general-purpose workloads, and another worker pool with instance type Standard_NC12s_v3 can be -configured for Graphics Processing Unit (GPU) workloads. +By default, a control plane pool and one worker node pool are configured. You can add new worker pools to customize +certain worker nodes for specialized workloads. For example, the default worker pool can be configured with the +Standard_D2_v2 instance types for general-purpose workloads, and another worker pool with instance type +Standard_NC12s_v3 can be configured for Graphics Processing Unit (GPU) workloads. ::: You can apply autoscale capability to dynamically increase resources during high loads and reduce them during low loads. To learn more, refer to [Enable Autoscale for Azure IaaS Cluster](#enable-autoscale-for-azure-iaas-cluster). -#### Master Pool Configuration Settings +#### Control Plane Pool Configuration Settings | **Parameter** | **Description** | | ------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | **Node pool name** | A descriptive name for the node pool. | -| **Number of nodes in the pool** | Specify the number of nodes in the master pool. | -| **Allow worker capability** | Select this option to allow workloads to be provisioned on master nodes. | +| **Number of nodes in the pool** | Specify the number of nodes in the control plane pool. | +| **Allow worker capability** | Select this option to allow workloads to be provisioned on control plane nodes. | | **Additional Labels** | You can add optional labels to nodes in key-value format. To learn more, review [Apply Labels to Nodes](../../cluster-management/taints.md#labels). Example: `environment:production`. | | **Taints** | You can apply optional taint labels to a node pool during cluster creation or edit taint labels on an existing cluster. Review the [Node Pool](../../cluster-management/node-pool.md) management page and [Apply Taints to Nodes](../../cluster-management/taints.md#apply-taints-to-nodes) page to learn more. Toggle the **Taint** button to create a taint label. 
When tainting is enabled, you need to provide a custom key-value pair. Use the **drop-down Menu** to choose one of the following **Effect** options:
**NoSchedule** - Pods are not scheduled onto nodes with this taint.
**PreferNoSchedule** - Kubernetes attempts to avoid scheduling pods onto nodes with this taint, but scheduling is not prohibited.
**NoExecute** - Existing pods on nodes with this taint are evicted. | -#### Cloud Configuration Settings for Master Pool +#### Cloud Configuration Settings for Control Plane Pool | **Parameter** | **Description** | | ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | @@ -179,7 +179,7 @@ You can select **Remove** at right to remove the worker node if all you want is #### Cloud Configuration Settings for Worker Pool - You can copy cloud configuration settings from the master pool, but be aware that the instance type might not get copied if it does not have accessible availability zones. + You can copy cloud configuration settings from the control plane pool, but be aware that the instance type might not get copied if it does not have accessible availability zones. |**Parameter**| **Description**| |-------------|----------------| diff --git a/docs/docs-content/clusters/public-cloud/cox-edge/create-cox-cluster.md b/docs/docs-content/clusters/public-cloud/cox-edge/create-cox-cluster.md index 9582b21ba5..fa1dbfceb7 100644 --- a/docs/docs-content/clusters/public-cloud/cox-edge/create-cox-cluster.md +++ b/docs/docs-content/clusters/public-cloud/cox-edge/create-cox-cluster.md @@ -68,13 +68,13 @@ charts: - Environment: The Cox Edge environment to deploy the compute resources. - Update worker pools in parallel: Enable this checkbox if you wish to update worker pool nodes in parallel. -9. Configure the master and worker node pools. The following input fields apply to Cox Edge master and worker node - pools. For a description of input fields that are common across target platforms refer to the +9. Configure the control plane and worker node pools. The following input fields apply to Cox Edge control plane and + worker node pools. For a description of input fields that are common across target platforms refer to the [Node Pools](../../cluster-management/node-pool.md) management page. Click **Next** when you are done.
-#### Master Pool configuration: +#### Control Plane Pool - Cloud Configuration: @@ -92,7 +92,7 @@ network rules, Palette will be unable to deploy the cluster to Cox Edge. ::: -#### Worker Pool configuration: +#### Worker Pool - Cloud Configuration: - Deployment Name: The name to assign the Cox Edge deployment. diff --git a/docs/docs-content/clusters/public-cloud/deploy-k8s-cluster.md b/docs/docs-content/clusters/public-cloud/deploy-k8s-cluster.md index c0199db272..f52b62157a 100644 --- a/docs/docs-content/clusters/public-cloud/deploy-k8s-cluster.md +++ b/docs/docs-content/clusters/public-cloud/deploy-k8s-cluster.md @@ -190,16 +190,16 @@ have selected the **Region** and your **SSH Key Pair Name**, click on **Next**. #### Nodes Configuration -The **Nodes config** section allows you to configure the nodes that make up the control plane (master nodes) and data -plane (worker nodes) of the host cluster. +The **Nodes config** section allows you to configure the nodes that make up the control plane and worker nodes of the +host cluster. Before you proceed to next section, review the following parameters.

-- **Number of nodes in the pool** - This option sets the number of master or worker nodes in the master or worker pool. - For this tutorial, set the count to one for the master pool and two for the worker pool. +- **Number of nodes in the pool** - This option sets the number of control plane or worker nodes in the control plane or + worker pool. For this tutorial, set the count to one for the control plane pool and two for the worker pool. -- **Allow worker capability** - This option allows the master node to also accept workloads. This is useful when spot - instances are used as worker nodes. You can check this box if you want to. +- **Allow worker capability** - This option allows the control plane node to also accept workloads. This is useful when + spot instances are used as worker nodes. You can check this box if you want to. - **Instance Type** - Select the compute type for the node pool. Each instance type displays the amount of CPU, RAM, and hourly cost of the instance. Select `m4.2xlarge`. @@ -359,8 +359,8 @@ click on **Next**.
#### Nodes Configuration -The **Nodes config** section allows you to configure the nodes that compose the control plane (master nodes) and data -plane (worker nodes) of the Kubernetes cluster. +The **Nodes config** section allows you to configure the nodes that compose the control plane and worker nodes of +the Kubernetes cluster. Refer to the [Node Pool](../cluster-management/node-pool.md) guide for a list and description of parameters. Before you proceed to next section, review the following parameters.
-**Number of nodes in the pool** - This option sets the number of master or worker nodes in the master or worker pool. -For this tutorial, set the count to one for both the master and worker pools. +**Number of nodes in the pool** - This option sets the number of control plane or worker nodes in the control plane or +worker pool. For this tutorial, set the count to one for both the control plane and worker pools. -**Allow worker capability** - This option allows the master node to also accept workloads. This is useful when spot -instances are used as worker nodes. You can check this box if you want to. +**Allow worker capability** - This option allows the control plane node to also accept workloads. This is useful when +spot instances are used as worker nodes. You can check this box if you want to. - **Instance Type** - Select the compute type for the node pool. Each instance type displays the amount of CPU, RAM, and hourly cost of the instance. Select **Standard_A8_v2**. @@ -524,8 +524,8 @@ After selecting a **Project**, **Region**, and **SSH Key**, click on **Next**. ### Nodes Configuration -The **Nodes config** section allows you to configure the nodes that make up the control plane (master nodes) and data -plane (worker nodes) of the host cluster. +The **Nodes config** section allows you to configure the nodes that make up the control plane and worker nodes of the +host cluster. Before you proceed to the next section, review the following parameters. @@ -533,11 +533,11 @@ Refer to the [Node Pool](../cluster-management/node-pool.md) guide for a list an Before you proceed to next section, review the following parameters. -- **Number of nodes in the pool** - This option sets the number of master or worker nodes in the master or worker pool. - For this tutorial, set the count to one for the master pool and two for the worker pool. +- **Number of nodes in the pool** - This option sets the number of control plane or worker nodes in the control plane or + worker pool. For this tutorial, set the count to one for the control plane pool and two for the worker pool. -- **Allow worker capability** - This option allows the master node to also accept workloads. This is useful when spot - instances are used as worker nodes. You can check this box if you want to. +- **Allow worker capability** - This option allows the control plane node to also accept workloads. This is useful when + spot instances are used as worker nodes. You can check this box if you want to. - **Instance Type** - Select the compute type for the node pool. Each instance type displays the amount of CPU, RAM, and hourly cost of the instance. Select **n1-standard-4**. @@ -832,13 +832,13 @@ docker version Download the tutorial image to your local machine.
```bash -docker pull ghcr.io/spectrocloud/tutorials:1.1.0 +docker pull ghcr.io/spectrocloud/tutorials:1.1.2 ``` Next, start the container, and open a bash session into it.
```shell -docker run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.0 bash +docker run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.2 bash ``` Navigate to the tutorial code. @@ -873,7 +873,7 @@ Check out the following git tag.
```shell -git checkout v1.1.0 +git checkout v1.1.2 ``` Change the directory to the tutorial code. @@ -1126,13 +1126,13 @@ resource "spectrocloud_cluster_azure" "cluster" { machine_pool { control_plane = true control_plane_as_worker = true - name = "master-pool" - count = var.azure_master_nodes.count - instance_type = var.azure_master_nodes.instance_type - azs = var.azure_master_nodes.azs - is_system_node_pool = var.azure_master_nodes.is_system_node_pool + name = "control-plane-pool" + count = var.azure_control_plane_nodes.count + instance_type = var.azure_control_plane_nodes.instance_type + azs = var.azure_control_plane_nodes.azs + is_system_node_pool = var.azure_control_plane_nodes.is_system_node_pool disk { - size_gb = var.azure_master_nodes.disk_size_gb + size_gb = var.azure_control_plane_nodes.disk_size_gb type = "Standard_LRS" } } @@ -1163,7 +1163,7 @@ Variables to populate are identified with `REPLACE_ME`. In the example AWS section below, you would change `deploy-aws = false` to `deploy-aws = true` to deploy to AWS. Additionally, you would replace all the variables with a value `REPLACE_ME`. You can also update the values for nodes in -the master pool or worker pool. +the control plane pool or worker pool.
@@ -1177,7 +1177,7 @@ aws-cloud-account-name = "REPLACE_ME" aws-region = "REPLACE_ME" aws-key-pair-name = "REPLACE_ME" -aws_master_nodes = { +aws_control_plane_nodes = { count = "1" control_plane = true instance_type = "m4.2xlarge" @@ -1328,7 +1328,7 @@ the **Enter** key. Next, issue the following command to stop the container. ```shell docker stop tutorialContainer && \ -docker rmi --force ghcr.io/spectrocloud/tutorials:1.1.0 +docker rmi --force ghcr.io/spectrocloud/tutorials:1.1.2 ``` ## Wrap-up diff --git a/docs/docs-content/clusters/public-cloud/gcp/create-gcp-gke-cluster.md b/docs/docs-content/clusters/public-cloud/gcp/create-gcp-gke-cluster.md index 6ce65498e7..aee189e8d1 100644 --- a/docs/docs-content/clusters/public-cloud/gcp/create-gcp-gke-cluster.md +++ b/docs/docs-content/clusters/public-cloud/gcp/create-gcp-gke-cluster.md @@ -68,8 +68,8 @@ Ensure the following requirements are met before you attempt to deploy a cluster 11. The Node configuration page is where you can specify the availability zones (AZ), instance types, disk size, and the number of nodes. Configure the worker node pool. The minimum number of CPUs and amount of memory depend on your - cluster profile, but in general you need at least 4 CPUs and 4 GB of memory both in the master pool and across all - worker pools. + cluster profile, but in general you need at least 4 CPUs and 4 GB of memory both in the control plane pool and + across all worker pools.
diff --git a/docs/docs-content/clusters/public-cloud/gcp/create-gcp-iaas-cluster.md b/docs/docs-content/clusters/public-cloud/gcp/create-gcp-iaas-cluster.md index b259dab14a..d0fd65e331 100644 --- a/docs/docs-content/clusters/public-cloud/gcp/create-gcp-iaas-cluster.md +++ b/docs/docs-content/clusters/public-cloud/gcp/create-gcp-iaas-cluster.md @@ -78,9 +78,9 @@ Ensure the following requirements are met before you attempt to deploy a cluster | **Worker Network**: Select the worker network from the **drop-down Menu**. | 11. The Node configuration page is where you can specify the availability zones (AZ), instance types, disk size, and the - number of nodes. Configure the master and worker node pools. A master and a worker node pool are configured by - default. The minimum number of CPUs and amount of memory depend on your cluster profile, but in general you need at - least 4 CPUs and 4 GB of memory both in the master pool and across all worker pools. + number of nodes. Configure the control plane and worker node pools. A control plane and a worker node pool are + configured by default. The minimum number of CPUs and amount of memory depend on your cluster profile, but in + general you need at least 4 CPUs and 4 GB of memory both in the control plane pool and across all worker pools.
diff --git a/docs/docs-content/devx/apps/deploy-app.md b/docs/docs-content/devx/apps/deploy-app.md index aae801aa17..186c6abd19 100644 --- a/docs/docs-content/devx/apps/deploy-app.md +++ b/docs/docs-content/devx/apps/deploy-app.md @@ -392,7 +392,7 @@ docker version Download the tutorial image to your local machine.
```bash -docker pull ghcr.io/spectrocloud/tutorials:1.0.4 +docker pull ghcr.io/spectrocloud/tutorials:1.1.2 ``` Next, start the container, and open a bash session into it. @@ -400,7 +400,7 @@ Next, start the container, and open a bash session into it.
```shell -docker run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.0.4 bash +docker run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.2 bash ``` Navigate to the tutorial code. @@ -1205,7 +1205,7 @@ the **Enter** key. Next, issue the following command to stop the container. ```shell docker stop tutorialContainer && \ -docker rmi --force ghcr.io/spectrocloud/tutorials:1.0.4 +docker rmi --force ghcr.io/spectrocloud/tutorials:1.1.2 ``` :::info diff --git a/docs/docs-content/enterprise-version/system-management/backup-restore.md b/docs/docs-content/enterprise-version/system-management/backup-restore.md index 59467bfb62..ee38bd679d 100644 --- a/docs/docs-content/enterprise-version/system-management/backup-restore.md +++ b/docs/docs-content/enterprise-version/system-management/backup-restore.md @@ -20,7 +20,7 @@ backed up and can be restored in case of a disaster or a cluster failure. Palett :::warning -Backup and Restore is not supported for self-hosted installations using the Helm Chart. +Backup and Restore is not supported for self-hosted Palette installed through a Helm Chart. ::: @@ -46,15 +46,15 @@ Use the following instructions to configure FTP backup for your enterprise clust 4. Select the **FTP** tab and fill out the following fields: - | **Field** | **Description** | - | -------------------- | ------------------------------------------------------------------------- | - | **Server** | The FTP server URL. | - | **Directory** | The directory name for the backup storage. | - | **Username** | The username to log in to the FTP server. | - | **Password** | The password to log in to the FTP server. | - | **Interval** | The number of days between backups. | - | **Retention Period** | The number of days to retain the backup. | - | **Hours of the day** | The time of the day to take the backup. The time of day is in UTC format. | + | **Field** | **Description** | + | -------------------- | ------------------------------------------------------------------------- | + | **Server** | The FTP server URL. | + | **Directory** | The directory name for the backup storage. | + | **Username** | The username to log in to the FTP server. | + | **Password** | The password to log in to the FTP server. | + | **Interval** | The number of days between backups. | + | **Retention Period** | The number of days to retain the backup. | + | **Hours of the day** | The time of the day to take the backup. The time of day is in UTC format. | 5. Click on **Validate** to validate the FTP server configuration. If the validation is successful, the **Save** button is enabled. Otherwise, an error message is displayed. In case of an error, correct verify the FTP server @@ -125,15 +125,15 @@ Use the following instructions to configure S3 backup for your enterprise cluste 4. Select the **FTP** tab and fill out the following fields: - | **Field** | **Description** | - | -------------------- | ------------------------------------------------------------------------- | - | **Server** | The FTP server URL. | - | **Directory** | The directory name for the backup storage. | - | **Username** | The username to log in to the FTP server. | - | **Password** | The password to log in to the FTP server. | - | **Interval** | The number of days between backups. | - | **Retention Period** | The number of days to retain the backup. | - | **Hours of the day** | The time of the day to take the backup. The time of day is in UTC format. 
| + | **Field** | **Description** | + | -------------------- | ------------------------------------------------------------------------- | + | **Server** | The FTP server URL. | + | **Directory** | The directory name for the backup storage. | + | **Username** | The username to log in to the FTP server. | + | **Password** | The password to log in to the FTP server. | + | **Interval** | The number of days between backups. | + | **Retention Period** | The number of days to retain the backup. | + | **Hours of the day** | The time of the day to take the backup. The time of day is in UTC format. | 5. Click on **Validate** to validate the S3 configuration. If the validation is successful, the **Save** button is enabled. Otherwise, an error message is displayed. In case of an error, correct verify the S3 configuration and click diff --git a/docs/docs-content/glossary-all.md b/docs/docs-content/glossary-all.md index 1dc9d6b627..23d0a64f80 100644 --- a/docs/docs-content/glossary-all.md +++ b/docs/docs-content/glossary-all.md @@ -356,11 +356,11 @@ your cluster. ## Workload Cluster -Workload / Tenant / Application Clusters are a collection of master and worker nodes that cooperate to execute container -application workloads. Kubernetes clusters provisioned by users are referred to as Workload Clusters. These clusters are -created within [projects](#project) and they are provisioned and managed in the user's cloud environment. Each cluster -is provisioned from a [Cluster Profile](#cluster-profile) with additional configuration overrides and cloud-specific -settings. +Workload / Tenant / Application Clusters are a collection of control plane and worker nodes that cooperate to execute +container application workloads. Kubernetes clusters provisioned by users are referred to as Workload Clusters. These +clusters are created within [projects](#project) and they are provisioned and managed in the user's cloud environment. +Each cluster is provisioned from a [Cluster Profile](#cluster-profile) with additional configuration overrides and +cloud-specific settings. ## Workspace diff --git a/docs/docs-content/integrations/collectord.md b/docs/docs-content/integrations/collectord.md index 57afbb2970..70fe2ae8dd 100644 --- a/docs/docs-content/integrations/collectord.md +++ b/docs/docs-content/integrations/collectord.md @@ -56,13 +56,12 @@ caPath = caName = ``` -## Components +:::tip -The following workloads gets deployed on collectorforkubernetes namespace, by default +You can find a list of all deployed components in the +[configuration reference](https://www.outcoldsolutions.com/docs/monitoring-kubernetes/v5/configuration/) page. -- Collectorforkubernetes - Daemonset -- Collectorforkubernetes Master - Daemonset -- Collectorforkubernetes Addon - Deployment +::: ## References diff --git a/docs/docs-content/integrations/kibana.md b/docs/docs-content/integrations/kibana.md index e775367753..afe2c55f83 100644 --- a/docs/docs-content/integrations/kibana.md +++ b/docs/docs-content/integrations/kibana.md @@ -18,7 +18,7 @@ from the Kubernetes cluster. The default integration deployed will have the following components: -- ElasticSearch Master (3 replicas). +- ElasticSearch control plane (3 replicas). - ElasticSearch Data (2 replicas). - ElasticSearch Client (2 replicas). - ElasticSearch Curator. 
diff --git a/docs/docs-content/integrations/ubuntu.md b/docs/docs-content/integrations/ubuntu.md index ecb62482a4..8b61607479 100644 --- a/docs/docs-content/integrations/ubuntu.md +++ b/docs/docs-content/integrations/ubuntu.md @@ -163,7 +163,7 @@ kubeadmconfig: - 'echo "====> Applying kernel parameters for Kubelet"' - "sysctl -p /etc/sysctl.d/90-kubelet.conf" postKubeadmCommands: - # Apply the privileged PodSecurityPolicy on the first master node ; Otherwise, CNI (and other) pods won't come up + # Apply the privileged PodSecurityPolicy on the first control plane node ; Otherwise, CNI (and other) pods won't come up - "export KUBECONFIG=/etc/kubernetes/admin.conf" # Sometimes api server takes a little longer to respond. Retry if applying the pod-security-policy manifest fails - '[ -f "$KUBECONFIG" ] && { echo " ====> Applying PodSecurityPolicy" ; until $(kubectl apply -f @@ -466,7 +466,7 @@ kubeadmconfig: - 'echo "====> Applying kernel parameters for Kubelet"' - "sysctl -p /etc/sysctl.d/90-kubelet.conf" postKubeadmCommands: - # Apply the privileged PodSecurityPolicy on the first master node ; Otherwise, CNI (and other) pods won't come up + # Apply the privileged PodSecurityPolicy on the first control plane node ; Otherwise, CNI (and other) pods won't come up - "export KUBECONFIG=/etc/kubernetes/admin.conf" # Sometimes api server takes a little longer to respond. Retry if applying the pod-security-policy manifest fails - '[ -f "$KUBECONFIG" ] && { echo " ====> Applying PodSecurityPolicy" ; until $(kubectl apply -f diff --git a/docs/docs-content/registries-and-packs/deploy-pack.md b/docs/docs-content/registries-and-packs/deploy-pack.md index 546ca703ba..dd62653dd8 100644 --- a/docs/docs-content/registries-and-packs/deploy-pack.md +++ b/docs/docs-content/registries-and-packs/deploy-pack.md @@ -93,17 +93,17 @@ the currently active containers. docker ps ``` -Use the following command to download the `ghcr.io/spectrocloud/tutorials:1.0.11` image to your local machine. This +Use the following command to download the `ghcr.io/spectrocloud/tutorials:1.1.2` image to your local machine. This Docker image includes the necessary tools. ```bash -docker pull ghcr.io/spectrocloud/tutorials:1.0.11 +docker pull ghcr.io/spectrocloud/tutorials:1.1.2 ``` Next, start the container and open a bash session into it. ```bash -docker run --name tutorialContainer --publish 7000:5000 --interactive --tty ghcr.io/spectrocloud/tutorials:1.0.11 bash +docker run --name tutorialContainer --publish 7000:5000 --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.2 bash ``` If the port 7000 on your local machine is unavailable, you can use any other port of your choice.
@@ -923,24 +923,24 @@ Click **Next** to continue. #### Nodes config -In the **Nodes config** section, provide the details for the master and worker pools. For this tutorial, you can use the -following minimal configuration: +In the **Nodes config** section, provide the details for the control plane and worker pools. For this tutorial, you can +use the following minimal configuration: -| **Field** | **Value for the master-pool** | **Value for the worker-pool** | -| --------------------------- | ----------------------------- | ------------------------------------------------------------------------- | -| Node pool name | master-pool | worker-pool | -| Number of nodes in the pool | `1` | `1` | -| Allow worker capability | Checked | Not applicable | -| Enable Autoscaler | Not applicable | No | -| Rolling update | Not applicable | Expand First.
Launch a new node first, then shut down the old one. | +| **Field** | **Value for the control-plane-pool** | **Value for the worker-pool** | +| --------------------------- | ------------------------------------ | ------------------------------------------------------------------------- | +| Node pool name | control-plane-pool | worker-pool | +| Number of nodes in the pool | `1` | `1` | +| Allow worker capability | Checked | Not applicable | +| Enable Autoscaler | Not applicable | No | +| Rolling update | Not applicable | Expand First.
Launch a new node first, then shut down the old one. | -Keep the **Cloud Configuration** the same for both master and worker pools. +Keep the **Cloud Configuration** the same for both control plane and worker pools. -| **Field** | **Value** | -| ------------------ | --------------------------------------------------------------------------------------------------------- | -| Instance Type | General purpose `m4.xlarge`
A minimum allocation of four CPU cores is required for the master node. | -| Availability zones | Choose any _one_ availability zone.
This tutorial uses the `us-east-1a` availability zone. | -| Disk size | 60 GiB | +| **Field** | **Value** | +| ------------------ | ---------------------------------------------------------------------------------------------------------------- | +| Instance Type | General purpose `m4.xlarge`
A minimum allocation of four CPU cores is required for the control plane node. | +| Availability zones | Choose any _one_ availability zone.
This tutorial uses the `us-east-1a` availability zone. | +| Disk size | 60 GiB | Click **Next** to continue. @@ -1334,7 +1334,7 @@ the following commands. ```bash docker container rm --force tutorialContainer -docker image rm --force ghcr.io/spectrocloud/tutorials:1.0.11 +docker image rm --force ghcr.io/spectrocloud/tutorials:1.1.2 ```
diff --git a/docs/docs-content/registries-and-packs/pack-constraints.md b/docs/docs-content/registries-and-packs/pack-constraints.md index e89841de05..996cd7199d 100644 --- a/docs/docs-content/registries-and-packs/pack-constraints.md +++ b/docs/docs-content/registries-and-packs/pack-constraints.md @@ -477,15 +477,16 @@ name of the replica count defined in the `values.yaml` -Kubernetes provides a way to schedule the pods on master/worker nodes or both. Pack Constraints framework must know -where the pods are scheduled because the resource validation validates only the master machine pool when the pods are -scheduled on master nodes. Similarily, if the pods are scheduled on worker nodes, then only the worker machine pool will -be validated. In the case of daemon sets, the pods are scheduled in both master and worker nodes, and the framework -validates both master and worker machine pool configurations before the cluster is submitted for deployment. - -- master - pods are scheduled only on master nodes -- worker - pods are scheduled only on worker nodes -- all - pods are scheduled on both master and worker nodes +Kubernetes provides a way to schedule the pods on the control plane and worker nodes. Pack Constraints framework must +know where the pods are scheduled because the resource validation validates only the control plane machine pool when the +pods are scheduled on control plane nodes. Similarly, if the pods are scheduled on worker nodes, then only the worker +machine pool will be validated. In the case of daemon sets, the pods are scheduled in both control plane and worker +nodes, and the framework validates both control plane and worker machine pool configurations before the cluster is +submitted for deployment. + +- `master` - pods are scheduled only on control plane nodes +- `worker` - pods are scheduled only on worker nodes +- `all` - pods are scheduled on both control plane and worker nodes diff --git a/styleguide/spectro-cloud-style-guide.md b/styleguide/spectro-cloud-style-guide.md index d294994d11..8e072eeab4 100644 --- a/styleguide/spectro-cloud-style-guide.md +++ b/styleguide/spectro-cloud-style-guide.md @@ -81,7 +81,7 @@ simplified language improves technical documentation. | Good ✅ | Bad ❌ | | ----------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ | | The core Kubernetes API is flexible and can also be extended to support custom resources. | The interior Kubernetes API is malleable and provides the capability for consumers to extended custom logic and inject custom logical resources. | -| Choose a node to be the cluster master node. | Designate a node to be the cluster master node. | +| Choose a node to be the cluster control plane node. | Designate a node to be the cluster primary node. | | Drain the node before a version upgrade. | It is essential to drain the node prior to a version upgrade. | ### SpectroCloud Voice diff --git a/vale/styles/spectrocloud/inclusive.yml b/vale/styles/spectrocloud/inclusive.yml index 444e8de94e..6673192692 100644 --- a/vale/styles/spectrocloud/inclusive.yml +++ b/vale/styles/spectrocloud/inclusive.yml @@ -1,6 +1,6 @@ extends: existence message: "Consider avoiding '%s' in favor of more inclusive language." -link: "http://example.com/inclusivity-guidelines" +link: "https://spectrocloud.atlassian.net/wiki/x/AQBCaQ" ignorecase: true level: error tokens: