docs: remove master/slave DOC-1033 PEM-4404
karl-cardenas-coding authored Jan 26, 2024
1 parent ff67c04 commit dbf675d
Showing 30 changed files with 197 additions and 195 deletions.
6 changes: 3 additions & 3 deletions docs/docs-content/audit-logs/kube-api-audit-logging.md
@@ -37,7 +37,7 @@ request to facilitate auditing. Memory consumption depends on the audit logging

- Write access to the file system.

- Remote access to the Kubernetes cluster master nodes.
- Remote access to the Kubernetes cluster control plane nodes.

## Enable Auditing

@@ -50,7 +50,7 @@ levels, visit the Kubernetes API

2. Identify one of your cluster control-plane nodes. You can find a cluster node by navigating to the left **Main Menu**
and selecting **Clusters**. Click on your cluster to access the details pages and click on the **Nodes** tab. The tab
contains information about each pool, select a node from the **Master Pool** to view its IP address.
contains information about each pool. Select a node from the **Control Plane Pool** to view its IP address.

3. SSH into one of your control-plane nodes using its IP address and the SSH key you specified during the cluster
creation process.
@@ -175,7 +175,7 @@ parameter.

2. Identify one of your cluster control-plane nodes. You can find a cluster node by navigating to the left **Main Menu** and
selecting **Clusters**. Click on your cluster to access the details pages and click on the **Nodes** tab. The tab
contains information about each pool, select a node from the **Master Pool** to view its IP address.
contains information about each pool. Select a node from the **Control Plane Pool** to view its IP address.

3. SSH into one of your control-plane nodes using its IP address and the SSH key you specified during the cluster
creation process.
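
For orientation, the two steps above can look like the following from a terminal. This is a minimal sketch: the SSH
user, key path, and node IP are placeholders, and the manifest path assumes a kubeadm-style control plane node, so
adjust it for your environment.

```shell
# Placeholders: replace the key path, user, and node IP with your own values.
ssh -i ~/.ssh/my-cluster-key ubuntu@203.0.113.10

# On a kubeadm-style control plane node, the API server runs as a static pod.
# Its audit settings are the --audit-* flags in the manifest (assumed default path).
sudo grep -n "audit" /etc/kubernetes/manifests/kube-apiserver.yaml
```
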
22 changes: 11 additions & 11 deletions docs/docs-content/clusters/cluster-management/cloud-cost.md
@@ -18,22 +18,22 @@ done based on the instance type and storage type selected for each machine pool.
| | **FORMULAS FOR CALCULATION** |
| --- | ------------------------------------------------------------------------------------------- |
| | Machine Pool Cost = ( Number of Nodes X Instance Price ) + ( Storage Size X Storage Price ) |
| | Cluster Cloud Cost = Master Pool Cost + Worker Pool Cost |
| | Cluster Cloud Cost = Control Plane Pool Cost + Worker Pool Cost |

**Example 1:**

Let's assume that a cluster ‘demo’ is launched with two machine pools with the following configuration:

| MACHINE POOL | SIZE | INSTANCE TYPE WITH COST | ROOT DISK WITH COST |
| ------------ | ---- | --------------------------- | ---------------------------- |
| MASTER POOL | 3 | AWS t2.medium($0.0496/hour) | 60GB - gp2($0.00014/GB/hour) |
| WORKER POOL | 3 | AWS t2.large($0.0992/hour) | 60GB - gp2($0.00014/GB/hour) |
| MACHINE POOL | SIZE | INSTANCE TYPE WITH COST | ROOT DISK WITH COST |
| ------------- | ---- | --------------------------- | ---------------------------- |
| Control Plane | 3 | AWS t2.medium($0.0496/hour) | 60GB - gp2($0.00014/GB/hour) |
| Worker Pool | 3 | AWS t2.large($0.0992/hour) | 60GB - gp2($0.00014/GB/hour) |

| Calculation for the above scenario |
| --------------------------------------------------------------------- |
| master-pool cost = ( 3 X $0.0496 ) + ( 60 X $0.00014 ) = $0.1572/hour |
| worker-pool cost = ( 3 X $0.0992 ) + ( 60 X $0.00014 ) = $0.306/hour |
| Cluster Cloud Cost = $0.1572 + $0.306 = $0.4632/hour |
| Calculation for the above scenario |
| ---------------------------------------------------------------------------- |
| control-plane-pool cost = ( 3 X $0.0496 ) + ( 60 X $0.00014 ) = $0.1572/hour |
| worker-pool cost = ( 3 X $0.0992 ) + ( 60 X $0.00014 ) = $0.306/hour |
| Cluster Cloud Cost = $0.1572 + $0.306 = $0.4632/hour |
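
To make the arithmetic explicit, the same figures can be reproduced in a shell using the prices from the table above
(`bc` is used here only for floating-point math).

```shell
# Control plane pool: 3 nodes x $0.0496/hour + 60 GB x $0.00014/GB/hour = $0.1572/hour
echo "3 * 0.0496 + 60 * 0.00014" | bc -l

# Worker pool: 3 nodes x $0.0992/hour + 60 GB x $0.00014/GB/hour = $0.306/hour
echo "3 * 0.0992 + 60 * 0.00014" | bc -l

# Cluster cloud cost = control-plane-pool cost + worker-pool cost = $0.4632/hour
echo "0.1572 + 0.306" | bc -l
```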

:::info

@@ -63,7 +63,7 @@ category.

**Example 2**

For the cluster configuration of master-pool & worker-pool considers in example 1,
For the cluster configuration of control-plane-pool and worker-pool considered in example 1,

| Calculation for the example scenario |
| ------------------------------------------------------------------------------- |
@@ -30,9 +30,9 @@ This scan examines the compliance of deployed Kubernetes security features again
Kubernetes Benchmarks are consensus-driven security guidelines for Kubernetes. Different releases of the CIS
benchmark cover different releases of Kubernetes. By default, Kubernetes configuration security will determine the test
set based on the Kubernetes version running on the cluster being scanned. Internally, Palette leverages an open-source
tool called KubeBench from Aqua Security to perform this scan. Scans are run against master and worker nodes of the
Kubernetes cluster, and a combined report is made available on the UI. Users can filter the report to view only the
master or worker results if required.
tool called KubeBench from Aqua Security to perform this scan. Scans are run against control plane and worker nodes of
the Kubernetes cluster, and a combined report is made available on the UI. Users can filter the report to view only the
control plane or worker results if required.

All the tests in the report are marked as Scored or Not Scored. The ones marked Not Scored cannot be run automatically
and should be tested manually.
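
Palette runs this scan automatically. If you want to reproduce a comparable check by hand on a cluster node, a
kube-bench invocation looks roughly like the sketch below; the exact target names depend on the kube-bench and
benchmark versions, so treat them as assumptions.

```shell
# Run only the control plane checks (older CIS benchmarks name this target "master").
kube-bench run --targets master

# Run only the worker node checks.
kube-bench run --targets node
```
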
13 changes: 7 additions & 6 deletions docs/docs-content/clusters/cluster-management/node-pool.md
@@ -40,9 +40,10 @@ Different types of repaving operations may occur, depending on what causes them:
Kubernetes layer impact all nodes, such as when upgrading to a different Kubernetes version. All nodes across all
pools are sequentially repaved starting with the control plane.

You can customize the repave time interval for all node pools except the master pool. The default repave time interval
is 0 seconds. You can adjust the node repave time interval during or after cluster creation. If you need to modify the
repave time interval post-cluster creation, follow the [Change a Node Pool](#change-a-node-pool) instructions below.
You can customize the repave time interval for all node pools except the control plane pool. The default repave time
interval is 0 seconds. You can adjust the node repave time interval during or after cluster creation. If you need to
modify the repave time interval post-cluster creation, follow the [Change a Node Pool](#change-a-node-pool) instructions
below.
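
If you want to observe a repave while it runs, watching the cluster's nodes from kubectl is a simple way to see them
replaced one at a time. The role label below is the standard kubeadm control plane label and is an assumption about
your Kubernetes distribution.

```shell
# Watch nodes leave and rejoin the cluster as they are repaved.
kubectl get nodes -o wide --watch

# List only the control plane nodes (standard kubeadm role label; may vary by distribution).
kubectl get nodes -l node-role.kubernetes.io/control-plane
```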

## Node Pool Configuration Settings

@@ -51,13 +51,13 @@ settings may not be available.

<br />

### Master Node Pool
### Control Plane Node Pool

| **Property** | **Description** |
| ------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Node pool name** | A descriptive name for the node pool. |
| **Number of nodes in the pool** | Number of nodes to be provisioned for the node pool. For the master pool, this number can be 1, 3, or 5. |
| **Allow worker capability** | Select this option to allow workloads to be provisioned on master nodes. |
| **Number of nodes in the pool** | Number of nodes to be provisioned for the node pool. For the control plane pool, this number can be 1, 3, or 5. |
| **Allow worker capability** | Select this option to allow workloads to be provisioned on control plane nodes. |
| **Additional Labels** | Optional labels apply placement constraints on a pod. For example, you can add a label to make a node eligible to receive the workload. To learn more, refer to the [Overview on Labels](taints.md#labels). |
| **Taints** | Sets toleration to pods and allows (but does not require) the pods to schedule onto nodes with matching taints. To learn more, refer to the [Overview on Taints](taints.md#taints). |
| **Availability Zones** | The Availability Zones from which to select available servers for deployment. If you select multiple zones, Palette will deploy servers evenly across them as long as sufficient servers are available to do so. |
4 changes: 2 additions & 2 deletions docs/docs-content/clusters/cluster-management/reconfigure.md
@@ -17,8 +17,8 @@ cluster:

:::info

The master node pool is scaled from 1 to 3 or 3 to 5 nodes, etc. However, the scale-down operation is not supported for
master nodes.
The control plane node pool is scaled from 1 to 3 or 3 to 5 nodes, etc. However, the scale-down operation is not
supported for control plane nodes.

:::

10 changes: 5 additions & 5 deletions docs/docs-content/clusters/cluster-management/taints.md
@@ -51,11 +51,11 @@ circumstances where you may want to control which node the pod deploys to - for
a machine with an SSD attached to it, or to co-locate pods from two different services that communicate a lot into the
same availability zone.

Palette enables our users to Label the nodes of a master and worker pool by using key/value pairs. These labels do not
directly imply anything to the semantics of the core system but are intended to be used by users to drive use cases
where pod affinity to specific nodes is desired. Labels can be attached to node pools in a cluster during creation and
can be subsequently added and modified at any time. Each node pool can have a set of key/value labels defined. The key
must be unique across all node pools for a given cluster.
Palette enables our users to label the nodes of control plane and worker pools by using key/value pairs. These labels
do not directly imply anything to the semantics of the core system but are intended to be used by users to drive use
cases where pod affinity to specific nodes is desired. Labels can be attached to node pools in a cluster during creation
and can be subsequently added and modified at any time. Each node pool can have a set of key/value labels defined. The
key must be unique across all node pools for a given cluster.
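
As a sketch of how such a label is consumed, assume a worker pool was given the hypothetical label `disktype=ssd` in
Palette; a pod can then be pinned to that pool's nodes with a matching `nodeSelector`.

```shell
# Confirm the hypothetical disktype=ssd label landed on the pool's nodes.
kubectl get nodes -l disktype=ssd

# Schedule a pod onto those nodes with a matching nodeSelector (minimal example).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ssd-demo
spec:
  nodeSelector:
    disktype: ssd
  containers:
    - name: app
      image: nginx:1.25
EOF
```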

### Apply Labels to Nodes

@@ -99,7 +99,7 @@ cd tutorials/
Check out the following git tag.

```shell
git checkout v1.1.0
git checkout v1.1.2
```

Change the directory to the tutorial code.
@@ -736,13 +736,13 @@ resource "spectrocloud_cluster_azure" "azure-cluster" {
machine_pool {
control_plane = true
control_plane_as_worker = true
name = "master-pool"
count = var.azure_master_nodes.count
instance_type = var.azure_master_nodes.instance_type
azs = var.azure-use-azs ? var.azure_master_nodes.azs : [""]
is_system_node_pool = var.azure_master_nodes.is_system_node_pool
name = "control-plane-pool"
count = var.azure_control_plane_nodes.count
instance_type = var.azure_control_plane_nodes.instance_type
azs = var.azure-use-azs ? var.azure_control_plane_nodes.azs : [""]
is_system_node_pool = var.azure_control_plane_nodes.is_system_node_pool
disk {
size_gb = var.azure_master_nodes.disk_size_gb
size_gb = var.azure_control_plane_nodes.disk_size_gb
type = "Standard_LRS"
}
}
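
For context, the `var.azure_control_plane_nodes` object referenced above is populated through the tutorial's Terraform
variables. A hypothetical `*.auto.tfvars` entry could look like the sketch below; the file name and every value shown
are illustrative assumptions rather than the tutorial's actual defaults.

```shell
# Hypothetical values for the control plane pool variable -- adjust to your Azure environment.
cat > control-plane.auto.tfvars <<'EOF'
azure_control_plane_nodes = {
  count               = 1
  instance_type       = "Standard_A8_v2"
  azs                 = ["1"]
  is_system_node_pool = false
  disk_size_gb        = 65
}
EOF

terraform init && terraform plan
```
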
6 changes: 3 additions & 3 deletions docs/docs-content/clusters/clusters.md
@@ -65,14 +65,14 @@ available to the users to apply to their existing clusters at a time convenient
### Kubernetes

Kubernetes components and configuration are hardened in accordance with the Kubernetes CIS Benchmark. Palette executes
Kubebench, a CIS Benchmark scanner by Aqua Security, for every Kubernetes pack to ensure the master and worker nodes are
configured securely.
Kubebench, a CIS Benchmark scanner by Aqua Security, for every Kubernetes pack to ensure the control plane and worker
nodes are configured securely.

### Cloud Infrastructure

Palette follows security best practices recommended by the various cloud providers when provisioning and configuring the
computing, network, and storage infrastructure for the Kubernetes clusters. These include practices such as isolating
master and worker nodes in dedicated network domains and limiting access through the use constructs like security
control plane and worker nodes in dedicated network domains and limiting access through the use of constructs like security
groups.

:::info
@@ -52,11 +52,11 @@ To deploy a new MAAS cluster:

9. Select a domain from the **Domain drop-down Menu** and click **Next**.

10. Configure the master and worker node pools. The following input fields apply to MAAS master and worker node pools.
For a description of input fields that are common across target platforms refer to the
10. Configure the control plane and worker node pools. The following input fields apply to MAAS control plane and worker
node pools. For a description of input fields that are common across target platforms refer to the
[Node Pools](../../cluster-management/node-pool.md) management page. Click **Next** when you are done.

#### Master Pool configuration
#### Control Plane Pool configuration

- Cloud configuration:

@@ -87,12 +87,12 @@ The following applies when replacing variables within curly braces in the YAML c

:::

9. In the Node pool configuration YAML files for the master and worker pools, edit the files to replace each occurrence
of the variables within curly braces listed in the tables below with values that apply to your Nutanix cloud
environment. You can configure scaling in the Palette UI by specifying the number of nodes in the pool. This
9. In the Node pool configuration YAML files for the control plane and worker pools, edit the files to replace each
occurrence of the variables within curly braces listed in the tables below with values that apply to your Nutanix
cloud environment. You can configure scaling in the Palette UI by specifying the number of nodes in the pool. This
corresponds to `replicas` in the YAML file.
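
As a concrete illustration of this step, the snippet below swaps a placeholder for a node count and then checks the
resulting `replicas` value. The file name and placeholder are made up for the example; use the names from your
Palette-provided YAML and the variable tables below.

```shell
# Hypothetical file and placeholder names -- substitute the ones from your environment.
sed -i 's/{{CONTROL_PLANE_NODE_COUNT}}/3/g' control-plane-pool.yaml

# The number of nodes in the pool maps to the replicas field.
grep -n "replicas" control-plane-pool.yaml
```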

#### Master Pool
#### Control Plane Pool

| **Variable** | **Description** |
| ---------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- |
10 changes: 5 additions & 5 deletions docs/docs-content/clusters/data-center/openstack.md
@@ -619,16 +619,16 @@ The following steps need to be performed to provision a new OpenStack cluster:
- Subnet CIDR
- DNS Name Server

5. Configure the master and worker node pools. Fill out the input fields in the **Add node pool** page. The following
table contains an explanation of the available input parameters.
5. Configure the control plane and worker node pools. Fill out the input fields in the **Add node pool** page. The
following table contains an explanation of the available input parameters.

### Master Pool
### Control Plane Pool

| **Parameter** | **Description** |
| ---------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Name** | A descriptive name for the node pool. |
| **Size** | Number of VMs to be provisioned for the node pool. For the master pool, this number can be 1, 3, or 5. |
| **Allow worker capability** | Select this option for allowing workloads to be provisioned on master nodes. |
| **Size** | Number of VMs to be provisioned for the node pool. For the control plane pool, this number can be 1, 3, or 5. |
| **Allow worker capability** | Select this option for allowing workloads to be provisioned on control plane nodes. |
| **[Labels](../cluster-management/taints.md#labels)** | Add a label to apply placement constraints on a pod, such as a node eligible for receiving the workload. |
| **[Taints](../cluster-management/taints.md#taints)** | To set toleration to pods and allow (but do not require) the pods to schedule onto nodes with matching taints. |
| **Instance type** | Select the compute instance type to be used for all nodes in the node pool. |