Language Clean-up #2089
Conversation
✅ Deploy Preview for docs-spectrocloud ready!
To edit notification comments on pull requests, go to your Netlify site configuration.
- all - pods are scheduled on both master and worker nodes
Kubernetes provides a way to schedule the pods on the control plane and worker nodes. Pack Constraints framework must
know where the pods are scheduled because the resource validation validates only the control plane machine pool when the
pods are scheduled on control plane nodes. Similarily, if the pods are scheduled on worker nodes, then only the worker
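The validation rule described in this diff can be sketched as follows. This is an illustrative sketch only; all function and field names are invented and are not Palette's or the Pack Constraints framework's actual API.

```python
# Hypothetical sketch of the validation logic described above: resource
# constraints are checked only against the machine pools that the pods can
# actually be scheduled on. Names here are invented for illustration.

def pools_to_validate(scheduling: str) -> list[str]:
    """Return which machine pools must satisfy the pack's resource constraints."""
    if scheduling == "control-plane":
        return ["control-plane"]
    if scheduling == "worker":
        return ["worker"]
    # "all": pods may be scheduled on both control plane and worker nodes
    return ["control-plane", "worker"]

def validate(pool_cpus: dict[str, int], required_cpu: int, scheduling: str) -> bool:
    """Check that every targeted pool offers at least `required_cpu` CPUs."""
    return all(pool_cpus[name] >= required_cpu
               for name in pools_to_validate(scheduling))
```

For example, a pack scheduled only on control plane nodes passes validation even when the worker pool is undersized, but the same pack scheduled on "all" nodes does not.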
🚫 [vale] reported by reviewdog 🐶
[Vale.Spelling] Did you really mean 'Similarily'?
docs/docs-content/clusters/edge/site-deployment/site-installation/cluster-deployment.md
[instance cost type](architecture.md#spot-instances), disk size, and the number of nodes. Click on **Next** after
you have completed configuring the node pool. The minimum number of CPUs and amount of memory depend on your cluster
profile, but in general you need at least 4 CPUs and 4 GB of memory both in the master pool and across all worker
pools.
profile, but in general you need at least 4 CPUs and 4 GB of memory both in the control plane pool and across all
🚫 [vale] reported by reviewdog 🐶
[Vale.Spelling] Did you really mean 'CPUs'?
least 4 CPUs and 4 GB of memory both in the master pool and across all worker pools.
number of nodes. Configure the control plane and worker node pools. A control plane and a worker node pool are
configured by default. The minimum number of CPUs and amount of memory depend on your cluster profile, but in
general you need at least 4 CPUs and 4 GB of memory both in the control plane pool and across all worker pools.
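The sizing guidance quoted in this diff can be expressed as a small check: the minimum applies to the control plane pool on its own and to the worker pools in aggregate. This is a sketch with invented names, not a real Palette validation.

```python
# Illustrative check of the sizing guidance above: at least 4 CPUs and 4 GB of
# memory in the control plane pool, and the same minimum summed across all
# worker pools. Function and field names are invented for this sketch.

MIN_CPU = 4
MIN_MEM_GB = 4

def meets_minimums(control_plane: dict, worker_pools: list[dict]) -> bool:
    cp_ok = (control_plane["cpus"] >= MIN_CPU
             and control_plane["memory_gb"] >= MIN_MEM_GB)
    worker_cpus = sum(p["cpus"] for p in worker_pools)
    worker_mem = sum(p["memory_gb"] for p in worker_pools)
    return cp_ok and worker_cpus >= MIN_CPU and worker_mem >= MIN_MEM_GB
```

Note that two worker pools of 2 CPUs / 2 GB each satisfy the minimum together, even though neither does alone.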
🚫 [vale] reported by reviewdog 🐶
[Vale.Spelling] Did you really mean 'CPUs'?
@@ -68,8 +68,8 @@ Ensure the following requirements are met before you attempt to deploy a cluster
11. The Node configuration page is where you can specify the availability zones (AZ), instance types, disk size, and the
number of nodes. Configure the worker node pool. The minimum number of CPUs and amount of memory depend on your
cluster profile, but in general you need at least 4 CPUs and 4 GB of memory both in the master pool and across all
worker pools.
cluster profile, but in general you need at least 4 CPUs and 4 GB of memory both in the control plane pool and
🚫 [vale] reported by reviewdog 🐶
[Vale.Spelling] Did you really mean 'CPUs'?
| WORKER POOL | 3 | AWS t2.large($0.0992/hour) | 60GB - gp2($0.00014/GB/hour) |
| MACHINE POOL | SIZE | INSTANCE TYPE WITH COST | ROOT DISK WITH COST |
| ------------- | ---- | --------------------------- | ---------------------------- |
| Control Plane | 3 | AWS t2.medium($0.0496/hour) | 60GB - gp2($0.00014/GB/hour) |
🚫 [vale] reported by reviewdog 🐶
[Google.Units] Put a nonbreaking space between the number and the unit in '60GB'.
| MACHINE POOL | SIZE | INSTANCE TYPE WITH COST | ROOT DISK WITH COST |
| ------------- | ---- | --------------------------- | ---------------------------- |
| Control Plane | 3 | AWS t2.medium($0.0496/hour) | 60GB - gp2($0.00014/GB/hour) |
| Worker Pool | 3 | AWS t2.large($0.0992/hour) | 60GB - gp2($0.00014/GB/hour) |
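As a worked example, the table's listed rates imply the following hourly cost for the whole cluster (rates are taken directly from the table; real AWS prices vary by region and over time):

```python
# Hourly cost implied by the table above: each pool costs
# size * (instance rate + disk size * per-GB disk rate).

def pool_hourly_cost(size, instance_rate, disk_gb, disk_rate_per_gb):
    return size * (instance_rate + disk_gb * disk_rate_per_gb)

control_plane = pool_hourly_cost(3, 0.0496, 60, 0.00014)  # 3x t2.medium + gp2
worker = pool_hourly_cost(3, 0.0992, 60, 0.00014)         # 3x t2.large + gp2
total = control_plane + worker
print(f"${total:.4f}/hour")  # → $0.4968/hour
```

So the six-node cluster in the table runs at roughly $0.50 per hour at these on-demand rates.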
🚫 [vale] reported by reviewdog 🐶
[Google.Units] Put a nonbreaking space between the number and the unit in '60GB'.
tool called KubeBench from Aqua Security to perform this scan. Scans are run against master and worker nodes of the
Kubernetes cluster, and a combined report is made available on the UI. Users can filter the report to view only the
master or worker results if required.
tool called KubeBench from Aqua Security to perform this scan. Scans are run against control plane and worker nodes of
🚫 [vale] reported by reviewdog 🐶
[spectrocloud.ableism] Avoid using ableism terms. Use 'issue' instead of 'run'.
@@ -65,14 +65,14 @@ available to the users to apply to their existing clusters at a time convenient
### Kubernetes
Kubernetes components and configuration are hardened in accordance with the Kubernetes CIS Benchmark. Palette executes
Kubebench, a CIS Benchmark scanner by Aqua Security, for every Kubernetes pack to ensure the master and worker nodes are
configured securely.
Kubebench, a CIS Benchmark scanner by Aqua Security, for every Kubernetes pack to ensure the control plane and worker
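The combine-then-filter behavior the docs describe (one merged report, filterable by node role in the UI) can be sketched like this. The data shapes are invented for illustration; real kube-bench output is JSON with a different schema.

```python
# Sketch of the combined-report behavior described above: per-node scan
# results are merged into one report, then filtered by node role, mirroring
# the UI's control-plane/worker filter. Field names are illustrative only.

def combine_reports(results: list[dict]) -> list[dict]:
    """Merge per-node scan results into one ordered report."""
    return sorted(results, key=lambda r: (r["role"], r["node"]))

def filter_by_role(report: list[dict], role: str) -> list[dict]:
    """View only the control-plane or only the worker findings."""
    return [r for r in report if r["role"] == role]
```

A caller would merge the per-node results once, then apply `filter_by_role` on demand as the user toggles the filter.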
🚫 [vale] reported by reviewdog 🐶
[Vale.Spelling] Did you really mean 'Kubebench'?
💔 Some backports could not be created
Note: Successful backport PRs will be merged automatically after passing CI.
Manual backport: To create the backport manually run:
Questions? Please refer to the Backport tool documentation and see the GitHub Action logs for details.
🎉 This PR is included in version 4.2.1 🎉 The release is available on GitHub. Your semantic-release bot 📦🚀
Describe the Change
This PR updates the documentation code base by removing all instances of master/slave. There is one instance that we cannot remove yet and that is due to PEM-4430
❗ Dependent on PR 34 ✅
Review Changes
💻 Preview URL
🎫 DOC-1033