diff --git a/docs/README.md b/docs/README.md
index dd3290b0f..821a63c25 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -16,7 +16,7 @@ $ yarn
$ yarn start
```
-This command starts a local development server and open up a browser window. Most changes are reflected live without having to restart the server.
+This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
### Build
diff --git a/docs/pages/advanced-topics/plugins-development.mdx b/docs/pages/advanced-topics/plugins-development.mdx
index d92951ae0..0da758e04 100644
--- a/docs/pages/advanced-topics/plugins-development.mdx
+++ b/docs/pages/advanced-topics/plugins-development.mdx
@@ -3,7 +3,7 @@ title: "Development tutorial"
sidebar_label: "Development tutorial"
---
-In this tutorial we will implement a ConfigMap syncer. Vcluster syncs ConfigMaps out of the box, but only those that are used by one of the pods created in vCluster. Here we will have a step-by-step look at a plugin implementation that will synchronize all ConfigMaps using the [vcluster plugin SDK](https://github.com/loft-sh/vcluster-sdk).
+In this tutorial, we will implement a ConfigMap syncer. Vcluster syncs ConfigMaps out of the box, but only those that are used by one of the pods created in vCluster. Here we will have a step-by-step look at a plugin implementation that will synchronize all ConfigMaps using the [vcluster plugin SDK](https://github.com/loft-sh/vcluster-sdk).
### Prerequisites
@@ -22,7 +22,7 @@ Check out the vCluster plugin example via:
git clone https://github.com/loft-sh/vcluster-plugin-example.git
```
-You'll see a bunch of files already created, but lets take a look at the `main.go` file:
+You'll see a bunch of files already created, but let's take a look at the `main.go` file:
```
package main
@@ -82,7 +82,7 @@ You can get more familiar with the interfaces mentioned above by reading the SDK
:::
-The `SyncDown` function mentioned above is called by the vCluster SDK when a given resource, e.g. a ConfigMap, is created in the vCluster, but it doesn't exist in the host cluster yet. To create a ConfigMap in the host cluster we will call the `SyncDownCreate` function with the output of the `translate` function as third parameter. This demonstrates a typical pattern used in the vCluster syncer implementations.
+The `SyncDown` function mentioned above is called by the vCluster SDK when a given resource, e.g. a ConfigMap, is created in the vCluster, but it doesn't exist in the host cluster yet. To create a ConfigMap in the host cluster, we will call the `SyncDownCreate` function with the output of the `translate` function as the third parameter. This demonstrates a typical pattern used in the vCluster syncer implementations.
```
func (s *configMapSyncer) SyncDown(ctx *syncercontext.syncercontext, vObj client.Object) (ctrl.Result, error) {
@@ -93,10 +93,10 @@ func (s *configMapSyncer) translate(vObj client.Object) *corev1.ConfigMap {
return s.TranslateMetadata(vObj).(*corev1.ConfigMap)
}
```
-The `TranslateMetadata` function used above produces a ConfigMap object that will be created in the host cluster. It is a deep copy of the ConfigMap from vCluster, but with certain metadata modifications - the name and labels are transformed, some vCluster labels and annotations are added, many metadata fields are stripped (uid, resourceVersion, etc.).
+The `TranslateMetadata` function used above produces a ConfigMap object that will be created in the host cluster. It is a deep copy of the ConfigMap from vCluster, but with certain metadata modifications - the name and labels are transformed, some vCluster labels and annotations are added, and many metadata fields are stripped (uid, resourceVersion, etc.).
-Next, we need to implement code that will handle the updates of the ConfigMap. When a ConfigMap in vCluster or host cluster is updated, the vCluster SDK will call the `Sync` function of the syncer. Current ConfigMap resource from the host cluster and from vCluster are passed as the second and third parameters respectively. In the implementation below, you can see another pattern used by the vCluster syncers. The `translateUpdate` function will return nil when no change to the ConfigMap in the host cluster is needed, and the `SyncDownUpdate` function will not do an unnecessary update API call in such case.
+Next, we need to implement code that will handle the updates of the ConfigMap. When a ConfigMap in vCluster or the host cluster is updated, the vCluster SDK will call the `Sync` function of the syncer. The current ConfigMap resources from the host cluster and from vCluster are passed as the second and third parameters, respectively. In the implementation below, you can see another pattern used by the vCluster syncers. The `translateUpdate` function will return nil when no change to the ConfigMap in the host cluster is needed, and the `SyncDownUpdate` function will not make an unnecessary update API call in that case.
```
@@ -129,14 +129,14 @@ func (s *configMapSyncer) translateUpdate(pObj, vObj *corev1.ConfigMap) *corev1.
}
```
-As you might have noticed, the changes to the Immutable field of the ConfigMap are not being checked and propagated to the updated ConfigMap. That is done just for the simplification of the code in this tutorial. In the real world use cases, there will likely be many scenarios and edge cases that you will need to handle differently than just with a simple comparison and assignment. For example, you will need to look out for label selectors that are interpreted in the host cluster, e.g. pod selectors in the NetworkPolicy resources are interpreted by the host cluster network plugin. Such selectors must be translated when synced down to the host resources. Several functions for the common use cases are [built into the SDK in the `syncer/translator` package](https://pkg.go.dev/github.com/loft-sh/vcluster-sdk/syncer/translator#pkg-functions), including the `TranslateLabelSelector` function.
+As you might have noticed, the changes to the Immutable field of the ConfigMap are not being checked and propagated to the updated ConfigMap. That is done just to simplify the code in this tutorial. In real-world use cases, there will likely be many scenarios and edge cases that you will need to handle differently than just with a simple comparison and assignment. For example, you will need to look out for label selectors that are interpreted in the host cluster, e.g. pod selectors in the NetworkPolicy resources are interpreted by the host cluster network plugin. Such selectors must be translated when synced down to the host resources. Several functions for the common use cases are [built into the SDK in the `syncer/translator` package](https://pkg.go.dev/github.com/loft-sh/vcluster-sdk/syncer/translator#pkg-functions), including the `TranslateLabelSelector` function.
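+For illustration, here is a rough sketch of how a hypothetical NetworkPolicy syncer could apply this. The struct and field handling are made up for this example and only mirror the metadata translation pattern from this tutorial, so double-check the exact `TranslateLabelSelector` signature against the SDK version you use:
+```
+import (
+	networkingv1 "k8s.io/api/networking/v1"
+
+	"github.com/loft-sh/vcluster-sdk/syncer/translator"
+)
+
+// networkPolicySyncer is a hypothetical syncer that embeds the SDK's
+// NamespacedTranslator, just like the configMapSyncer in this tutorial.
+type networkPolicySyncer struct {
+	translator.NamespacedTranslator
+}
+
+// translate builds the host cluster NetworkPolicy from the virtual one and
+// rewrites the pod selector so that it matches the translated labels of the
+// synced pods in the host namespace.
+func (s *networkPolicySyncer) translate(vObj *networkingv1.NetworkPolicy) *networkingv1.NetworkPolicy {
+	pObj := s.TranslateMetadata(vObj).(*networkingv1.NetworkPolicy)
+	pObj.Spec.PodSelector = *translator.TranslateLabelSelector(&vObj.Spec.PodSelector)
+	return pObj
+}
+```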
-Also, notice that this example lacks the updates to the ConfigMap resource in vCluster. Here we propagate the changes only down to the ConfigMap in the host cluster, but there are resources or use cases where a syncer would update the synced resource in vCluster. For example, this might be an update of the status subresource or synchronization of any other field that some controller sets on the host side, e.g., finalizers. Implementation of such updates needs to be considered on case-by-case basis.
+Also, notice that this example lacks the updates to the ConfigMap resource in vCluster. Here we propagate the changes only down to the ConfigMap in the host cluster, but there are resources or use cases where a syncer would update the synced resource in vCluster. For example, this might be an update of the status subresource or synchronization of any other field that some controller sets on the host side, e.g., finalizers. Implementation of such updates needs to be considered on a case-by-case basis.
For some use cases, you may need to sync the resources in the opposite direction, from the host cluster up into the vCluster, or even in both directions. If that is what your plugin needs to do, you will implement the [`UpSyncer`](https://pkg.go.dev/github.com/loft-sh/vcluster-sdk/syncer#UpSyncer) interface defined by the SDK.
### Adding a hook for changing a resource on the fly
-Hooks are a great feature to adjust current syncing behaviour of vCluster without the need to override an already existing syncer in vCluster completely. They allow you to change outgoing objects of vCluster similar to an mutating admission controller in Kubernetes. Requirement for an hook to work correctly is that vCluster itself would sync the resource, so hooks only work for the core resources that are synced by vCluster such as pods, services, secrets etc.
+Hooks are a great feature to adjust the current syncing behaviour of vCluster without the need to completely override an already existing syncer in vCluster. They allow you to change outgoing objects of vCluster, similar to a mutating admission controller in Kubernetes. The requirement for a hook to work correctly is that vCluster itself syncs the resource, so hooks only work for the core resources that are synced by vCluster, such as pods, services, secrets, etc.
To add a hook to your plugin, you simply need to create a new struct that implements the `ClientHook` interface:
diff --git a/docs/pages/advanced-topics/plugins-overview.mdx b/docs/pages/advanced-topics/plugins-overview.mdx
index e7ae5f5d6..2bbde94eb 100644
--- a/docs/pages/advanced-topics/plugins-overview.mdx
+++ b/docs/pages/advanced-topics/plugins-overview.mdx
@@ -45,9 +45,9 @@ For this use case you can label resources vCluster should ignore either on the p
### Plugin Hooks
-Plugin hooks are a great feature to adjust current syncing behaviour of vCluster without the need to override an already existing syncer in vCluster completely.
+Plugin hooks are a great feature to adjust the current syncing behaviour of vCluster without the need to override an already existing syncer in vCluster completely.
They allow you to change outgoing objects of vCluster, similar to a mutating admission controller in Kubernetes.
-Requirement for an hook to work correctly is that vCluster itself would sync the resource, so hooks only work for the core resources that are synced by vCluster such as pods, services, secrets etc.
+The requirement for a hook to work correctly is that vCluster itself syncs the resource, so hooks only work for the core resources that are synced by vCluster, such as pods, services, secrets, etc.
If a plugin registers a hook to a specific resource, vCluster will forward all requests that match the plugin's defined hooks to the plugin and the plugin can then adjust or even deny the request completely.
This opens up a wide variety of adjustment possibilities for plugins, where you for example only want to add a custom label or annotation.
@@ -59,7 +59,7 @@ If you want to start developing your own vCluster plugins, it is recommended tha
:::
vCluster provides an [SDK](https://github.com/loft-sh/vcluster-sdk) for writing plugin controllers that abstracts a lot of the syncer complexity away from the user, but still gives you access to the underlying structures if you need it.
-Internally, the vCluster SDK uses the popular [controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) project, that is used by vCluster itself to create the controllers.
+Internally, the vCluster SDK uses the popular [controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) project, which is used by vCluster itself to create the controllers.
The vCluster SDK lets you write custom plugin controllers with just a few lines of code.
Since the plugin SDK interfaces are mostly compatible with the vCluster syncers, you can also take a look at how those are implemented in [the vCluster itself](https://github.com/loft-sh/vcluster/tree/main/pkg/controllers/resources), which work in most cases the same way as if those would be implemented in a plugin.
@@ -91,7 +91,7 @@ plugin:
# ...
```
-The `plugin.yaml` is a valid helm values file used to define the plugin's sidecar configuration and additional RBAC rules needed to function properly. If you want to distribute that plugin for others, it's also possible to install a plugin through an URL:
+The `plugin.yaml` is a valid helm values file used to define the plugin's sidecar configuration and additional RBAC rules needed to function properly. If you want to distribute that plugin to others, it's also possible to install a plugin through a URL:
```
# Install a plugin with a local plugin.yaml
diff --git a/docs/pages/architecture/control_plane/control_plane.mdx b/docs/pages/architecture/control_plane/control_plane.mdx
index 16e677a8c..d0628f262 100644
--- a/docs/pages/architecture/control_plane/control_plane.mdx
+++ b/docs/pages/architecture/control_plane/control_plane.mdx
@@ -3,7 +3,7 @@ title: vCluster Control Plane
sidebar_label: vCluster Control Plane
---
-This container contains API server, controller manager and a connection (or mount) of the data store. By default, vClusters use sqlite as data store and run the API server and controller manager of k3s, which is a certified Kubernetes distribution and CNCF sandbox project. You can also use a [different data store, such as etcd, mysql or postgresql](../../deploying-vclusters/persistence.mdx). You are also able to use another Kubernetes distribution as backing virtual cluster, such as [k0s or vanilla k8s](../../using-vclusters/access.mdx).
+This container contains an API server, a controller manager, and a connection (or mount) to the data store. By default, vClusters use sqlite as their data store and run the API server and controller manager of k3s, which is a certified Kubernetes distribution and CNCF sandbox project. You can also use a [different data store, such as etcd, mysql or postgresql](../../deploying-vclusters/persistence.mdx). You are also able to use another Kubernetes distribution as a backing virtual cluster, such as [k0s or vanilla k8s](../../using-vclusters/access.mdx).
Each vCluster has its own control plane consisting of:
- **Kubernetes API** server (point your kubectl requests to this vCluster API server)
diff --git a/docs/pages/architecture/control_plane/k8s_distros.mdx b/docs/pages/architecture/control_plane/k8s_distros.mdx
index 71974af80..6a4eb67f4 100644
--- a/docs/pages/architecture/control_plane/k8s_distros.mdx
+++ b/docs/pages/architecture/control_plane/k8s_distros.mdx
@@ -4,15 +4,15 @@ sidebar_label: Kubernetes distributions
---
-By default, vCluster will use [k3s](https://github.com/k3s-io/k3s) as virtual Kubernetes cluster, which is a highly available, certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances.
+By default, vCluster will use [k3s](https://github.com/k3s-io/k3s) as a virtual Kubernetes cluster, which is a highly available, certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances.
-However, vCluster is not tied to a specific distribution and should work with all certified Kubernetes distributions. By default, we recommend to use k3s, because it has a small footprint and widely adopted, but if your use case requires a different k8s distribution, vCluster currently also supports k0s or vanilla k8s. If that is also not enough, you can also add your custom Kubernetes distribution as outlined below.
+However, vCluster is not tied to a specific distribution and should work with all certified Kubernetes distributions. By default, we recommend using k3s, because it has a small footprint and is widely adopted, but if your use case requires a different k8s distribution, vCluster currently also supports k0s or vanilla k8s. If that is not enough, you can also add your custom Kubernetes distribution as outlined below.
## k0s
-[k0s](https://github.com/k0sproject/k0s) is an all-inclusive Kubernetes distribution, which is configured with all of the features needed to build a Kubernetes cluster and packaged as a single binary for ease of use. vCluster supports k0s as backing virtual Kubernetes cluster.
+[k0s](https://github.com/k0sproject/k0s) is an all-inclusive Kubernetes distribution, which is configured with all of the features needed to build a Kubernetes cluster and packaged as a single binary for ease of use. vCluster supports k0s as a backing virtual Kubernetes cluster.
-In order to use k0s as backing cluster, create a vCluster with the following command:
+In order to use k0s as a backing cluster, create a vCluster with the following command:
```
vcluster create my-vcluster --distro k0s
@@ -24,13 +24,13 @@ kubectl get ns
...
```
-Behind the scenes a different helm chart will be deployed (`vcluster-k0s`), that holds specific configuration to support k0s. Check the [github repository](https://github.com/loft-sh/vcluster/tree/main/charts/k0s) for all available chart options.
+Behind the scenes, a different helm chart will be deployed (`vcluster-k0s`), which holds specific configuration to support k0s. Check the [GitHub repository](https://github.com/loft-sh/vcluster/tree/main/charts/k0s) for all available chart options.
## Vanilla k8s
-When choosing this option, vCluster will deploy a separate etcd cluster, kubernetes controller manager and api server alongside the vCluster hypervisor.
+When choosing this option, vCluster will deploy a separate etcd cluster, Kubernetes controller manager, and API server alongside the vCluster hypervisor.
-In order to use vanilla k8s as backing cluster, create a vCluster with the following command:
+In order to use vanilla k8s as a backing cluster, create a vCluster with the following command:
```
vcluster create my-vcluster --distro k8s
@@ -42,15 +42,15 @@ kubectl get ns
...
```
-Behind the scenes a different helm chart will be deployed (`vcluster-k8s`), that holds specific configuration to support vanilla k8s. Check the [github repository](https://github.com/loft-sh/vcluster/tree/main/charts/k8s) for all available chart options.
+Behind the scenes, a different helm chart will be deployed (`vcluster-k8s`), which holds specific configuration to support vanilla k8s. Check the [GitHub repository](https://github.com/loft-sh/vcluster/tree/main/charts/k8s) for all available chart options.
## Other Distributions
vCluster has no dependencies on any specific Kubernetes distribution, so you should be able to run it with most certified Kubernetes distributions.
-One requirement vCluster has, is that the distribution can be deployed without a scheduler and kubelet, meaning that vCluster just requires the api server, controller manager and data storage of the distribution.
+One requirement vCluster has is that the distribution can be deployed without a scheduler and kubelet, meaning that vCluster just requires the API server, controller manager, and data storage of the distribution.
-For single binary distributions, such as k3s or k0s, extra bundled components can usually be disabled through flags, for multi binary distributions, such as vanilla k8s, you just need to deploy the virtual control plane with api server, controller manager and usually etcd.
-Most multi binary distributions work by just overriding the images of the k8s chart in a `values.yaml`, e.g.:
+For single-binary distributions, such as k3s or k0s, extra bundled components can usually be disabled through flags; for multi-binary distributions, such as vanilla k8s, you just need to deploy the virtual control plane with the API server, controller manager, and usually etcd.
+Most multi-binary distributions work by just overriding the images of the k8s chart in a `values.yaml`, e.g.:
```yaml
api:
@@ -67,12 +67,12 @@ And then deploy vCluster with:
vcluster create my-vcluster -n test --distro k8s -f values.yaml
```
-If you want to create a separate chart for the Kubernetes distribution, a good starting point is to copy one of [our distro charts](https://github.com/loft-sh/vcluster/tree/main/charts) and then modifying it to work with your distribution.
+If you want to create a separate chart for the Kubernetes distribution, a good starting point is to copy one of [our distro charts](https://github.com/loft-sh/vcluster/tree/main/charts) and then modify it to work with your distribution.
vCluster only needs the following information from the virtual Kubernetes distribution to function properly:
-1. The api server central authority certificate (usually found at `/pki/ca.crt`)
-2. The api server central authority key (usually found at `/pki/ca.key`)
+1. The API server central authority certificate (usually found at `/pki/ca.crt`)
+2. The API server central authority key (usually found at `/pki/ca.key`)
3. An admin kube config to contact the virtual Kubernetes control plane (usually found at `/pki/admin.conf`)
-For multi binary distributions, vCluster can even create those with a pre-install hook as found in the [k8s chart](https://github.com/loft-sh/vcluster/tree/main/charts/k8s/templates).
+For multi-binary distributions, vCluster can even create those with a pre-install hook as found in the [k8s chart](https://github.com/loft-sh/vcluster/tree/main/charts/k8s/templates).
-In general, if you need vCluster to support another Kubernetes distribution, we are always happy to help you or accept a pull request in our github repository.
+In general, if you need vCluster to support another Kubernetes distribution, we are always happy to help you or accept a pull request in our GitHub repository.
diff --git a/docs/pages/architecture/overview.mdx b/docs/pages/architecture/overview.mdx
index 4676e4035..8b6d22d25 100644
--- a/docs/pages/architecture/overview.mdx
+++ b/docs/pages/architecture/overview.mdx
@@ -41,14 +41,14 @@ vClusters should be as lightweight as possible to minimize resource overhead ins
**Implementation:** This is mainly achieved by bundling the vCluster inside a single Pod using k3s as a control plane.
### 2. No Performance Degradation
-Workloads running inside a vCluster (even inside [nested vClusters](#host-cluster--namespace)) should run with the same performance as workloads which are running directly on the underlying host cluster. The computing power, the access to underlying persistent storage as well as the network performance should not be degraded at all.
+Workloads running inside a vCluster (even inside [nested vClusters](#host-cluster--namespace)) should run with the same performance as workloads that are running directly on the underlying host cluster. The computing power, the access to underlying persistent storage as well as the network performance should not be degraded at all.
-**Implementation:** This is mainly achieved by synchronizing pods which means that the pods are actually being scheduled and started just like regular pods of the underlying host cluster, i.e. if you run a pod inside the vCluster and you run the same pod directly on the host cluster will be exactly the same in terms of computing power, storage access and networking.
+**Implementation:** This is mainly achieved by synchronizing pods, which means that the pods are actually being scheduled and started just like regular pods of the underlying host cluster, i.e. a pod running inside the vCluster and the same pod running directly on the host cluster will be exactly the same in terms of computing power, storage access, and networking.
### 3. Reduce Requests On Host Cluster
vClusters should greatly reduce the number of requests to the Kubernetes API server of the underlying [host cluster](#host-cluster--namespace) by ensuring that all high-level resources remain in the virtual cluster only without ever reaching the underlying host cluster.
-**Implementation:** This is mainly achieved by using a separate API server which handles all requests to the vCluster and a separate data store which stores all objects inside the vCluster. Only the syncer synchronizes very few low-level resources to the underlying cluster which requires very few API server requests. All of this happens in an asynchronous, non-blocking fashion (as pretty much everything in Kubernetes is desgined to be).
+**Implementation:** This is mainly achieved by using a separate API server that handles all requests to the vCluster and a separate data store that stores all objects inside the vCluster. Only the syncer synchronizes very few low-level resources to the underlying cluster which requires very few API server requests. All of this happens in an asynchronous, non-blocking fashion (as pretty much everything in Kubernetes is designed to be).
### 4. Flexible & Easy Provisioning
vCluster should not make any assumptions about how it is being provisioned. Users should be able to create vClusters on top of any Kubernetes cluster without requiring the installation of any server-side component to provision the vClusters, i.e. provisioning should be possible with any client-only deployment tool (vcluster CLI, helm, kubectl, kustomize, ...). An operator or CRDs may be added to manage vClusters (e.g. using Argo to provision vClusters) but a server-side management plane should never be required for spinning up a vCluster.
@@ -68,4 +68,4 @@ Each vCluster and all the workloads and data inside the vCluster should be encap
### 7. Easy Cleanup
vClusters should not have any hard wiring with the underlying cluster. Deleting a vCluster or merely deleting the vCluster's [host namespace](#host-cluster--namespace) should always be possible without any negative impacts on the underlying cluster (no namespaces stuck in terminating state or anything comparable) and should always guarantee that all vCluster-related resources are being deleted cleanly and immediately without leaving any orphan resources behind.
-**Implementation:** This is mainly achieved by not adding any control plane or server-side elements to the provisioning of vClusters. A vCluster is just a StatefulSet and few other Kubernetes resources. All synchronized resources in the host namespace have an appropriate owner reference, that means if you delete the vCluster itself, everything that belongs to the vCluster will be automatically deleted by Kubernetes as well (this is a similar mechanism as Deployments and StatefulSets use to clean up their Pods).
+**Implementation:** This is mainly achieved by not adding any control plane or server-side elements to the provisioning of vClusters. A vCluster is just a StatefulSet and a few other Kubernetes resources. All synchronized resources in the host namespace have an appropriate owner reference, which means if you delete the vCluster itself, everything that belongs to the vCluster will be automatically deleted by Kubernetes as well (this is similar to the mechanism Deployments and StatefulSets use to clean up their Pods).
diff --git a/docs/pages/architecture/syncer/single_vs_multins.mdx b/docs/pages/architecture/syncer/single_vs_multins.mdx
index 65937fcc0..dcd54c929 100644
--- a/docs/pages/architecture/syncer/single_vs_multins.mdx
+++ b/docs/pages/architecture/syncer/single_vs_multins.mdx
@@ -8,7 +8,7 @@ sidebar_label: Single vs Multi-Namespace Sync
vcluster Multi-Namespace Architecture
-In this mode vCluster diverges from the [architecture described previously](../overview.mdx). By default, all namespaced resources that need to be synced to the host cluster are created in the namespace where vCluster is installed. But in multi-namespace mode vCluster will create a namespace in the host cluster for each namespace in the virtual cluster. The namespace name is modified to avoid conflicts between multiple vCluster instances in the same host, but the synced namespaced resources are created with the same name as in the virtual cluster. To enable this mode use the following helm value:
+In this mode, vCluster diverges from the [architecture described previously](../overview.mdx). By default, all namespaced resources that need to be synced to the host cluster are created in the namespace where vCluster is installed. But in multi-namespace mode, vCluster will create a namespace in the host cluster for each namespace in the virtual cluster. The namespace name is modified to avoid conflicts between multiple vCluster instances in the same host, but the synced namespaced resources are created with the same name as in the virtual cluster. To enable this mode, use the following helm value:
```yaml
multiNamespaceMode:
diff --git a/docs/pages/deploying-vclusters/init-charts.mdx b/docs/pages/deploying-vclusters/init-charts.mdx
index 38022f9e7..3e073d58c 100644
--- a/docs/pages/deploying-vclusters/init-charts.mdx
+++ b/docs/pages/deploying-vclusters/init-charts.mdx
@@ -13,7 +13,7 @@ The `init.helm[].chart.version` scheme only supports absolute versions and not a
:::
### Upstream Mode
-This is the most straight forward approach of applying a helm chart existing in and public/private upstream chart repository. The following examples demonstrate the usage:
+This is the most straightforward approach of applying a helm chart that exists in a public or private upstream chart repository. The following examples demonstrate the usage:
```
init:
helm:
@@ -47,7 +47,7 @@ If you're interested in applying a local chart directory, or a chart pulled from
cat my-chart.tar.gz | base64 | pbcopy
```
-Next we can paste the bundle in our values file:
+Next, we can paste the bundle in our values file:
```
init:
helm:
diff --git a/docs/pages/deploying-vclusters/integrations-openshift.mdx b/docs/pages/deploying-vclusters/integrations-openshift.mdx
index 077d69a85..953786b4b 100644
--- a/docs/pages/deploying-vclusters/integrations-openshift.mdx
+++ b/docs/pages/deploying-vclusters/integrations-openshift.mdx
@@ -7,7 +7,7 @@ import NonRootSegment from '../fragments/non-root-vcluster.mdx'
import OpenshiftSegment from '../fragments/deploy-to-openshift.mdx'
-By default, OpenShift doesn't allow running containers with the root user, but it assigns a random UID from the allowed range automatically, which means that you can skip the steps described in the [Running as non-root user](../security/rootless-mode.mdx) section of this document and your vCluster should run as non-root user by default.
+By default, OpenShift doesn't allow running containers with the root user, but it assigns a random UID from the allowed range automatically, which means that you can skip the steps described in the [Running as non-root user](../security/rootless-mode.mdx) section of this document and your vCluster should run as a non-root user by default.
OpenShift also imposes some restrictions that are not common to other Kubernetes distributions.
When deploying vCluster to OpenShift you will need to follow these additional steps:
@@ -16,6 +16,6 @@ When deploying vCluster to OpenShift you will need to follow these additional st
:::info Additional permission when running on OpenShift
vCluster requires `create` permission for the `endpoints/restricted` resource in the default group when running on OpenShift.
-This permission is required because OpenShift has additional built-in admission controller for the Endpoint resources, which denies creation of the endpoints pointing into the cluster network or service network CIDR ranges, unless this additional permission is given.
-Following the steps outline above ensures that the vCluster Role includes this permission, as it is necessary for certain networking features.
+This permission is required because OpenShift has an additional built-in admission controller for the Endpoint resources, which denies the creation of the endpoints pointing into the cluster network or service network CIDR ranges unless this additional permission is given.
+Following the steps outlined above ensures that the vCluster Role includes this permission, as it is necessary for certain networking features.
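+For reference, the extra entry in the vCluster Role would look roughly like the following sketch (the exact rule generated by the steps above may be formatted differently):
+```yaml
+# grants create on endpoints/restricted in the core ("") API group
+- apiGroups: [""]
+  resources: ["endpoints/restricted"]
+  verbs: ["create"]
+```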
:::
diff --git a/docs/pages/deploying-vclusters/persistence.mdx b/docs/pages/deploying-vclusters/persistence.mdx
index 5f4ad08af..8aaa347ae 100644
--- a/docs/pages/deploying-vclusters/persistence.mdx
+++ b/docs/pages/deploying-vclusters/persistence.mdx
@@ -42,7 +42,7 @@ This method should only be used for testing purposes, as data will be lost upon
If you want to use an external datastore such as PostgreSQL, MySQL, or etcd you must set the `K3S_DATASTORE_ENDPOINT` environment variable of the vCluster container so that K3s knows how to connect to it. You may also specify environment variables to configure the authentication and encryption of the connection. The following environment variables are available:
-* **K3S_DATASTORE_ENDPOINT**: Specify a PostgresSQL, MySQL, or etcd connection string. This is a string used to describe the connection to the datastore. The structure of this string is specific to each backend and is detailed below.
+* **K3S_DATASTORE_ENDPOINT**: Specify a PostgreSQL, MySQL, or etcd connection string. This is a string used to describe the connection to the datastore. The structure of this string is specific to each backend and is detailed below.
* **K3S_DATASTORE_CAFILE**: TLS Certificate Authority (CA) file used to help secure communication with the datastore. If your datastore serves requests over TLS using a certificate signed by a custom certificate authority, you can specify that CA using this parameter so that the K3s client can properly verify the certificate.
* **K3S_DATASTORE_CERTFILE**: TLS certificate file used for client certificate based authentication to your datastore. To use this feature, your datastore must be configured to support client certificate based authentication. If you specify this parameter, you must also specify the `K3S_DATASTORE_KEYFILE` parameter.
* **K3S_DATASTORE_KEYFILE**: TLS key file used for client certificate based authentication to your datastore. See the previous `K3S_DATASTORE_CERTFILE` parameter for more details.
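+As a sketch of how these variables can be wired up, the following hypothetical `values.yaml` snippet assumes your chart version exposes `vcluster.env` for extra container environment variables and uses the MySQL connection string format from the K3s documentation (TLS-related files would additionally need to be mounted into the container):
+```yaml
+vcluster:
+  env:
+    # hypothetical example endpoint - replace with your own datastore connection string
+    - name: K3S_DATASTORE_ENDPOINT
+      value: "mysql://username:password@tcp(hostname:3306)/database-name"
+```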
diff --git a/docs/pages/deploying-vclusters/supported-distros.mdx b/docs/pages/deploying-vclusters/supported-distros.mdx
index c307449f3..5ca70944b 100644
--- a/docs/pages/deploying-vclusters/supported-distros.mdx
+++ b/docs/pages/deploying-vclusters/supported-distros.mdx
@@ -3,7 +3,7 @@ title: Supported distributions
sidebar_label: Supported distributions
---
-By default, vCluster will use [k3s](https://github.com/k3s-io/k3s) as the virtual Kubernetes cluster. However, it is not tied to a specific distribution and should work with all certified Kubernetes distributions. By default, we recommend to use k3s, because it has a small footprint and widely adopted, but if your use case requires a different k8s distribution, vCluster currently also supports k0s or vanilla k8s. If that is also not enough, you can also add your custom Kubernetes distribution as outlined below.
+By default, vCluster will use [k3s](https://github.com/k3s-io/k3s) as the virtual Kubernetes cluster. However, it is not tied to a specific distribution and should work with all certified Kubernetes distributions. By default, we recommend using k3s, because it has a small footprint and is widely adopted, but if your use case requires a different k8s distribution, vCluster currently also supports k0s or vanilla k8s. If that is not enough, you can also add your custom Kubernetes distribution as outlined below.
## k3s
@@ -21,7 +21,7 @@ kubectl get ns
...
```
-Behind the scenes the default helm chart will be deployed, that holds specific configuration to support k3s. Check the [github repository](https://github.com/loft-sh/vcluster/tree/main/charts/k3s) for all available chart options.
+Behind the scenes, the default helm chart will be deployed, which holds specific configuration to support k3s. Check the [GitHub repository](https://github.com/loft-sh/vcluster/tree/main/charts/k3s) for all available chart options.
## k0s
@@ -45,7 +45,7 @@ Behind the scenes a different helm chart will be deployed (`vcluster-k0s`), that
When choosing this option, vCluster will deploy a separate etcd cluster, kubernetes controller manager and api server alongside the vCluster hypervisor.
-In order to use vanilla k8s as backing cluster, create a vCluster with the following command:
+In order to use vanilla k8s as a backing cluster, create a vCluster with the following command:
```
vcluster create my-vcluster --distro k8s
@@ -57,7 +57,7 @@ kubectl get ns
...
```
-Behind the scenes a different helm chart will be deployed (`vcluster-k8s`), that holds specific configuration to support vanilla k8s. Check the [github repository](https://github.com/loft-sh/vcluster/tree/main/charts/k8s) for all available chart options.
+Behind the scenes, a different helm chart will be deployed (`vcluster-k8s`), which holds specific configuration to support vanilla k8s. Check the [GitHub repository](https://github.com/loft-sh/vcluster/tree/main/charts/k8s) for all available chart options.
## eks
@@ -76,7 +76,7 @@ kubectl get ns
...
```
-Behind the scenes a different helm chart will be deployed (`vcluster-eks`), that holds specific configuration to support vanilla k8s. Check the [github repository](https://github.com/loft-sh/vcluster/tree/main/charts/eks) for all available chart options.
+Behind the scenes, a different helm chart will be deployed (`vcluster-eks`), which holds specific configuration to support EKS. Check the [GitHub repository](https://github.com/loft-sh/vcluster/tree/main/charts/eks) for all available chart options.
## Other Distributions
@@ -102,7 +102,7 @@ And then deploy vCluster with:
vcluster create my-vcluster -n test --distro k8s -f values.yaml
```
-If you want to create a separate chart for the Kubernetes distribution, a good starting point is to copy one of [our distro charts](https://github.com/loft-sh/vcluster/tree/main/charts) and then modifying it to work with your distribution.
+If you want to create a separate chart for the Kubernetes distribution, a good starting point is to copy one of [our distro charts](https://github.com/loft-sh/vcluster/tree/main/charts) and then modify it to work with your distribution.
vCluster only needs the following information from the virtual Kubernetes distribution to function properly:
1. The api server central authority certificate (usually found at `/pki/ca.crt`)
2. The api server central authority key (usually found at `/pki/ca.key`)
@@ -110,4 +110,4 @@ vCluster only needs the following information from the virtual Kubernetes distri
For multi binary distributions, vCluster can even create those with a pre-install hook as found in the [k8s chart](https://github.com/loft-sh/vcluster/tree/main/charts/k8s/templates).
-In general, if you need vCluster to support another Kubernetes distribution, we are always happy to help you or accept a pull request in our github repository.
+In general, if you need vCluster to support another Kubernetes distribution, we are always happy to help you or accept a pull request in our GitHub repository.
diff --git a/docs/pages/getting-started/cleanup.mdx b/docs/pages/getting-started/cleanup.mdx
index 9f1443e34..be070fc42 100644
--- a/docs/pages/getting-started/cleanup.mdx
+++ b/docs/pages/getting-started/cleanup.mdx
@@ -8,5 +8,5 @@ import DeleteFragment from '../fragments/delete-vcluster.mdx'
:::caution Resources inside vClusters
-Deleting a vCluster will also delete all objects within and all state related to the vCluster.
+Deleting a vCluster will also delete all objects within it and all state related to the vCluster.
:::
diff --git a/docs/pages/help&tutorials/bootstrapping.mdx b/docs/pages/help&tutorials/bootstrapping.mdx
index 27a6a8b57..df8709393 100644
--- a/docs/pages/help&tutorials/bootstrapping.mdx
+++ b/docs/pages/help&tutorials/bootstrapping.mdx
@@ -86,7 +86,7 @@ done √ Switched active kube context to vcluster_init-tutorial_vcluster-init-tu
```
The exact output will depend a bit on how your host cluster is set up.
-If your host cluster is local (Docker Desktop, Minikube, etc.), vcluster will configure it to use a NodePort and connect to it automaticaly.
+If your host cluster is local (Docker Desktop, Minikube, etc.), vcluster will configure it to use a NodePort and connect to it automatically.
For remote clusters, vcluster will connect using port forwarding.
@@ -100,7 +100,7 @@ NAMESPACE NAME READY STATUS RESTARTS A
nginx nginx-deployment-775b6549b5-nr8jh 1/1 Running 0 68s
nginx nginx-deployment-775b6549b5-vcx7w 1/1 Running 0 68s
```
-In this example the two NGINX pods are running in the nginx namespace. The pods and the namespace were created automatically as soon as the vcluster was created.
+In this example, the two NGINX pods are running in the nginx namespace. The pods and the namespace were created automatically as soon as the vcluster was created.
Now let’s disconnect from the virtual cluster and delete it to clean up.
@@ -113,7 +113,7 @@ vcluster delete init-tutorial
```
# Apply a Helm chart on initialization
-Now let’s create a NGIX namespace and deployment again, but this time instead of creating the deployment manually, we’ll install NGINX with a public Helm chart.
+Now let’s create an NGINX namespace and deployment again, but this time, instead of creating the deployment manually, we’ll install NGINX with a public Helm chart.
First, edit the values.yaml. Remove the previous contents and set them to:
```
diff --git a/docs/pages/networking/internal_traffic/vcluster_to_host.mdx b/docs/pages/networking/internal_traffic/vcluster_to_host.mdx
index c8e2f30d2..5c6e2def3 100644
--- a/docs/pages/networking/internal_traffic/vcluster_to_host.mdx
+++ b/docs/pages/networking/internal_traffic/vcluster_to_host.mdx
@@ -3,7 +3,7 @@ title: Map vCluster Service to Host Cluster Service
sidebar_label: From vCluster to Host
---
-It is also possible to map a virtual cluster service to an host cluster service. This is especially useful if you want to expose an application that runs inside the virtual cluster to other workloads running in the host cluster. This makes it also easier to share services across vCluster's.
+It is also possible to map a virtual cluster service to a host cluster service. This is especially useful if you want to expose an application that runs inside the virtual cluster to other workloads running in the host cluster. It also makes it easier to share services across vClusters.
For example, to map a virtual service `my-virtual-service` in the namespace `my-virtual-namespace` to the vCluster host namespace service `my-host-service`, you can use the following config in your `values.yaml`:
```yaml
@@ -13,4 +13,4 @@ mapServices:
to: my-host-service
```
-With this configuration, vCluster will manage a service called `my-host-service` inside the namespace where the vCluster workloads are synced to, which points to the virtual service `my-virtual-service` in namespace `my-virtual-namespace` inside the vCluster. So pods in the host cluster will be able to access the virtual service via e.g. `curl http://my-host-service`.
\ No newline at end of file
+With this configuration, vCluster will manage a service called `my-host-service` inside the namespace where the vCluster workloads are synced, which points to the virtual service `my-virtual-service` in namespace `my-virtual-namespace` inside the vCluster. So pods in the host cluster will be able to access the virtual service via e.g. `curl http://my-host-service`.
\ No newline at end of file
diff --git a/docs/pages/networking/network_policies.mdx b/docs/pages/networking/network_policies.mdx
index 61f9b2eb6..777676fcb 100644
--- a/docs/pages/networking/network_policies.mdx
+++ b/docs/pages/networking/network_policies.mdx
@@ -3,7 +3,7 @@ title: Network Policies
sidebar_label: Network Policies
---
-Kubernetes has a [Network Policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) resource type that allows creation of the rules that govern how pods communicate with each other.
+Kubernetes has a [Network Policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) resource type that allows the creation of rules that govern how pods communicate with each other.
By default, vCluster ignores these resources. However, once you enable synchronization of the Network Policies, vCluster will ensure correct policies are created in the host cluster to achieve the desired traffic behaviour.
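+As a minimal sketch, enabling this synchronization typically comes down to a `values.yaml` entry like the following (the exact flag name, assumed here to be `sync.networkpolicies.enabled`, may differ between vCluster versions):
+```yaml
+sync:
+  networkpolicies:
+    enabled: true
+```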
diff --git a/docs/pages/o11y/logging/central_hpm.mdx b/docs/pages/o11y/logging/central_hpm.mdx
index 5cffff8d7..f8db177dd 100644
--- a/docs/pages/o11y/logging/central_hpm.mdx
+++ b/docs/pages/o11y/logging/central_hpm.mdx
@@ -3,8 +3,8 @@ title: Centralized Hostpath Mapper
sidebar_label: Centralized Hostpath Mapper
---
-This feature is an extention to the existing [hostpath mapper](./hpm.mdx) component of vCluster.
-Currently when enabled, hostpath mapper can support the following usecases:
+This feature is an extension to the existing [hostpath mapper](./hpm.mdx) component of vCluster.
+Currently, when enabled, the hostpath mapper supports the following use cases:
1. [Enabling container based logging used by tools like fluentd, logstash etc.](./elk_stack.mdx) inside vCluster
2. [Enabling pod based logging used by loki](./grafana_loki.mdx) inside vCluster
3. [Velero restic backups] inside vCluster
diff --git a/docs/pages/o11y/logging/grafana_loki.mdx b/docs/pages/o11y/logging/grafana_loki.mdx
index 9f5df709c..aa35bd5cd 100644
--- a/docs/pages/o11y/logging/grafana_loki.mdx
+++ b/docs/pages/o11y/logging/grafana_loki.mdx
@@ -27,5 +27,5 @@ helm upgrade --install loki --namespace=monitoring grafana/loki-stack --create-n
1. Enter the loki endpoint in the `URL` field as `http://loki.monitoring:3100` or to the corresponding `.:` value according to your deployment, and click on "Save & test".
-1. Next click on "Explore" or navigate to [http://localhost:3000/explore](http://localhost:3000/explore) and select "Loki" from the dropdown menu. Select the desired Labels and Click on "Run query". Youre logs should now start appearing.
+1. Next, click on "Explore" or navigate to [http://localhost:3000/explore](http://localhost:3000/explore) and select "Loki" from the dropdown menu. Select the desired Labels and click on "Run query". Your logs should now start appearing.
\ No newline at end of file
diff --git a/docs/pages/o11y/logging/hpm.mdx b/docs/pages/o11y/logging/hpm.mdx
index a32638195..7ff7fc485 100644
--- a/docs/pages/o11y/logging/hpm.mdx
+++ b/docs/pages/o11y/logging/hpm.mdx
@@ -3,7 +3,7 @@ title: Enabling the HostPath Mapper
sidebar_label: HostPath Mapper
---
-Vcluster internal logging relies on separate component called the [Hostpath Mapper](https://github.com/loft-sh/vcluster-hostpath-mapper). This will make sure to resolve the correct virtual pod and container names to their physical counterparts.
+Vcluster internal logging relies on a separate component called the [Hostpath Mapper](https://github.com/loft-sh/vcluster-hostpath-mapper). This will make sure to resolve the correct virtual pod and container names to their physical counterparts.
To deploy this component, it's basically a two-step process.
### Update the vCluster
You would want to create the vCluster with the following `values.yaml`:
diff --git a/docs/pages/o11y/metrics/metrics_server.mdx b/docs/pages/o11y/metrics/metrics_server.mdx
index 7722baeff..5a40eb8f1 100644
--- a/docs/pages/o11y/metrics/metrics_server.mdx
+++ b/docs/pages/o11y/metrics/metrics_server.mdx
@@ -5,7 +5,7 @@ sidebar_label: Metrics Server in vCluster
### Installing metrics server (inside vCluster)
-In case the above recommended method of getting metrics in vCluster using the metrics server proxy does not fulfil your requirements and you need a dedicated metrics server installation in the vCluster you can follow this section.
+In case the above-recommended method of getting metrics in vCluster using the metrics server proxy does not fulfill your requirements and you need a dedicated metrics server installation in the vCluster, you can follow this section.
Make sure the vCluster has access to the host cluster's nodes. [Enabling real nodes synchronization](../../architecture/nodes.mdx) will create the required RBAC permissions.
Install the [metrics server](https://github.com/kubernetes-sigs/metrics-server#installation) via the official method into the vCluster.
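+For example, the upstream manifest can be applied while your kube context points at the vCluster; this is the standard command from the metrics-server project, so verify it against the linked installation guide:
+```
+# run against the vCluster context, e.g. after `vcluster connect my-vcluster`
+kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
+```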
@@ -18,7 +18,7 @@ kube-system coredns-854c77959c-q5878 3m 17Mi
kube-system metrics-server-5fbdc54f8c-fgrqk 0m 6Mi
```
-If you see below error after installing metrics-server (check [k3s#5334](https://github.com/k3s-io/k3s/issues/5344) for more information):
+If you see the below error after installing metrics-server (check [k3s#5334](https://github.com/k3s-io/k3s/issues/5344) for more information):
```
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503
@@ -43,4 +43,4 @@ kubectl patch deployment metrics-server --patch-file metrics_patch.yaml -n kube-
### How does it work?
-By default, vCluster will create a service for each node which redirects incoming traffic from within the vCluster to the node kubelet to vCluster itself. This means that if workloads within the vCluster try to scrape node metrics the traffic reaches vCluster first. Vcluster will redirect the incoming request to the host cluster and rewrite the response (pod names, pod namespaces etc) and return it to the requester.
+By default, vCluster will create a service for each node that redirects traffic sent from within the vCluster to the node's kubelet back to vCluster itself. This means that if workloads within the vCluster try to scrape node metrics, the traffic reaches vCluster first. vCluster will redirect the incoming request to the host cluster, rewrite the response (pod names, pod namespaces, etc.), and return it to the requester.
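+A quick way to see this proxying in action (a sketch, assuming the metrics server proxy is enabled and your kube context points at the vCluster) is to query node and pod metrics from inside the virtual cluster and observe that the returned names are the virtual ones:
+```
+# from a kube context pointing at the vCluster
+kubectl top nodes
+kubectl top pods --all-namespaces
+```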