Merge pull request #2229 from loft-sh/thomaskosiewski/eng-4887-bug-enabling-volumesnapshots-sync-breaks-vcluster-v0210
Thomas Kosiewski authored Oct 16, 2024
2 parents 16d2a59 + 00f3708 commit 1bad19d
Showing 20 changed files with 101 additions and 71 deletions.
2 changes: 1 addition & 1 deletion .custom-gcl.yml
@@ -1,5 +1,5 @@
# This has to be >= v1.57.0 for module plugin system support.
-version: v1.59.1
+version: v1.61.0
plugins:
- module: "go.uber.org/nilaway"
import: "go.uber.org/nilaway/cmd/gclplugin"
2 changes: 1 addition & 1 deletion .github/workflows/compatibility.yaml
@@ -18,7 +18,7 @@ jobs:
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@v4
-- name: Set up Go 1.21
+- name: Set up Go
uses: actions/setup-go@v5
with:
go-version-file: "go.mod"
5 changes: 2 additions & 3 deletions .github/workflows/e2e.yaml
@@ -41,7 +41,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v5
with:
-go-version: "1.22"
+go-version-file: go.mod
- name: Setup Just
uses: extractions/setup-just@v2
- name: Setup Syft
@@ -75,7 +75,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v5
with:
-go-version: "1.22"
+go-version-file: go.mod
- name: Setup Just
uses: extractions/setup-just@v2
- name: Setup Syft
@@ -486,4 +486,3 @@ jobs:
echo "======================================================================================================================"
kubectl describe pods -n ${{ env.VCLUSTER_NAMESPACE }}
exit 1
2 changes: 1 addition & 1 deletion .github/workflows/lint.yaml
@@ -47,7 +47,7 @@ jobs:
fi
- name: Install golangci-lint
-run: go install github.com/golangci/golangci-lint/cmd/golangci-lint@v1.59.1
+run: go install github.com/golangci/golangci-lint/cmd/golangci-lint@v1.61.0

- name: Build custom golangci-lint
run: golangci-lint custom
4 changes: 2 additions & 2 deletions .github/workflows/release.yaml
@@ -24,7 +24,7 @@ jobs:
uses: actions/setup-go@v5
with:
cache: false
-go-version: "1.22"
+go-version-file: go.mod
- name: Setup Just
uses: extractions/setup-just@v2
- name: Setup Cosgin
@@ -59,7 +59,7 @@
- uses: "goreleaser/goreleaser-action@v6"
with:
args: release --clean --timeout 60m
-version: '~> v2'
+version: "~> v2"
env:
GITHUB_TOKEN: ${{ secrets.GH_ACCESS_TOKEN }}
TELEMETRY_PRIVATE_KEY: ${{ secrets.VCLUSTER_TELEMETRY_PRIVATE_KEY }}
8 changes: 4 additions & 4 deletions .github/workflows/unit-tests.yaml
@@ -36,12 +36,12 @@ jobs:
name: Execute all go tests
runs-on: ubuntu-22.04
steps:
-- name: Set up Go 1.21
+- name: Check out code into the Go module directory
+uses: actions/checkout@v4
+- name: Set up Go
uses: actions/setup-go@v5
with:
-go-version: "1.22"
+go-version-file: go.mod
cache: false
-- name: Check out code into the Go module directory
-uses: actions/checkout@v4
- name: Execute unit tests
run: ./hack/test.sh
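
Taken together, the workflow changes above replace pinned Go versions with `go-version-file`, so each job resolves its toolchain from `go.mod` (bumped to Go 1.23.2 later in this commit). A sketch of the resulting setup step, assembled from the fragments shown in these diffs:

```yaml
# Sketch only: the shared setup-go step after this change, assembled from the diffs above.
- name: Set up Go
  uses: actions/setup-go@v5
  with:
    go-version-file: go.mod
```
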
47 changes: 23 additions & 24 deletions CONTRIBUTING.md
@@ -32,17 +32,17 @@ There are a number of areas where contributions can be accepted:

We recommend developing vCluster directly on a local Kubernetes cluster as it provides faster feedback. There are two ways that we recommend developing.

-* DevSpace
-* Locally
+- DevSpace
+- Locally

## Pre-requisites for Development

### Tools

-* Docker needs to be installed (e.g. docker-desktop, orbstack, rancher desktop etc.)
-* [kubectl](https://kubernetes.io/docs/tasks/tools/)
-* [Helm v3.10.0+](https://helm.sh/docs/intro/install/)
-* Local Kubernetes v1.26+ cluster (i.e. Docker Desktop, [minikube](https://minikube.sigs.k8s.io/docs/start/), KinD or similar)
+- Docker needs to be installed (e.g. docker-desktop, orbstack, rancher desktop etc.)
+- [kubectl](https://kubernetes.io/docs/tasks/tools/)
+- [Helm v3.10.0+](https://helm.sh/docs/intro/install/)
+- Local Kubernetes v1.26+ cluster (i.e. Docker Desktop, [minikube](https://minikube.sigs.k8s.io/docs/start/), KinD or similar)

### Fork and Clone the vcluster repo

@@ -69,13 +69,13 @@ Follow the guide on how to install [DevSpace](https://github.com/loft-sh/devspac
Ensure your `kubectl` is connected to the local Kubernetes cluster.

```
-$ kubectl get namespaces
+kubectl get namespaces
```

In your Github `vcluster` directory, run:

```
-$ devspace dev
+devspace dev
```

Which uses the `devspace.yaml` file in the `vcluster` directory to deploy a vCluster and launch DevSpace:
@@ -155,7 +155,6 @@ vcluster-0:vcluster-dev$ go run -mod vendor cmd/vcluster/main.go start

Now, you can start to work with the virtual cluster based on the source code. This vCluster is running on your local Kubernetes cluster.


If you change a file locally, DevSpace will automatically sync the file into the Devspace container. After any changes, re-run the same command in the DevSpace terminal to apply the changes.

#### Start vcluster in DevSpace in debug mode via `dlv`
@@ -165,7 +164,7 @@ You can either debug with Delve within DevSpace or locally. Devspace is more con
Run vCluster in the debug mode with Delve in the `vcluster` directory. Note: Other sessions of DevSpace will need to be terminated before starting another

```
-$ devspace dev -n vcluster
+devspace dev -n vcluster
```

Once DevSpace launches and you are in the `vcluster` pod, run the following delve command.
@@ -190,15 +189,15 @@ Download the [vCluster CLI](https://www.vcluster.com/docs/get-started/) and use
By connecting to the vCluster using the CLI, you set your local KubeConfig to the virtual cluster

```
-$ vcluster connect vcluster
+vcluster connect vcluster
```

## Build and Test the vcluster CLI tool

Build the CLI tool

```
-$ go generate ./... && go build -o vcluster cmd/vclusterctl/main.go # build vcluster cli
+go generate ./... && go build -o vcluster cmd/vclusterctl/main.go # build vcluster cli
```

Test the built CLI tool
@@ -208,12 +207,13 @@ Test the built CLI tool
```

## Developing without DevSpace

### Pre-requisites

-* [Golang v1.22](https://go.dev/doc/install)
-* [Goreleaser](https://goreleaser.com/install/)
-* [Just](https://github.com/casey/just)
-* [Kind](https://kind.sigs.k8s.io/)
+- [Golang v1.22](https://go.dev/doc/install)
+- [Goreleaser](https://goreleaser.com/install/)
+- [Just](https://github.com/casey/just)
+- [Kind](https://kind.sigs.k8s.io/)

### Uninstall vCluster CLI

@@ -266,6 +266,7 @@ You can now use your cluster with:
kubectl cluster-info --context kind-kind
```

### Build vCluster Container Image

```
@@ -279,7 +280,7 @@ Note: Feel free to push this image into your own registry.
If using kind as your local Kubernetes cluster, you need to import the image into kind.

```
-$ kind load docker-image my-vcluster:0.0.1
+kind load docker-image my-vcluster:0.0.1
```

### Create vCluster with self-compiled vCluster CLI
@@ -305,15 +306,15 @@ controlPlane:
Launch your vCluster using your `vcluster.yaml`

```
-$ ./dist/<ARCH>/vcluster create my-vcluster -n my-vcluster -f ./vcluster.yaml --local-chart-dir chart
+./dist/<ARCH>/vcluster create my-vcluster -n my-vcluster -f ./vcluster.yaml --local-chart-dir chart
```
### Access your vCluster and Set your local KubeConfig
By connecting to the vCluster using the CLI, you set your local KubeConfig to the virtual cluster
```
-$ ./dist/<ARCH>/vcluster connect my-vcluster
+./dist/<ARCH>/vcluster connect my-vcluster
```
# Running vCluster Tests
@@ -325,16 +326,16 @@ All of the tests are located in the vcluster directory.
Run the entire unit test suite.
```
-$ ./hack/test.sh
+./hack/test.sh
```
## Running the e2e Test Suite
Run the e2e tests, that are located in the e2e folder.
```
-$ just delete-kind
-$ just e2e
+just delete-kind
+just e2e

```
@@ -344,12 +345,10 @@ If [Ginkgo](https://github.com/onsi/ginkgo#global-installation) is already insta
For running conformance tests, please take a look at [conformance tests](https://github.com/loft-sh/tree/vcluster/main/conformance/v1.21)
# License
This project is licensed under the Apache 2.0 License.
# Copyright notice
It is important to state that you retain copyright for your contributions, but agree to license them for usage by the project and author(s) under the Apache 2.0 license. Git retains history of authorship, but we use a catch-all statement rather than individual names.
2 changes: 1 addition & 1 deletion Dockerfile
@@ -2,7 +2,7 @@ ARG KINE_VERSION="v0.13.1"
FROM rancher/kine:${KINE_VERSION} as kine

# Build program
-FROM golang:1.22 as builder
+FROM golang:1.23 as builder

WORKDIR /vcluster-dev
ARG TARGETOS
20 changes: 13 additions & 7 deletions README.md
@@ -8,11 +8,9 @@

[![Join us on Slack!](docs/static/media/slack.svg)](https://slack.loft.sh/) [![Open in DevPod!](https://devpod.sh/assets/open-in-devpod.svg)](https://devpod.sh/open#https://github.com/loft-sh/vcluster)



Virtual clusters are fully functional Kubernetes clusters nested inside a physical host cluster providing better isolation and flexibility to support multi-tenancy. Multiple teams can operate independently within the same physical infrastructure while minimizing conflicts, maximizing autonomy, and reducing costs.

Virtual clusters run inside host cluster namespaces but function as separate Kubernetes clusters, with their own API server, control plane, syncer, and set of resources. While virtual clusters share the physical resources of the host cluster (such as CPU, memory, and storage), they manage their resources independently, allowing for efficient utilization and scaling.

Virtual clusters interact with the host cluster for resource scheduling and networking but maintain a level of abstraction to ensure operations within a virtual cluster don't directly affect the host cluster's global state.

@@ -25,41 +23,50 @@ Virtual clusters interact with the host cluster for resource scheduling and netw
<br>

## Benefits

Virtual clusters provide immense benefits for large-scale Kubernetes deployments and multi-tenancy.

<img src="docs/static/media//diagrams/vcluster-comparison.png" width="500">

### Robust security and isolation

- **Granular Permissions:** vCluster users operate with minimized permissions in the host cluster, significantly reducing the risk of privileged access misuse. Within their vCluster, users have admin-level control, enabling them to manage CRDs, RBAC, and other security policies independently.
- **Isolated Control Plane:** Each vCluster comes with its own dedicated API server and control plane, creating a strong isolation boundary.
- **Customizable Security Policies:** Tenants can implement additional vCluster-specific governance, including OPA policies, network policies, resource quotas, limit ranges, and admission control, in addition to the existing policies and security measures in the underlying physical host cluster.
- **Enhanced Data Protection:** With options for separate backing stores, including embedded SQLite, etcd, or external databases, virtual clusters allow for isolated data management, reducing the risk of data leakage between tenants.

### Access for tenants

- **Full Admin Access per Tenant:** Tenants can freely deploy CRDs, create namespaces, taint, and label nodes, and manage cluster-scoped resources typically restricted in standard Kubernetes namespaces.
- **Isolated yet Integrated Networking:** While ensuring automatic isolation (for example, pods in different virtual clusters cannot communicate by default), vCluster allows for configurable network policies and service sharing, supporting both separation and sharing as needed.
- **Node Management:** Assign static nodes to specific virtual clusters or share node pools among multiple virtual clusters, providing flexibility in resource allocation.

### Cost-effectiveness and reduced overhead

- **Lightweight Infrastructure:** Virtual clusters are significantly more lightweight than physical clusters, able to spin up in seconds, which contrasts sharply with the lengthy provisioning times often seen in environments like EKS (~45 minutes).
- **Resource Efficiency:** By sharing the underlying host cluster's resources, virtual clusters minimize the need for additional physical infrastructure, reducing costs and environmental impact.
- **Simplified Management:** The vCluster control plane, running inside a single pod, along with optional integrated CoreDNS, minimizes the operational overhead, making virtual clusters especially suitable for large-scale deployments and multi-tenancy scenarios.

### Enhanced flexibility and compatibility

- **Diverse Kubernetes Environments:** vCluster supports different Kubernetes versions and distributions (including K8s, K3s, and K0s), allowing version skews. This makes it possible to tailor each virtual cluster to specific requirements without impacting others.
- **Adaptable Backing Stores:** Choose from a range of data stores, from lightweight (SQLite) to enterprise-grade options (embedded etcd, external data stores like Global RDS), catering to various scalability and durability needs.
- **Runs Anywhere:** Virtual clusters can run on EKS, GKE, AKS, OpenShift, RKE, K3s, cloud, edge, and on-prem. As long as it's a K8s cluster, you can run a virtual cluster on top of it.

### Improved scalability

- **Reduced API Server Load:** Virtual clusters, each with their own dedicated API server, significantly reduce the operational load on the host cluster's Kubernetes API server by isolating and handling requests internally.
- **Conflict-Free CRD Management:** Independent management of CRDs within each virtual cluster eliminates the potential for CRD conflicts and version discrepancies, ensuring smoother operations and easier scaling as the user base expands.

## Common use cases

### Pre-production

- **Empower developers with self-service Kubernetes:** Simplify Kubernetes access for developers through self-service virtual clusters, reducing human error and enhancing developer autonomy without compromising security and compliance requirements.
- **Accelerate CI/CD with ephemeral Kubernetes clusters:** Instantly create clean, new virtual Kubernetes clusters for each pull request, enabling fast, isolated testing and PR previews without wait times and the struggles of a shared test environment.

### Production

- **Elevate your ISV offering with a dedicated cluster per customer:** Host each customer in a virtual cluster with strict tenant isolation and seamless scalability, while consolidating essential tools into a unified platform stack serving multiple tenants.
- **Build a managed Kubernetes service with best-in-class COGS and high margins:** Enable direct customer access to dedicated virtual Kubernetes clusters, streamlining node and resource allocation for industry-leading efficiency and unparalleled scalability.

@@ -72,11 +79,10 @@ Refer to our [quick start guide](https://www.vcluster.com/docs/vcluster/) to dep
Thank you for your interest in contributing! Please refer to
[CONTRIBUTING.md](https://github.com/loft-sh/vcluster/blob/main/CONTRIBUTING.md) for guidance.


## License

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

-http://www.apache.org/licenses/LICENSE-2.0
+<http://www.apache.org/licenses/LICENSE-2.0>

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
2 changes: 1 addition & 1 deletion go.mod
@@ -1,6 +1,6 @@
module github.com/loft-sh/vcluster

-go 1.22.4
+go 1.23.2

require (
github.com/blang/semver v3.5.1+incompatible
26 changes: 18 additions & 8 deletions pkg/config/validation.go
@@ -20,9 +20,7 @@ var allowedPodSecurityStandards = map[string]bool{
"restricted": true,
}

-var (
-verbs = []string{"get", "list", "create", "update", "patch", "watch", "delete", "deletecollection"}
-)
+var verbs = []string{"get", "list", "create", "update", "patch", "watch", "delete", "deletecollection"}

func ValidateConfigAndSetDefaults(config *VirtualClusterConfig) error {
// check the value of pod security standard
@@ -61,12 +59,24 @@ func ValidateConfigAndSetDefaults(config *VirtualClusterConfig) error {

// check if nodes controller needs to be enabled
if config.ControlPlane.Advanced.VirtualScheduler.Enabled && !config.Sync.FromHost.Nodes.Enabled {
return fmt.Errorf("sync.fromHost.nodes.enabled is false, but required if using virtual scheduler")
return errors.New("sync.fromHost.nodes.enabled is false, but required if using virtual scheduler")
}

// check if storage classes and host storage classes are enabled at the same time
if config.Sync.FromHost.StorageClasses.Enabled == "true" && config.Sync.ToHost.StorageClasses.Enabled {
return fmt.Errorf("you cannot enable both sync.fromHost.storageClasses.enabled and sync.toHost.storageClasses.enabled at the same time. Choose only one of them")
return errors.New("you cannot enable both sync.fromHost.storageClasses.enabled and sync.toHost.storageClasses.enabled at the same time. Choose only one of them")
}

+if config.Sync.FromHost.PriorityClasses.Enabled && config.Sync.ToHost.PriorityClasses.Enabled {
+return errors.New("cannot sync priorityclasses to and from host at the same time")
+}
+
+// volumesnapshots and volumesnapshotcontents are dependant on each other
+if config.Sync.ToHost.VolumeSnapshotContents.Enabled && !config.Sync.ToHost.VolumeSnapshots.Enabled {
+return errors.New("when syncing volume snapshots contents to the host, one must set sync.toHost.volumeSnapshots.enabled to true")
+}
+if config.Sync.ToHost.VolumeSnapshots.Enabled && !config.Sync.ToHost.VolumeSnapshotContents.Enabled {
+return errors.New("when syncing volume snapshots to the host, one must set sync.toHost.volumeSnapshotContents.enabled to true")
+}

// validate central admission control
@@ -122,13 +132,13 @@

func validateDistro(config *VirtualClusterConfig) error {
enabledDistros := 0
-if config.Config.ControlPlane.Distro.K3S.Enabled {
+if config.ControlPlane.Distro.K3S.Enabled {
enabledDistros++
}
-if config.Config.ControlPlane.Distro.K0S.Enabled {
+if config.ControlPlane.Distro.K0S.Enabled {
enabledDistros++
}
-if config.Config.ControlPlane.Distro.K8S.Enabled {
+if config.ControlPlane.Distro.K8S.Enabled {
enabledDistros++
}

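
The volume snapshot checks added above address the bug named in the PR branch (enabling volume snapshots sync breaking vCluster): syncing `volumeSnapshots` and `volumeSnapshotContents` to the host must now be enabled together. A minimal `vcluster.yaml` sketch that passes the new validation (field paths taken from the error messages in the diff; the surrounding layout is assumed):

```yaml
# Hypothetical vcluster.yaml excerpt enabling volume snapshot syncing.
# The new validation rejects configs that enable only one of these two syncers.
sync:
  toHost:
    volumeSnapshots:
      enabled: true
    volumeSnapshotContents:
      enabled: true
```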