diff --git a/docs/pages/load-testing/results.mdx b/docs/pages/load-testing/results.mdx
new file mode 100644
index 0000000000..51da5e4912
--- /dev/null
+++ b/docs/pages/load-testing/results.mdx
@@ -0,0 +1,88 @@
+---
+title: Load Tests
+sidebar_label: Load Tests
+---
+
+## Summary
+This document presents performance test results for the Kubernetes API across several vCluster distributions (K3s, k0s, K8s) and configurations.
+This section is a TL;DR of the test results; the detailed results follow below. During our tests, K3s with SQLite lagged behind the other distributions under high-intensity loads. For less intensive usage and a simpler deployment, however, it was only marginally slower than the others while staying well within the usable range.
+If you expect high API usage in your vClusters, we recommend an etcd-backed distribution, as you will most likely experience timeouts or throttling with the SQLite-backed one. For less intense usage, K3s with SQLite is as adequate as the others.
+
+
+## API Response Times
+
+
+
+During our baseline testing (300 secrets, 30 qps), K3s with SQLite was significantly slower than the other distributions, averaging 0.17s while the others all averaged around 0.05s. This should not have a noticeable impact in practice, since 0.17s is still a reasonably good average.
+
+
+
+
+In our more intensive testing (5,000 secrets, 200 qps), the differences between the distributions were more pronounced: K3s with SQLite trailed behind with a 1.4s average response time, while etcd-backed K3s (the vCluster.Pro distro) averaged around 0.35s for both single-node and HA setups. k0s and K8s were the fastest in these tests, averaging around 0.15s. Below is also the cumulative distribution of request times.
+
+
+
+
+## CPU usage
+
+During our testing, most distributions had similar CPU usage, with the exception of K3s with SQLite, which used noticeably more CPU, most likely because it has to translate etcd requests into SQLite queries.
+
+
+
+
+
+
+
+## Memory usage
+
+Memory usage was relatively similar across all setups.
+
+
+
+
+
+
+
+## Filesystem use
+
+Filesystem usage was higher with the K3s SQLite version than with all etcd-backed versions in the intensive setup. In the baseline setup there was little to no filesystem usage.
+
+
+
+
+## Pod latency
+
+kube-burner computes statistics on pods; however, it relies on pod status timestamps, which only have a precision of one second. At this level of precision, all distributions had similar p50, p99, average, and max values for containerReady, Initialized, podScheduled, and Ready.
diff --git a/docs/pages/load-testing/setup.mdx b/docs/pages/load-testing/setup.mdx
new file mode 100644
index 0000000000..1b9dcaeba9
--- /dev/null
+++ b/docs/pages/load-testing/setup.mdx
@@ -0,0 +1,12 @@
+---
+title: Test setup
+sidebar_label: Setup
+---
+
+Our testing was done with kube-burner, using an EKS cluster as the host cluster, in the eu-west-3 region. All the configuration files are located [here](https://github.com/loft-sh/vcluster/load-test). You will need to change the default storage class from gp2 to gp3.
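+
+Switching the default storage class can be sketched as follows, assuming the `gp3.yaml` StorageClass manifest from the `load-test` directory (which carries the default-class annotation) and a stock EKS install where gp2 is the default:
+
+```shell
+# Demote the pre-installed gp2 StorageClass from being the default
+kubectl patch storageclass gp2 -p \
+  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
+
+# Create the gp3 StorageClass, annotated as default in the manifest
+kubectl apply -f load-test/gp3.yaml
+
+# Verify which class is now marked "(default)"
+kubectl get storageclass
+```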
+
+To monitor the metrics, install the [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) operator and give it permission to list pods, services, endpoints, and ServiceMonitors by modifying the `prometheus-k8s` ClusterRole for the namespace you will deploy your vClusters in (or for all namespaces for a faster edit).
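+
+One way to grant these permissions is a JSON patch appending rules to the `prometheus-k8s` ClusterRole. This is a sketch assuming a default kube-prometheus install; the exact rules already present may differ between versions:
+
+```shell
+kubectl patch clusterrole prometheus-k8s --type='json' -p='[
+  {"op": "add", "path": "/rules/-", "value": {
+    "apiGroups": [""],
+    "resources": ["pods", "services", "endpoints"],
+    "verbs": ["get", "list", "watch"]
+  }},
+  {"op": "add", "path": "/rules/-", "value": {
+    "apiGroups": ["monitoring.coreos.com"],
+    "resources": ["servicemonitors"],
+    "verbs": ["get", "list", "watch"]
+  }}
+]'
+```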
+
+The vCluster API servers should be exposed (using the `--expose` vCluster option). You can either create the ServiceMonitor manually or use the Helm values to have vCluster create it for you. Make sure that Prometheus has scraped your vCluster API at least once before running kube-burner; otherwise some metrics will have missing data.
+
+To run the tests, run `kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090` to forward the host cluster's Prometheus to your local machine, then `vcluster create --expose -f yourConfig yourCluster` to start your vCluster. Once everything is ready and Prometheus has detected your API servers, you can run `kube-burner init --metrics metrics.yaml -c config.yaml -u http://localhost:9090`.
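+
+Put together, a full run looks roughly like this. The values file, vCluster name, and namespace below are examples taken from the `load-test` directory; adjust them to your setup:
+
+```shell
+# Forward the host cluster's Prometheus to localhost:9090 (keep it running)
+kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090 &
+
+# Start a vCluster with an exposed API server, e.g. with the K3s values
+vcluster create --expose -f load-test/vcluster-k3s.yml test -n vcluster-test
+
+# Once Prometheus has scraped the vCluster API at least once, run the test
+kube-burner init --metrics metrics.yaml -c config.yaml -u http://localhost:9090
+```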
diff --git a/docs/static/media/apiserver-latency-baseline.svg b/docs/static/media/apiserver-latency-baseline.svg
new file mode 100644
index 0000000000..b90ce2589c
--- /dev/null
+++ b/docs/static/media/apiserver-latency-baseline.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/static/media/apiserver-latency-intensive.svg b/docs/static/media/apiserver-latency-intensive.svg
new file mode 100644
index 0000000000..1199e10ace
--- /dev/null
+++ b/docs/static/media/apiserver-latency-intensive.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/static/media/baseline.svg b/docs/static/media/baseline.svg
new file mode 100644
index 0000000000..c7cfb993f0
--- /dev/null
+++ b/docs/static/media/baseline.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/static/media/cpu-intensive-ha.svg b/docs/static/media/cpu-intensive-ha.svg
new file mode 100644
index 0000000000..d57c20b77f
--- /dev/null
+++ b/docs/static/media/cpu-intensive-ha.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/static/media/cpu-sn-baseline.svg b/docs/static/media/cpu-sn-baseline.svg
new file mode 100644
index 0000000000..b383133519
--- /dev/null
+++ b/docs/static/media/cpu-sn-baseline.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/static/media/cpu-sn-intensive.svg b/docs/static/media/cpu-sn-intensive.svg
new file mode 100644
index 0000000000..53f0ff6e8d
--- /dev/null
+++ b/docs/static/media/cpu-sn-intensive.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/static/media/cumu-distribution-apiserver.svg b/docs/static/media/cumu-distribution-apiserver.svg
new file mode 100644
index 0000000000..ac18133a4a
--- /dev/null
+++ b/docs/static/media/cumu-distribution-apiserver.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/static/media/fs-read-intensive.svg b/docs/static/media/fs-read-intensive.svg
new file mode 100644
index 0000000000..440edd7b5a
--- /dev/null
+++ b/docs/static/media/fs-read-intensive.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/static/media/fs-write-intensive.svg b/docs/static/media/fs-write-intensive.svg
new file mode 100644
index 0000000000..531f7ff829
--- /dev/null
+++ b/docs/static/media/fs-write-intensive.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/static/media/intensive.svg b/docs/static/media/intensive.svg
new file mode 100644
index 0000000000..bbf7a538c0
--- /dev/null
+++ b/docs/static/media/intensive.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/static/media/mem-usage-baseline.svg b/docs/static/media/mem-usage-baseline.svg
new file mode 100644
index 0000000000..8c82fdeb05
--- /dev/null
+++ b/docs/static/media/mem-usage-baseline.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/static/media/mem-usage-intensive.svg b/docs/static/media/mem-usage-intensive.svg
new file mode 100644
index 0000000000..55c640ec6c
--- /dev/null
+++ b/docs/static/media/mem-usage-intensive.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/static/media/mum-usage-ha.svg b/docs/static/media/mum-usage-ha.svg
new file mode 100644
index 0000000000..c3682db57e
--- /dev/null
+++ b/docs/static/media/mum-usage-ha.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/static/media/network-in-intensive.svg b/docs/static/media/network-in-intensive.svg
new file mode 100644
index 0000000000..3b557877f5
--- /dev/null
+++ b/docs/static/media/network-in-intensive.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/static/media/network-out-intensive.svg b/docs/static/media/network-out-intensive.svg
new file mode 100644
index 0000000000..8ab6ce8509
--- /dev/null
+++ b/docs/static/media/network-out-intensive.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/load-test/clusterconfig.yaml b/load-test/clusterconfig.yaml
new file mode 100644
index 0000000000..711bd881b9
--- /dev/null
+++ b/load-test/clusterconfig.yaml
@@ -0,0 +1,23 @@
+apiVersion: eksctl.io/v1alpha5
+kind: ClusterConfig
+metadata:
+ name: simple-cluster
+ region: eu-west-3
+
+nodeGroups:
+ - name: ng-1
+ instanceType: m5.large
+ desiredCapacity: 6
+ iam:
+ withAddonPolicies:
+ ebs: true
+iam:
+ withOIDC: true
+
+addons:
+- name: vpc-cni
+ attachPolicyARNs:
+ - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
+- name: aws-ebs-csi-driver
+ wellKnownPolicies: # add IAM and service account
+ ebsCSIController: true
diff --git a/load-test/gp3.yaml b/load-test/gp3.yaml
new file mode 100644
index 0000000000..f21b9af808
--- /dev/null
+++ b/load-test/gp3.yaml
@@ -0,0 +1,13 @@
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ annotations:
+ storageclass.kubernetes.io/is-default-class: "true"
+ creationTimestamp: "2023-11-15T09:35:19Z"
+ name: gp3
+parameters:
+ fsType: ext4
+ type: gp3
+provisioner: kubernetes.io/aws-ebs
+reclaimPolicy: Delete
+volumeBindingMode: WaitForFirstConsumer
diff --git a/load-test/ha-k8s.yaml b/load-test/ha-k8s.yaml
new file mode 100644
index 0000000000..a304cef44f
--- /dev/null
+++ b/load-test/ha-k8s.yaml
@@ -0,0 +1,23 @@
+
+# Enable HA mode
+enableHA: true
+
+# Scale up syncer replicas
+syncer:
+ replicas: 3
+
+# Scale up etcd
+etcd:
+ replicas: 3
+
+# Scale up controller manager
+controller:
+ replicas: 3
+
+# Scale up api server
+api:
+ replicas: 3
+
+# Scale up DNS server
+coredns:
+ replicas: 3
diff --git a/load-test/iam_policy.json b/load-test/iam_policy.json
new file mode 100644
index 0000000000..7944f2a128
--- /dev/null
+++ b/load-test/iam_policy.json
@@ -0,0 +1,241 @@
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Action": [
+ "iam:CreateServiceLinkedRole"
+ ],
+ "Resource": "*",
+ "Condition": {
+ "StringEquals": {
+ "iam:AWSServiceName": "elasticloadbalancing.amazonaws.com"
+ }
+ }
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "ec2:DescribeAccountAttributes",
+ "ec2:DescribeAddresses",
+ "ec2:DescribeAvailabilityZones",
+ "ec2:DescribeInternetGateways",
+ "ec2:DescribeVpcs",
+ "ec2:DescribeVpcPeeringConnections",
+ "ec2:DescribeSubnets",
+ "ec2:DescribeSecurityGroups",
+ "ec2:DescribeInstances",
+ "ec2:DescribeNetworkInterfaces",
+ "ec2:DescribeTags",
+ "ec2:GetCoipPoolUsage",
+ "ec2:DescribeCoipPools",
+ "elasticloadbalancing:DescribeLoadBalancers",
+ "elasticloadbalancing:DescribeLoadBalancerAttributes",
+ "elasticloadbalancing:DescribeListeners",
+ "elasticloadbalancing:DescribeListenerCertificates",
+ "elasticloadbalancing:DescribeSSLPolicies",
+ "elasticloadbalancing:DescribeRules",
+ "elasticloadbalancing:DescribeTargetGroups",
+ "elasticloadbalancing:DescribeTargetGroupAttributes",
+ "elasticloadbalancing:DescribeTargetHealth",
+ "elasticloadbalancing:DescribeTags"
+ ],
+ "Resource": "*"
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "cognito-idp:DescribeUserPoolClient",
+ "acm:ListCertificates",
+ "acm:DescribeCertificate",
+ "iam:ListServerCertificates",
+ "iam:GetServerCertificate",
+ "waf-regional:GetWebACL",
+ "waf-regional:GetWebACLForResource",
+ "waf-regional:AssociateWebACL",
+ "waf-regional:DisassociateWebACL",
+ "wafv2:GetWebACL",
+ "wafv2:GetWebACLForResource",
+ "wafv2:AssociateWebACL",
+ "wafv2:DisassociateWebACL",
+ "shield:GetSubscriptionState",
+ "shield:DescribeProtection",
+ "shield:CreateProtection",
+ "shield:DeleteProtection"
+ ],
+ "Resource": "*"
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "ec2:AuthorizeSecurityGroupIngress",
+ "ec2:RevokeSecurityGroupIngress"
+ ],
+ "Resource": "*"
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "ec2:CreateSecurityGroup"
+ ],
+ "Resource": "*"
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "ec2:CreateTags"
+ ],
+ "Resource": "arn:aws:ec2:*:*:security-group/*",
+ "Condition": {
+ "StringEquals": {
+ "ec2:CreateAction": "CreateSecurityGroup"
+ },
+ "Null": {
+ "aws:RequestTag/elbv2.k8s.aws/cluster": "false"
+ }
+ }
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "ec2:CreateTags",
+ "ec2:DeleteTags"
+ ],
+ "Resource": "arn:aws:ec2:*:*:security-group/*",
+ "Condition": {
+ "Null": {
+ "aws:RequestTag/elbv2.k8s.aws/cluster": "true",
+ "aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
+ }
+ }
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "ec2:AuthorizeSecurityGroupIngress",
+ "ec2:RevokeSecurityGroupIngress",
+ "ec2:DeleteSecurityGroup"
+ ],
+ "Resource": "*",
+ "Condition": {
+ "Null": {
+ "aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
+ }
+ }
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "elasticloadbalancing:CreateLoadBalancer",
+ "elasticloadbalancing:CreateTargetGroup"
+ ],
+ "Resource": "*",
+ "Condition": {
+ "Null": {
+ "aws:RequestTag/elbv2.k8s.aws/cluster": "false"
+ }
+ }
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "elasticloadbalancing:CreateListener",
+ "elasticloadbalancing:DeleteListener",
+ "elasticloadbalancing:CreateRule",
+ "elasticloadbalancing:DeleteRule"
+ ],
+ "Resource": "*"
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "elasticloadbalancing:AddTags",
+ "elasticloadbalancing:RemoveTags"
+ ],
+ "Resource": [
+ "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*",
+ "arn:aws:elasticloadbalancing:*:*:loadbalancer/net/*/*",
+ "arn:aws:elasticloadbalancing:*:*:loadbalancer/app/*/*"
+ ],
+ "Condition": {
+ "Null": {
+ "aws:RequestTag/elbv2.k8s.aws/cluster": "true",
+ "aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
+ }
+ }
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "elasticloadbalancing:AddTags",
+ "elasticloadbalancing:RemoveTags"
+ ],
+ "Resource": [
+ "arn:aws:elasticloadbalancing:*:*:listener/net/*/*/*",
+ "arn:aws:elasticloadbalancing:*:*:listener/app/*/*/*",
+ "arn:aws:elasticloadbalancing:*:*:listener-rule/net/*/*/*",
+ "arn:aws:elasticloadbalancing:*:*:listener-rule/app/*/*/*"
+ ]
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "elasticloadbalancing:ModifyLoadBalancerAttributes",
+ "elasticloadbalancing:SetIpAddressType",
+ "elasticloadbalancing:SetSecurityGroups",
+ "elasticloadbalancing:SetSubnets",
+ "elasticloadbalancing:DeleteLoadBalancer",
+ "elasticloadbalancing:ModifyTargetGroup",
+ "elasticloadbalancing:ModifyTargetGroupAttributes",
+ "elasticloadbalancing:DeleteTargetGroup"
+ ],
+ "Resource": "*",
+ "Condition": {
+ "Null": {
+ "aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
+ }
+ }
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "elasticloadbalancing:AddTags"
+ ],
+ "Resource": [
+ "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*",
+ "arn:aws:elasticloadbalancing:*:*:loadbalancer/net/*/*",
+ "arn:aws:elasticloadbalancing:*:*:loadbalancer/app/*/*"
+ ],
+ "Condition": {
+ "StringEquals": {
+ "elasticloadbalancing:CreateAction": [
+ "CreateTargetGroup",
+ "CreateLoadBalancer"
+ ]
+ },
+ "Null": {
+ "aws:RequestTag/elbv2.k8s.aws/cluster": "false"
+ }
+ }
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "elasticloadbalancing:RegisterTargets",
+ "elasticloadbalancing:DeregisterTargets"
+ ],
+ "Resource": "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*"
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "elasticloadbalancing:SetWebAcl",
+ "elasticloadbalancing:ModifyListener",
+ "elasticloadbalancing:AddListenerCertificates",
+ "elasticloadbalancing:RemoveListenerCertificates",
+ "elasticloadbalancing:ModifyRule"
+ ],
+ "Resource": "*"
+ }
+ ]
+}
diff --git a/load-test/kubeburner/api-intensive/config.yaml b/load-test/kubeburner/api-intensive/config.yaml
new file mode 100644
index 0000000000..f8947a6185
--- /dev/null
+++ b/load-test/kubeburner/api-intensive/config.yaml
@@ -0,0 +1,24 @@
+---
+global:
+ indexerConfig:
+ type: local
+ measurements:
+ - name: podLatency
+jobs:
+ - name: api-intensive
+ jobIterations: 10
+ qps: 200
+ burst: 300
+ namespacedIterations: true
+ namespace: api-intensive
+ podWait: false
+ cleanup: true
+ jobPause: 1m
+ waitWhenFinished: true
+ objects:
+ - objectTemplate: templates/secret.yaml
+ replicas: 500
+ churnPercent: 10
+ churnDuration: 5m
+ churnDelay: 15s
+ churn: true
diff --git a/load-test/kubeburner/api-intensive/metrics.yaml b/load-test/kubeburner/api-intensive/metrics.yaml
new file mode 100644
index 0000000000..a7edd4282c
--- /dev/null
+++ b/load-test/kubeburner/api-intensive/metrics.yaml
@@ -0,0 +1,6 @@
+- query: apiserver_request_duration_seconds_bucket{resource="secrets",le=~"0.05|0.1|0.2|0.4|0.8|0.6|1|2|3|4|5|10",verb="POST"}
+ metricName: apiserver_latency_by_bucket
+ instant: true
+
+- query: container_memory_max_usage_bytes{namespace="vcluster-test",container="syncer"}
+ metricName: syncer-memoryusage
diff --git a/load-test/kubeburner/baseline/config.yaml b/load-test/kubeburner/baseline/config.yaml
new file mode 100644
index 0000000000..9b15733055
--- /dev/null
+++ b/load-test/kubeburner/baseline/config.yaml
@@ -0,0 +1,24 @@
+---
+global:
+ indexerConfig:
+ type: local
+ measurements:
+ - name: podLatency
+jobs:
+ - name: api-intensive
+ jobIterations: 10
+ qps: 30
+ burst: 30
+ namespacedIterations: true
+ namespace: api-intensive
+ podWait: false
+ cleanup: true
+ jobPause: 2m
+ waitWhenFinished: true
+ objects:
+ - objectTemplate: templates/secret.yaml
+ replicas: 30
+ churnPercent: 10
+ churnDuration: 5m
+ churnDelay: 15s
+ churn: true
diff --git a/load-test/kubeburner/metrics.yaml b/load-test/kubeburner/metrics.yaml
new file mode 100644
index 0000000000..178304d478
--- /dev/null
+++ b/load-test/kubeburner/metrics.yaml
@@ -0,0 +1,21 @@
+- query: apiserver_request_duration_seconds_bucket{resource="secrets",namespace="vcluster-test",le=~"0.05|0.1|0.2|0.4|0.8|0.6|1|2|3|4|5|10",verb="POST"}
+ metricName: apiserver_latency_by_bucket
+ instant: true
+
+- query: sum(container_memory_max_usage_bytes{namespace="vcluster-test",container=~"syncer|vcluster|etcd|kube-controller-manager|kube-apiserver"})
+ metricName: syncer-memoryusage
+
+- query: sum(container_cpu_usage_seconds_total{namespace="vcluster-test",container=~"syncer|vcluster|etcd|kube-controller-manager|kube-apiserver"})
+ metricName: cpu-usage
+
+- query: container_network_receive_bytes_total{namespace="vcluster-test",pod=~"test-.+"}
+ metricName: network-in
+
+- query: container_network_transmit_bytes_total{namespace="vcluster-test",pod=~"test-.+"}
+ metricName: network-out
+
+- query: sum(container_fs_reads_bytes_total{namespace="vcluster-test", container=~"syncer|vcluster|etcd|kube-controller-manager|kube-apiserver"})
+ metricName: fs-read
+
+- query: sum(container_fs_writes_bytes_total{namespace="vcluster-test", container=~"syncer|vcluster|etcd|kube-controller-manager|kube-apiserver"})
+ metricName: fs-write
diff --git a/load-test/kubeburner/pod-baseline/config.yaml b/load-test/kubeburner/pod-baseline/config.yaml
new file mode 100644
index 0000000000..88e89dca04
--- /dev/null
+++ b/load-test/kubeburner/pod-baseline/config.yaml
@@ -0,0 +1,24 @@
+---
+global:
+ indexerConfig:
+ type: local
+ measurements:
+ - name: podLatency
+jobs:
+ - name: api-intensive
+ jobIterations: 5
+ qps: 50
+ burst: 50
+ namespacedIterations: true
+ namespace: pod-testing
+ podWait: true
+ cleanup: true
+ jobPause: 1m
+ waitWhenFinished: true
+ objects:
+ - objectTemplate: templates/deployment.yaml
+ replicas: 20
+ churnPercent: 20
+ churnDuration: 5m
+ churnDelay: 15s
+ churn: true
diff --git a/load-test/kubeburner/pod-baseline/metrics.yaml b/load-test/kubeburner/pod-baseline/metrics.yaml
new file mode 100644
index 0000000000..1c27367604
--- /dev/null
+++ b/load-test/kubeburner/pod-baseline/metrics.yaml
@@ -0,0 +1,3 @@
+- query: apiserver_request_duration_seconds_bucket{resource="secrets",le=~"0.05|0.1|0.2|0.4|0.8|0.6|1|2|3|4|5|10",verb="POST"}
+ metricName: apiserver_latency_by_bucket
+ instant: true
diff --git a/load-test/kubeburner/pod-intensive/metrics.yaml b/load-test/kubeburner/pod-intensive/metrics.yaml
new file mode 100644
index 0000000000..1c27367604
--- /dev/null
+++ b/load-test/kubeburner/pod-intensive/metrics.yaml
@@ -0,0 +1,3 @@
+- query: apiserver_request_duration_seconds_bucket{resource="secrets",le=~"0.05|0.1|0.2|0.4|0.8|0.6|1|2|3|4|5|10",verb="POST"}
+ metricName: apiserver_latency_by_bucket
+ instant: true
diff --git a/load-test/kubeburner/pod-intensive/pod-intensive.yml b/load-test/kubeburner/pod-intensive/pod-intensive.yml
new file mode 100644
index 0000000000..bf90c5e7db
--- /dev/null
+++ b/load-test/kubeburner/pod-intensive/pod-intensive.yml
@@ -0,0 +1,24 @@
+---
+global:
+ indexerConfig:
+ type: local
+ measurements:
+ - name: podLatency
+jobs:
+ - name: api-intensive
+ jobIterations: 10
+ qps: 100
+ burst: 100
+ namespacedIterations: true
+ namespace: pod-testing
+ podWait: true
+ cleanup: true
+ jobPause: 1m
+ waitWhenFinished: true
+ objects:
+ - objectTemplate: templates/deployment.yaml
+ replicas: 100
+ churnPercent: 10
+ churnDuration: 5m
+ churnDelay: 15s
+ churn: true
diff --git a/load-test/kubeburner/templates/configmap.yaml b/load-test/kubeburner/templates/configmap.yaml
new file mode 100644
index 0000000000..9692bb17bf
--- /dev/null
+++ b/load-test/kubeburner/templates/configmap.yaml
@@ -0,0 +1,10 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: configmap-{{.Replica}}
+data:
+ data.yaml: |-
+ a: 1
+ b: 2
+ c: 3
+
diff --git a/load-test/kubeburner/templates/deployment.yaml b/load-test/kubeburner/templates/deployment.yaml
new file mode 100644
index 0000000000..c536748936
--- /dev/null
+++ b/load-test/kubeburner/templates/deployment.yaml
@@ -0,0 +1,52 @@
+
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: api-intensive-{{.Replica}}
+ labels:
+ group: load
+ svc: api-intensive-{{.Replica}}
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ name: api-intensive-{{.Replica}}
+ template:
+ metadata:
+ labels:
+ group: load
+ name: api-intensive-{{.Replica}}
+ spec:
+ containers:
+ - image: registry.k8s.io/pause:3.1
+ name: api-intensive-{{.Replica}}
+ resources:
+ requests:
+ cpu: 10m
+ memory: 10M
+ volumeMounts:
+ - name: configmap
+ mountPath: /var/configmap
+ - name: secret
+ mountPath: /var/secret
+ dnsPolicy: Default
+ terminationGracePeriodSeconds: 1
+ # Add not-ready/unreachable tolerations for 15 minutes so that node
+ # failure doesn't trigger pod deletion.
+ tolerations:
+ - key: "node.kubernetes.io/not-ready"
+ operator: "Exists"
+ effect: "NoExecute"
+ tolerationSeconds: 900
+ - key: "node.kubernetes.io/unreachable"
+ operator: "Exists"
+ effect: "NoExecute"
+ tolerationSeconds: 900
+ volumes:
+ - name: configmap
+ configMap:
+ name: configmap-{{.Replica}}
+ - name: secret
+ secret:
+ secretName: secret-{{.Replica}}
+
diff --git a/load-test/kubeburner/templates/secret.yaml b/load-test/kubeburner/templates/secret.yaml
new file mode 100644
index 0000000000..18115492be
--- /dev/null
+++ b/load-test/kubeburner/templates/secret.yaml
@@ -0,0 +1,7 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: secret-{{.Replica}}
+type: Opaque
+data:
+ password: Zm9vb29vb29vb29vb29vbwo=
diff --git a/load-test/kubeburner/templates/service.yaml b/load-test/kubeburner/templates/service.yaml
new file mode 100644
index 0000000000..1160f626a8
--- /dev/null
+++ b/load-test/kubeburner/templates/service.yaml
@@ -0,0 +1,10 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: service-{{.Replica}}
+spec:
+ selector:
+ name: api-intensive-{{.Replica}}
+ ports:
+ - port: 80
+ targetPort: 80
diff --git a/load-test/service-monitor.yaml b/load-test/service-monitor.yaml
new file mode 100644
index 0000000000..365ec5b2b7
--- /dev/null
+++ b/load-test/service-monitor.yaml
@@ -0,0 +1,29 @@
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ name: vcluster
+ namespace: vcluster-test
+spec:
+ selector:
+ matchLabels:
+ app: vcluster
+ namespaceSelector:
+ any: true
+ endpoints:
+ - interval: 30s
+ port: https
+ path: /metrics
+ scheme: https
+ tlsConfig:
+ ca:
+ secret:
+ name: vc-test
+ key: certificate-authority
+ cert:
+ secret:
+ name: vc-test
+ key: client-certificate
+ keySecret:
+ name: vc-test
+ key: client-key
+ serverName: 127.0.0.1
diff --git a/load-test/vcluster-k0s.yml b/load-test/vcluster-k0s.yml
new file mode 100644
index 0000000000..735f5d0f8d
--- /dev/null
+++ b/load-test/vcluster-k0s.yml
@@ -0,0 +1,15 @@
+proxy:
+ metricsServer:
+ nodes:
+ enabled: true
+ pods:
+ enabled: true
+sync:
+ nodes:
+ enabled: true
+ # If nodes sync is enabled, and syncAllNodes = true, the virtual cluster
+ # will sync all nodes instead of only the ones where some pods are running.
+ syncAllNodes: true
+monitoring:
+ serviceMonitor:
+ enabled: true
diff --git a/load-test/vcluster-k3s.yml b/load-test/vcluster-k3s.yml
new file mode 100644
index 0000000000..b525f6ad57
--- /dev/null
+++ b/load-test/vcluster-k3s.yml
@@ -0,0 +1,18 @@
+proxy:
+ metricsServer:
+ nodes:
+ enabled: true
+ pods:
+ enabled: true
+sync:
+ nodes:
+ enabled: true
+ # If nodes sync is enabled, and syncAllNodes = true, the virtual cluster
+ # will sync all nodes instead of only the ones where some pods are running.
+ syncAllNodes: true
+syncer:
+ extraArgs:
+ - --mount-physical-host-paths=true
+monitoring:
+ serviceMonitor:
+ enabled: true
diff --git a/load-test/vcluster-k8s-1.yml b/load-test/vcluster-k8s-1.yml
new file mode 100644
index 0000000000..0f28e83d27
--- /dev/null
+++ b/load-test/vcluster-k8s-1.yml
@@ -0,0 +1,16 @@
+replicas: 1
+proxy:
+ metricsServer:
+ nodes:
+ enabled: true
+ pods:
+ enabled: true
+sync:
+ nodes:
+ enabled: true
+ # If nodes sync is enabled, and syncAllNodes = true, the virtual cluster
+ # will sync all nodes instead of only the ones where some pods are running.
+ syncAllNodes: true
+monitoring:
+ serviceMonitor:
+ enabled: true
diff --git a/load-test/vcluster-k8s-3.yml b/load-test/vcluster-k8s-3.yml
new file mode 100644
index 0000000000..0c83463fe1
--- /dev/null
+++ b/load-test/vcluster-k8s-3.yml
@@ -0,0 +1,38 @@
+proxy:
+ metricsServer:
+ nodes:
+ enabled: true
+ pods:
+ enabled: true
+monitoring:
+ serviceMonitor:
+ enabled: true
+sync:
+ nodes:
+ enabled: true
+ # If nodes sync is enabled, and syncAllNodes = true, the virtual cluster
+ # will sync all nodes instead of only the ones where some pods are running.
+ syncAllNodes: true
+
+# Enable HA mode
+enableHA: true
+
+# Scale up syncer replicas
+syncer:
+ replicas: 3
+
+# Scale up etcd
+etcd:
+ replicas: 3
+
+# Scale up controller manager
+controller:
+ replicas: 3
+
+# Scale up api server
+api:
+ replicas: 3
+
+# Scale up DNS server
+coredns:
+ replicas: 3
diff --git a/load-test/vcluster-pro-k3s-1.yml b/load-test/vcluster-pro-k3s-1.yml
new file mode 100644
index 0000000000..48cbc9fd20
--- /dev/null
+++ b/load-test/vcluster-pro-k3s-1.yml
@@ -0,0 +1,9 @@
+embeddedEtcd:
+ enabled: true
+replicas: 1
+pro: true
+syncer:
+ image: facchettos/pro-test:latest
+monitoring:
+ serviceMonitor:
+ enabled: true
diff --git a/load-test/vcluster-pro-k3s-3.yml b/load-test/vcluster-pro-k3s-3.yml
new file mode 100644
index 0000000000..c0402b7d25
--- /dev/null
+++ b/load-test/vcluster-pro-k3s-3.yml
@@ -0,0 +1,9 @@
+embeddedEtcd:
+ enabled: true
+replicas: 3
+pro: true
+syncer:
+ image: facchettos/pro-test:latest
+monitoring:
+ serviceMonitor:
+ enabled: true