fixing the docs
facchettos committed Nov 30, 2023
1 parent 5ea3f01 commit 094e41e
Showing 3 changed files with 21 additions and 12 deletions.
@@ -4,31 +4,31 @@ sidebar_label: Load Tests
---

## Summary
This document includes performance test results of the Kubernetes API using various vCluster and K8s distributions and configurations.
This is a TL;DR of the test results; the detailed results can be found below. During our tests, K3s with SQLite lagged behind the other distributions when running high-intensity loads. However, for less intensive usage and a simpler deployment, it was only marginally slower than the others while staying well within the usable range.
If you plan on having high API usage in your vClusters, we recommend using an etcd-backed distribution, as you will most likely experience timeouts or throttling with the SQLite-backed distribution. For less intense usage, K3s with SQLite will be just as adequate as the others.


## API Response Times

<figure>
<img src="/docs/media/diagrams/apiserver-latency-baseline.svg" alt="apiserver-avg-baseline" />
<img src="/docs/media/apiserver-latency-baseline.svg" alt="apiserver-avg-baseline" />
<figcaption>APIserver average response time (baseline)</figcaption>
</figure>

During our baseline testing (300 secrets, 30 qps), K3s with SQLite was significantly slower than the other distributions, averaging 0.17s while the others were all around 0.05s. This, however, should not have a noticeable impact, since 0.17s is still a relatively good average.
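
The baseline and intensive runs differ only in the number of secrets created and the request rate. Below is a minimal, hypothetical sketch of that kind of load generator using client-go; the kubeconfig path, namespace, secret names, and payload are assumptions for illustration, not the exact harness used for these tests.

```go
// Hypothetical load generator: create N secrets against the vCluster API
// at a fixed request rate and record how long each call takes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig pointing at the vCluster (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "vcluster.kubeconfig")
	if err != nil {
		panic(err)
	}
	// Raise client-side throttling so the server, not the client, is the limit.
	cfg.QPS = 200
	cfg.Burst = 400
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const total = 300        // 300 for the baseline run, 5000 for the intensive run
	rate := time.Second / 30 // 30 qps baseline, 200 qps intensive
	ticker := time.NewTicker(rate)
	defer ticker.Stop()

	for i := 0; i < total; i++ {
		<-ticker.C
		start := time.Now()
		_, err := client.CoreV1().Secrets("default").Create(context.TODO(), &corev1.Secret{
			ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("load-test-%d", i)},
			StringData: map[string]string{"payload": "x"},
		}, metav1.CreateOptions{})
		fmt.Printf("request %d took %v (err=%v)\n", i, time.Since(start), err)
	}
}
```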


<figure>
<img src="/docs/media/diagrams/apiserver-latency-intensive.svg" alt="apiserver-avg-intensive" />
<img src="/docs/media/apiserver-latency-intensive.svg" alt="apiserver-avg-intensive" />
<figcaption>APIserver average response time (intensive)</figcaption>
</figure>

For our more intensive testing (5000 secrets, 200 qps), the differences between the distributions were more pronounced: K3s with SQLite trailed behind with a 1.4s average response time, while etcd-backed K3s (vCluster.Pro distro) averaged around 0.35s for both single-node and HA setups. k0s and K8s were the fastest in these tests, averaging around 0.15s. Below is also the cumulative distribution of request times.


<figure>
<img src="/docs/media/diagrams/cumu-distribution-apiserver.svg" alt="apiserver-cumu-dist-intensive" />
<img src="/docs/media/cumu-distribution-apiserver.svg" alt="apiserver-cumu-dist-intensive" />
<figcaption>Cumulative distribution of request time during the intensive testing</figcaption>
</figure>
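
The cumulative distribution above shows, for each response-time threshold, the fraction of requests that completed within it. A minimal sketch of how such a curve can be derived from recorded latencies (the sample values are illustrative, not measured data):

```go
// Compute an empirical cumulative distribution from per-request latencies.
package main

import (
	"fmt"
	"sort"
	"time"
)

// cdf returns, for each threshold, the fraction of samples at or below it.
func cdf(samples []time.Duration, thresholds []time.Duration) []float64 {
	sorted := append([]time.Duration(nil), samples...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })

	out := make([]float64, len(thresholds))
	for k, t := range thresholds {
		// Index of the first sample strictly greater than t = count of samples <= t.
		idx := sort.Search(len(sorted), func(i int) bool { return sorted[i] > t })
		out[k] = float64(idx) / float64(len(sorted))
	}
	return out
}

func main() {
	samples := []time.Duration{90 * time.Millisecond, 120 * time.Millisecond, 1400 * time.Millisecond}
	thresholds := []time.Duration{100 * time.Millisecond, 500 * time.Millisecond, 2 * time.Second}
	for i, f := range cdf(samples, thresholds) {
		fmt.Printf("<= %v: %.0f%% of requests\n", thresholds[i], f*100)
	}
}
```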

@@ -37,17 +37,17 @@ For our more intensive testing (5000 secrets, 200qps), the differences between t
During our testing, most distributions had similar CPU usage, with the exception of K3s with SQLite, which had higher CPU usage, most likely due to having to convert etcd requests into SQLite ones.

<figure>
<img src="/docs/media/diagrams/cpu-sn-baseline.svg" alt="cpu usage (baseline)" />
<img src="/docs/media/cpu-sn-baseline.svg" alt="cpu usage (baseline)" />
<figcaption>CPU usage during the baseline test</figcaption>
</figure>

<figure>
<img src="/docs/media/diagrams/cpu-sn-intensive.svg" alt="cpu usage (intensive)" />
<img src="/docs/media/cpu-sn-intensive.svg" alt="cpu usage (intensive)" />
<figcaption>CPU usage during the intensive test</figcaption>
</figure>

<figure>
<img src="/docs/media/diagrams/cpu-intensive-ha.svg" alt="cpu usage (intensive) for ha setups" />
<img src="/docs/media/cpu-intensive-ha.svg" alt="cpu usage (intensive) for ha setups" />
<figcaption>CPU usage during the intensive test (ha setups)</figcaption>
</figure>

@@ -56,17 +56,17 @@ During our testing, most distributions had similar CPU usage, with the exception
Memory usage was relatively similar in all setups.

<figure>
<img src="/docs/media/diagrams/mem-usage-baseline.svg" alt="memory usage over time sn setup" />
<img src="/docs/media/mem-usage-baseline.svg" alt="memory usage over time sn setup" />
<figcaption>Memory usage during the baseline test</figcaption>
</figure>

<figure>
<img src="/docs/media/diagrams/mem-usage-intensive.svg" alt="memory usage over time sn setup" />
<img src="/docs/media/mem-usage-intensive.svg" alt="memory usage over time sn setup" />
<figcaption>Memory usage during the intensive test</figcaption>
</figure>

<figure>
<img src="/docs/media/diagrams/mem-usage-ha.svg" alt="memory usage over time sn setup" />
<img src="/docs/media/mem-usage-ha.svg" alt="memory usage over time sn setup" />
<figcaption>Memory usage during the intensive test with HA setups</figcaption>
</figure>

@@ -75,11 +75,11 @@ Memory usage was relatively similar in all setups
Filesystem usage was higher in the K3s SQLite version compared to all etcd-backed versions in the intensive setup. In the baseline setup, there was little to no filesystem usage.

<figure>
<img src="/docs/media/diagrams/fs-write-intensive.svg" alt="fs usage over time" />
<img src="/docs/media/fs-write-intensive.svg" alt="fs usage over time" />
<figcaption>Filesystem writes over time</figcaption>
</figure>
<figure>
<img src="/docs/media/diagrams/fs-read-intensive.svg" alt="memory usage over time sn setup" />
<img src="/docs/media/fs-read-intensive.svg" alt="memory usage over time sn setup" />
<figcaption>Filesystem reads over time</figcaption>
</figure>

File renamed without changes.
9 changes: 9 additions & 0 deletions docs/sidebars.js
@@ -239,6 +239,15 @@ module.exports = {
"advanced-topics/plugins-development",
],
},
{
type: "category",
label: "Load Testing",
collapsed: true,
items: [
"advanced-topics/load-testing/setup",
"advanced-topics/load-testing/results",
],
},
"advanced-topics/telemetry",
],
},
