TW and feedback edits
mesellings committed Oct 3, 2024
1 parent 64c190f commit 2a31ddb
Showing 4 changed files with 49 additions and 43 deletions.
@@ -139,18 +139,19 @@ First, calculate your requirements using the information provided above, taking
- Throughput: 20,000 process instances / day
- Disk space: 114 GB

Now you can select a hardware package that can cover these requirements. In this example this fits well into a cluster of size S.
Now you can select a hardware package that can cover these requirements. In this example this fits well into a cluster of size 2x.

### Camunda 8 SaaS

Camunda 8 defines three fixed hardware package sizes you can select from (1x, 2x, and 3x) when choosing your cluster [type](/components/concepts/clusters.md#cluster-type) and [size](/components/concepts/clusters.md#cluster-size). The following table gives you an indication of what requirements you can fulfill with each cluster size.
Camunda 8 defines three [cluster sizes](/components/concepts/clusters.md#cluster-size) you can select from (1x, 2x, and 3x) after you have chosen your [cluster type](/components/concepts/clusters.md#cluster-type). The following table gives you an indication of what requirements you can fulfill with each cluster size.

| Cluster size | 1x          | 2x          | 3x          |
| :---------------------------------------------------------------------------------- | :-------------------------------------------------- | :-------------------------------------------------- | :-------------------------------------------------- |
| Max Throughput **Tasks/day** **\*** | 4.3 M | 9.3 M | 13.8 M |
| Max Throughput **Tasks/second** **\*** | 50 | 108 | 160 |
| Max Throughput **Process Instances/day** **\*\*** | 3 M | 6 M | 9 M |
| Max Total Number of Process Instances stored (in Elasticsearch in total) **\*\*\*** | 75 k | 150 k | 225 k |
| Cluster size | 1x | 2x | 3x |
| :---------------------------------------------------------------------------------- | ---------------------------------: | ----------------------------------: | -------------------------------: |
| Max Throughput **Tasks/day** **\*** | 4.3 M | 9.3 M | 13.8 M |
| Max Throughput **Tasks/second** **\*** | 50 | 108 | 160 |
| Max Throughput **Process Instances/day** **\*\*** | 3 M | 6 M | 9 M |
| Max Total Number of Process Instances stored (in Elasticsearch in total) **\*\*\*** | 75 k | 150 k | 225 k |
| Approximate resources provisioned **\*\*\*\*** | 11 vCPU, 22 GB memory, 64 GB disk. | 22 vCPU, 44 GB memory, 128 GB disk. | 33 vCPU, 66 GB mem, 256 GB disk. |

The numbers in the table were measured using Camunda 8 (version 8.6), [the benchmark project](https://github.com/camunda-community-hub/camunda-8-benchmark) running on its own Kubernetes Cluster, and using a [realistic process](https://github.com/camunda/camunda/blob/main/zeebe/benchmarks/project/src/main/resources/bpmn/realistic/bankCustomerComplaintDisputeHandling.bpmn) containing a mix of BPMN symbols such as tasks, events and call activities including subprocesses. To calculate day-based metrics, an equal distribution over 24 hours is assumed.
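
As a quick illustration (not part of this docs change), the limits in the table can be expressed as a small lookup that picks the smallest cluster size covering a planned workload. The workload figures in the example call below are made up for demonstration purposes:

```python
# Illustrative sketch only: pick the smallest SaaS cluster size whose published
# limits (taken from the table above) cover a planned workload.

CLUSTER_LIMITS = {
    # size: (max tasks/day, max process instances/day, max stored process instances)
    "1x": (4_300_000, 3_000_000, 75_000),
    "2x": (9_300_000, 6_000_000, 150_000),
    "3x": (13_800_000, 9_000_000, 225_000),
}


def smallest_fitting_size(tasks_per_day, instances_per_day, stored_instances):
    """Return the smallest cluster size that covers all three requirements, or None."""
    for size, (max_tasks, max_instances, max_stored) in CLUSTER_LIMITS.items():
        if (tasks_per_day <= max_tasks
                and instances_per_day <= max_instances
                and stored_instances <= max_stored):
            return size
    return None  # beyond 3x: contact your Customer Success Manager


# Hypothetical workload: 6 M tasks/day, 500 k process instances/day,
# and roughly 120 k process instances kept in Elasticsearch.
print(smallest_fitting_size(6_000_000, 500_000, 120_000))  # -> "2x"
```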

@@ -162,6 +163,8 @@ The numbers in the table were measured using Camunda 8 (version 8.6), [the bench

Data retention affects how much data is kept for completed instances in your cluster. By default, data retention is set to 30 days, which means data older than 30 days is removed from Operate and Tasklist. A process instance that is still active continues to run normally, but historical data older than 30 days is no longer accessible from Operate and Tasklist. For Optimize, data retention is set to 6 months, meaning data older than 6 months is removed from Optimize. Within certain limits, data retention can be adjusted by Camunda on request. See [Camunda 8 SaaS data retention](/components/concepts/data-retention.md).

**\*\*\*\*** These are the resource limits configured in the Kubernetes cluster and are always subject to change.

:::note
Contact your Customer Success Manager if you require a custom cluster size above these requirements.
:::
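
To get a feel for how data retention and the stored-instance limits interact, the following back-of-the-envelope sketch (illustrative only, and a deliberate simplification that assumes completed instances accumulate linearly over the retention window) estimates how many completed instances a cluster retains:

```python
# Back-of-the-envelope sketch: how many completed process instances accumulate
# in Elasticsearch under a given retention window, and what daily completion
# rate a cluster size can sustain before its stored-instance limit is reached.
# This assumes linear accumulation and ignores archiving details.

STORED_INSTANCE_LIMITS = {"1x": 75_000, "2x": 150_000, "3x": 225_000}


def retained_instances(completed_per_day, retention_days=30):
    """Estimated completed instances kept in Elasticsearch at steady state."""
    return completed_per_day * retention_days


def max_daily_completions(size, retention_days=30):
    """Rough ceiling on daily completions before the stored-instance limit binds."""
    return STORED_INSTANCE_LIMITS[size] / retention_days


print(retained_instances(2_000))    # 2,000 completions/day kept for 30 days -> 60,000
print(max_daily_completions("2x"))  # 150,000 / 30 -> 5,000 per day
```
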
@@ -172,7 +175,7 @@ You might wonder why the total number of process instances stored is that low. T

Provisioning Camunda 8 onto your Self-Managed Kubernetes cluster might depend on various factors. For example, most customers already have their own teams providing Elasticsearch for them as a service.

However, the following example shows a possible configuration which is close to a cluster of size S in Camunda 8 SaaS, which can serve as a starting point for your own sizing.
However, the following example shows a possible configuration which is close to a cluster of size 1x in Camunda 8 SaaS, which can serve as a starting point for your own sizing.

:::note
Such a cluster can serve roughly 65 tasks per second as a peak load, and it can store up to 100,000 process instances in Elasticsearch (in-flight and history) before running out of disk space.
28 changes: 16 additions & 12 deletions docs/components/concepts/clusters.md
@@ -6,7 +6,7 @@ description: "Learn more about the clusters available in your Camunda 8 plan."

A [cluster](../../guides/create-cluster.md) is a provided group of production-ready nodes that run Camunda 8.

When [creating a cluster](/components/console/manage-clusters/create-cluster.md), you can customize the cluster **type** and **size** to meet your organization's availability and scalability needs, and to provide control over cluster performance, uptime, and disaster recovery guarantees.
When [creating a cluster in SaaS](/components/console/manage-clusters/create-cluster.md), you can choose the cluster **type** and **size** to meet your organization's availability and scalability needs, and to provide control over cluster performance, uptime, and disaster recovery guarantees.

:::note

@@ -23,15 +23,17 @@ The cluster type defines the level of availability and uptime for the cluster.

You can choose from three different cluster types:

- Use a **Basic** cluster for experimentation, early development, and basic use cases that do not require a guaranteed high uptime.
- Use a **Standard** cluster for production-ready use cases, with a guaranteed higher uptime.
- Use an **Advanced** cluster for production, with guaranteed minimal disruption and the highest uptime.
- **Basic**: A cluster for non-production use, including experimentation, early development, and basic use cases that do not require a guaranteed high uptime.
- **Standard**: A production-ready cluster with guaranteed higher uptime.
- **Advanced**: A production-ready cluster with guaranteed minimal disruption and the highest uptime.

| Type | Basic | Standard | Advanced |
| :---------------------------------------------------------------------------- | :------------------------------------------------------- | :------------------------------------------- | :----------------------------------------------------- |
| Usage | Experimentation, early development, and basic use cases. | Production-ready use cases with high uptime. | Production with minimal disruption and highest uptime. |
| Uptime Percentage<br/> (Core Automation Cluster<strong>\*</strong>) | 99% | 99.5% | 99.9% |
| RTO/RPO<strong>\*\*</strong><br/>(Core Automation Cluster<strong>\*</strong>) | RTO: 8 hours<br/>RPO: 24 hours | RTO: 2 hours<br/>RPO: 4 hours | RTO: < 1 hour<br/>RPO: < 1 hour |
### Cluster availability and uptime

| Type | Basic | Standard | Advanced |
| :---------------------------------------------------------------------------- | :------------------------------------------------------------------------------------- | :-------------------------------------------------------- | :------------------------------------------------------------------------------------ |
| Usage | Non-production use, including experimentation, early development, and basic use cases. | Production-ready use cases with guaranteed higher uptime. | Production-ready use cases with guaranteed minimal disruption and the highest uptime. |
| Uptime Percentage<br/> (Core Automation Cluster<strong>\*</strong>) | 99% | 99.5% | 99.9% |
| RTO/RPO<strong>\*\*</strong><br/>(Core Automation Cluster<strong>\*</strong>) | RTO: 8 hours<br/>RPO: 24 hours | RTO: 2 hours<br/>RPO: 4 hours | RTO: < 1 hour<br/>RPO: < 1 hour |

<p><strong>* Core Automation Cluster</strong> means the components critical for automating processes and decisions, such as Zeebe, Operate, Tasklist, Optimize and Connectors.</p>
<p><strong>** RTO (Recovery Time Objective)</strong> means the maximum allowable time that a system or application can be down after a failure or disaster before it must be restored. It defines the target time to get the system back up and running. <strong>RPO (Recovery Point Objective)</strong> means the maximum acceptable amount of data loss measured in time. It indicates the point in time to which data must be restored to resume normal operations after a failure. It defines how much data you can afford to lose. The RTO/RPO figures shown in the table are provided on a best-effort basis and are not guaranteed.</p>
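
For orientation, the uptime percentages above translate into approximate monthly downtime budgets. The following is an illustrative calculation only (assuming a 30-day month), not a contractual figure:

```python
# Illustrative calculation: approximate monthly downtime budget per cluster type
# for the uptime percentages listed in the table (assuming a 30-day month).

MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

for cluster_type, uptime_percent in [("Basic", 99.0), ("Standard", 99.5), ("Advanced", 99.9)]:
    downtime_minutes = MINUTES_PER_MONTH * (1 - uptime_percent / 100)
    print(f"{cluster_type}: {uptime_percent}% uptime ~ {downtime_minutes:.0f} minutes of downtime per month")

# Basic:    ~432 minutes (about 7.2 hours)
# Standard: ~216 minutes (about 3.6 hours)
# Advanced:  ~43 minutes
```
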
@@ -44,10 +46,12 @@ See [Camunda Enterprise General Terms](https://legal.camunda.com/licensing-and-o

The cluster size defines the cluster performance and capacity.

Choose the cluster size that best meets your cluster environment requirements. See [sizing your environment](/components/best-practices/architecture/sizing-your-environment.md#sizing-your-runtime-environment).
After you have chosen your cluster type, you can choose the cluster size that best meets your cluster environment requirements.

To learn more about choosing your cluster size, see [sizing your environment](/components/best-practices/architecture/sizing-your-environment.md#sizing-your-runtime-environment).

- You can choose from three cluster sizes: 1x, 2x, and 3x.
- Each increase in size boosts cluster performance and adds capacity. Larger cluster sizes allow you to serve more workload.
- Larger cluster sizes include increased performance and capacity, allowing you to serve more workload.
- Increased usage such as higher throughput or longer data retention requires a larger cluster size.
- Each size increase uses one of your available cluster reservations.

@@ -62,7 +66,7 @@ Contact your Customer Success Manager to:

## Free Trial clusters

Free Trial clusters have the same functionality as a production cluster, but are of a Basic type and 1x size, and only available during your trial period. You cannot convert a Free Trial cluster to a different kind of cluster.
Free Trial clusters have the same functionality as a production cluster, but are of a Basic type and 1x size, and only available during your trial period. You cannot convert a Free Trial cluster to a different type of cluster.

Once you sign up for a Free Trial, you are able to create one production cluster for the duration of your trial.

43 changes: 21 additions & 22 deletions docs/components/console/manage-clusters/create-cluster-include.md
@@ -1,29 +1,44 @@
---
---

To deploy and run your process, you must create a cluster in Camunda 8.
To deploy and run your process, you must create a [cluster](/components/concepts/clusters.md) in Camunda 8.

1. To create a cluster, navigate to **Console**, click the **Clusters** tab, and click **Create new cluster**.
1. Name your cluster. For the purpose of this guide, we recommend using the **Stable** channel and the latest generation.
1. Select your [region](/docs/reference/regions.md).
1. Select your [encryption at rest protection level](/docs/components/concepts/encryption-at-rest.md) (enterprise only).
1. Name your cluster.
1. Select a [cluster type](/components/concepts/clusters.md#cluster-type) and [cluster size](/components/concepts/clusters.md#cluster-size).
1. Assign a cluster tag to indicate what type of cluster it is.
1. Select your [region](/reference/regions.md).
1. Select your [encryption at rest protection level](/components/concepts/encryption-at-rest.md) (enterprise only).
1. Select a channel and release. For the purpose of this guide, we recommend using the **Stable** channel and the latest generation.
1. Click **Create cluster**.
1. Your cluster will take a few moments to create. Check the status on the **Clusters** page or by clicking into the cluster itself and looking at the **Applications** section.

:::note

- If you haven't created a cluster yet, the **Clusters** page will be empty.
- Even while the cluster shows a status **Creating**, you can still proceed to begin modeling.
- You can start modeling even if the cluster shows a **Creating** status.

:::

![cluster-creating-modal](./img/cluster-creating-modal.png)

1. After creating the cluster, you can view the new entry in the **Clusters** tab:

![cluster-creating](./img/cluster-overview-new-cluster-creating.png)

2. The cluster is now being set up. During this phase, its state is **Creating**. After one or two minutes, the cluster is ready for use and changes its state to **Healthy**:

![cluster-healthy](./img/cluster-overview-new-cluster-healthy.png)

3. After the cluster is created, click on the cluster name to visit the cluster detail page.
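
If you would rather script the wait than watch Console, a small polling loop along the following lines can report when the cluster becomes **Healthy**. The endpoint URL, authentication, and response field names below are placeholders and assumptions for illustration; check the Camunda Administration API documentation for the actual API:

```python
# Hypothetical sketch: poll a cluster's status until it reports "Healthy".
# The URL, token handling, and response fields are placeholders; consult the
# Camunda Administration API docs for the real endpoint and schema.
import time

import requests

CLUSTER_STATUS_URL = "https://api.example.com/clusters/<cluster-id>"  # placeholder URL
ACCESS_TOKEN = "<access-token>"  # obtained via your OAuth client credentials


def wait_until_healthy(timeout_seconds=300, poll_interval=15):
    """Poll the (assumed) cluster status endpoint until Healthy or timeout."""
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        response = requests.get(
            CLUSTER_STATUS_URL,
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            timeout=10,
        )
        response.raise_for_status()
        status = response.json().get("status", {}).get("ready", "")  # assumed field names
        print(f"Cluster status: {status or 'unknown'}")
        if status == "Healthy":
            return True
        time.sleep(poll_interval)
    return False


if __name__ == "__main__":
    wait_until_healthy()
```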

## Development clusters

Starter Plan users have one **development cluster**, with free execution for development included in their plan.
Deployment and execution of models (process instances, decision instances, and task users) are provided at no cost.
Additional clusters can be purchased through your [billing reservations](/components/console/manage-plan/update-billing-reservations.md).

Visit the [clusters page](/components/concepts/clusters.md) to learn more about the differences between **development clusters** and **production clusters**.
To learn more about the differences between **development clusters** and **production clusters**, see [clusters](/components/concepts/clusters.md).

- **Stable**: Provides the latest feature and patch releases ready for most users at a minimal risk. The releases follow semantic versioning and can be updated to the next minor or patch release without data loss.
- **Alpha**: Provides preview releases in preparation for the next stable release. They provide a short-term stability point to test new features and give feedback before they are released to the stable channel. Try these to ensure the upcoming release works with your infrastructure. These releases cannot be updated to a newer release, and therefore are not meant to be used in production.
@@ -36,19 +51,3 @@ Additionally, you can tag your cluster for `dev`, `test`, `stage`, or `prod`. As
Only organization owners or users with the **Admin** role in Console can deploy from Web Modeler to `prod` clusters.
Users without **Admin** roles can deploy only on `dev`, `test`, or `stage` clusters.
:::
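
As a small worked example of the rule above (illustrative only), the deploy permission can be expressed as a simple check:

```python
# Illustrative sketch of the deployment rule described above: only organization
# owners or users with the Console Admin role may deploy to `prod` clusters;
# everyone else may deploy only to `dev`, `test`, or `stage` clusters.

def can_deploy(cluster_tag, is_org_owner=False, is_console_admin=False):
    """Return True if a user with these roles may deploy to a cluster with this tag."""
    if cluster_tag == "prod":
        return is_org_owner or is_console_admin
    return cluster_tag in {"dev", "test", "stage"}


print(can_deploy("prod", is_console_admin=True))  # True
print(can_deploy("prod"))                         # False
print(can_deploy("test"))                         # True
```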

![cluster-creating-modal](./img/cluster-creating-modal.png)

1. After you've made your selection and created the cluster, view the new entry in the **Clusters** tab:

![cluster-creating](./img/cluster-overview-new-cluster-creating.png)

2. The cluster is now being set up. During this phase, its state is **Creating**. After one or two minutes, the cluster is ready for use and changes its state to **Healthy**:

![cluster-healthy](./img/cluster-overview-new-cluster-healthy.png)

3. After the cluster is created, click on the cluster name to visit the cluster detail page.

:::note
**Cluster auto-pause** is not yet available and only applies to non-Enterprise clusters. Development clusters will be paused if they go unused for two hours. When a cluster is paused, not all functionality is limited. For example, you may still execute BPMN timers and BPMN message catch events. To resume your cluster, review [how to resume a cluster](/components/console/manage-clusters/manage-cluster.md#resume-a-cluster).
:::