diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/assets/core-dns-kube-dns.svg b/docs/self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/assets/core-dns-kube-dns.svg
new file mode 100644
index 00000000000..8f203888688
--- /dev/null
+++ b/docs/self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/assets/core-dns-kube-dns.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/dual-region.md b/docs/self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/dual-region.md
new file mode 100644
index 00000000000..64685c706e2
--- /dev/null
+++ b/docs/self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/dual-region.md
@@ -0,0 +1,605 @@
---
id: dual-region
title: "Dual-region setup"
description: "Deploy two Amazon Elastic Kubernetes Service (EKS) clusters with Terraform for a peered setup allowing dual-region communication."
---

import CoreDNSKubeDNS from "./assets/core-dns-kube-dns.svg"

:::warning
Review our [dual-region concept documentation](#) before continuing to understand the current limitations and restrictions of this setup, as well as the disclaimer concerning support from Camunda.
:::

This guide offers a detailed tutorial for deploying two Amazon Web Services (AWS) Elastic Kubernetes Service (EKS) clusters, tailored explicitly for deploying Camunda 8, using Terraform, a popular Infrastructure as Code (IaC) tool.

:::note
This guide requires you to have previously completed or reviewed the steps taken in [deploying an EKS cluster with Terraform](./terraform-setup.md). If you have no experience with Terraform and Amazon EKS, review that content for the essentials of setting up an Amazon EKS cluster and configuring AWS IAM permissions. It explains the process of using Terraform with AWS, making it accessible even to those new to Terraform or IaC concepts.
:::

## Prerequisites

- An [AWS account](https://docs.aws.amazon.com/accounts/latest/reference/accounts-welcome.html) to create resources within AWS.
- [Terraform (1.7.x)](https://developer.hashicorp.com/terraform/downloads)
- [Kubectl (1.28.x)](https://kubernetes.io/docs/tasks/tools/#kubectl) to interact with the cluster.

## Considerations

This setup provides an essential foundation for setting up Camunda 8 in a dual-region configuration. Though it's not tailored for optimal performance, it's a good initial step for preparing a production environment by incorporating [IaC tooling](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/infrastructure-as-code).

To try out Camunda 8 or develop against it, consider signing up for our [SaaS offering](https://camunda.com/platform/). If you already have two Amazon EKS clusters (peered together) and an S3 bucket, consider skipping to [deploy Camunda 8 to the clusters](#deploy-camunda-8-to-the-clusters).

For the simplicity of this guide, certain best practices are only covered by links to additional resources, enabling you to explore each topic in more detail.

:::warning
Following this guide will incur costs on your Cloud provider account, namely for the managed Kubernetes service, running Kubernetes nodes in EC2, Elastic Block Storage (EBS), traffic between regions, and S3. More information can be found on [AWS](https://aws.amazon.com/eks/pricing/) and their [pricing calculator](https://calculator.aws/#/) as the total cost varies per region.
:::

## Outcome

Completion of this tutorial will result in:

- Two Amazon EKS Kubernetes clusters in two different geographic regions, with four nodes each, ready for the Camunda 8 dual-region installation.
- The [EBS CSI driver](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html) installed and configured, which is used by the Camunda 8 Helm chart to create [persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/).
- A [VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html) between the two EKS clusters, allowing cross-cluster communication between different regions.
- An [Amazon Simple Storage Service](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) (S3) bucket for [Elasticsearch backups](https://www.elastic.co/guide/en/elasticsearch/reference/current/repository-s3.html).

## Environment prerequisites

There are two regions (`REGION_0` and `REGION_1`), each with its own Kubernetes cluster (`CLUSTER_0` and `CLUSTER_1`).

To streamline the execution of the subsequent commands, it is recommended to export multiple environment variables within your terminal. Additionally, it is recommended to persist those changes for future interactions with the dual-region setup.

1. Clone or fork the repository [c8-multi-region](https://github.com/camunda/c8-multi-region):

```bash
git clone https://github.com/camunda/c8-multi-region.git
```

2. The folder `aws/dual-region/scripts/` of the cloned repository provides a helper script [export_environment_preqrequisites.sh](https://github.com/camunda/c8-multi-region/blob/main/aws/dual-region/scripts/export_environment_preqrequisites.sh) to export various environment variables that ease the interaction with a dual-region setup. Consider permanently changing this file for future interactions.
3. You must adjust the environment variable values within the script to your needs.

:::warning

You have to choose unique namespaces for the Camunda 8 installations. The namespace for the Camunda 8 installation in the cluster of region 0 (`CAMUNDA_NAMESPACE_0`) needs to have a different name from the namespace for the Camunda 8 installation in the cluster of region 1 (`CAMUNDA_NAMESPACE_1`). This is required for proper traffic routing between the clusters.

For example, you can install Camunda 8 into the `CAMUNDA_NAMESPACE_0` namespace in the `CLUSTER_0` cluster, and the `CAMUNDA_NAMESPACE_1` namespace in the `CLUSTER_1` cluster, where `CAMUNDA_NAMESPACE_0` != `CAMUNDA_NAMESPACE_1`.
Using the same namespace name on both clusters won't work, as CoreDNS won't be able to distinguish between traffic targeted at the local and the remote cluster.

In addition to the namespaces for the Camunda installations, you need to create the namespaces for failover (`CAMUNDA_NAMESPACE_0_FAILOVER` in `CLUSTER_0` and `CAMUNDA_NAMESPACE_1_FAILOVER` in `CLUSTER_1`) for the case of a total region loss. This is included for completeness, so you don't forget to add the mapping during region recovery. The operational procedure is handled in a different [document](#).

:::

4. Execute the script via the following command:

```bash
. ./export_environment_preqrequisites.sh
```

The dot is required to export those variables to your shell and not to a spawned subshell.

```bash reference
https://github.com/camunda/c8-multi-region/blob/main/aws/dual-region/scripts/export_environment_preqrequisites.sh
```

## Installing Amazon EKS clusters with Terraform

### Prerequisites
1. From your cloned repository, navigate to `aws/dual-region/terraform`. This contains the Terraform base configuration for the dual-region setup.

### Contents elaboration

#### config.tf

This file contains the [backend](https://developer.hashicorp.com/terraform/language/settings/backends/configuration) and [provider](https://developer.hashicorp.com/terraform/language/providers/configuration) configuration. In other words, it defines where to store the [Terraform state](https://developer.hashicorp.com/terraform/language/state) and which providers to use, including their versions and potential credentials.

The important part of `config.tf` is the initialization of two AWS providers: you need one per region, since AWS scopes everything to a single region.

:::note

It's recommended to use a different backend than `local`. Find more information in the [Terraform documentation](https://developer.hashicorp.com/terraform/language/settings/backends/configuration).

:::

:::warning

Do not store sensitive information (credentials) in your Terraform files.

:::

#### clusters.tf

This file uses [Terraform modules](https://developer.hashicorp.com/terraform/language/modules), which allow abstracting resources into reusable components. It contains the declaration of the two clusters. One of them has an explicit provider declaration, as otherwise everything would be deployed via the default AWS provider, which is limited to a single region.

The [Camunda-provided module](https://github.com/camunda/camunda-tf-eks-module) is publicly available. It's advisable to review this module before using it.

There are various other input options to customize the cluster setup further. See the [module documentation](https://github.com/camunda/camunda-tf-eks-module) for additional details.

#### vpc-peering.tf

For a multi-region setup, you need the [virtual private clouds (VPCs)](https://aws.amazon.com/vpc/) peered so that traffic between regions is routed via private IPv4 addresses instead of being routed publicly and exposed. For more information, review the [AWS documentation on VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html).

VPC peering is preferred over [transit gateways](https://aws.amazon.com/transit-gateway/): VPC peering has no bandwidth limit and a lower latency than a transit gateway. For a complete comparison, review the [AWS documentation](https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/transit-vpc-solution.html#peering-vs).

The previously mentioned [Camunda module](https://github.com/camunda/camunda-tf-eks-module) will automatically create a VPC per cluster.

This file covers the VPC peering between the two VPCs and allows any traffic between them by adjusting each cluster's security groups.

#### s3.tf

For Elasticsearch, an S3 bucket is required to allow [creating and restoring snapshots](https://www.elastic.co/guide/en/elasticsearch/reference/current/repository-s3.html). There are [alternative ways](https://www.elastic.co/guide/en/elasticsearch/reference/current/snapshot-restore.html), but since this guide is focused on AWS, it makes sense to remain within the same cloud environment.

This file covers the declaration of the S3 bucket used for the backups, as well as a service account whose credentials are later used within the Kubernetes cluster to configure Elasticsearch's access to the S3 bucket.
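Once the Terraform configuration has been applied (see [Execution](#execution) below), you can sanity-check the resources these files declare. The following commands are only a sketch: the bucket name placeholder must be replaced with the name configured in your `variables.tf`, and the region variables are assumed to be exported as described in the [environment prerequisites](#environment-prerequisites).

```bash
# Run after `terraform apply`. Replace <elasticsearch-backup-bucket> with your bucket name;
# add --region if your default AWS CLI region differs from the bucket's region.
aws s3 ls "s3://<elasticsearch-backup-bucket>" && echo "Elasticsearch backup bucket is reachable"

# The VPC peering connection between the two cluster VPCs should report the status "active".
aws ec2 describe-vpc-peering-connections --region "$REGION_0" \
  --query 'VpcPeeringConnections[].{Id:VpcPeeringConnectionId,Status:Status.Code}' \
  --output table
```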
#### output.tf

[Terraform outputs](https://developer.hashicorp.com/terraform/language/values/outputs) allow you to reuse generated values in future steps, for example, the access keys of the service account with S3 access.

#### variables.tf

This file contains various variable definitions of both [local](https://developer.hashicorp.com/terraform/language/values/locals) and [input](https://developer.hashicorp.com/terraform/language/values/variables) types. The difference is that input variables require you to define their value on execution, while local variables are defined in the code itself and mainly serve to avoid code duplication and improve readability.

### Preparation

1. Adjust the values in [variables.tf](https://github.com/camunda/c8-multi-region/blob/main/aws/dual-region/terraform/variables.tf) to your liking, for example, the target regions, their names, or the CIDR blocks of each cluster.
2. Make sure that any adjustments are reflected in your [environment prerequisites](#environment-prerequisites) to ease the [in-cluster setup](#in-cluster-setup).
3. Set up the authentication for the `AWS` provider.

:::note

The [AWS Terraform provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) is required to create resources in AWS. You must configure the provider with the proper credentials before using it. You can further change the region and other preferences and explore different [authentication](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration) methods.

There are several ways to authenticate the `AWS` provider.

- (Recommended) Use the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) to configure access. Terraform will automatically default to AWS CLI configuration when present.
- Set environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, which can be retrieved from the [AWS Console](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).

:::

### Execution

:::warning

The user who creates resources in AWS owns these resources. In this particular case, the user will always have admin access to the Kubernetes cluster until the cluster is deleted.

Therefore, it can make sense to create an extra [AWS IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) whose credentials are used solely for Terraform purposes.

:::

1. Open a terminal and navigate to `aws/dual-region/terraform`.
2. Initialize the working directory:

```bash
terraform init -upgrade
```

3. Apply the configuration files:

```bash
terraform apply
```

If you have not set a default value for `cluster_name`, you will be asked to provide a suitable name.

4. After reviewing the plan, you can type `yes` to confirm and apply the changes.

At this point, Terraform will create the Amazon EKS clusters with all the necessary configurations. The completion of this process may require approximately 20-30 minutes.

## In-cluster setup

Now that you have created two peered Kubernetes clusters with Terraform, you still have to configure several things to make the dual-region setup work.

### Cluster access

To ease working with two clusters, create or update your local `kubeconfig` to contain those new contexts. Using an alias for the new clusters allows you to directly use `kubectl` and Helm with a particular cluster.
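Before updating your kubeconfig, you can optionally confirm that both clusters report an `ACTIVE` status. This is only a quick sanity check and assumes the environment variables from the [environment prerequisites](#environment-prerequisites) are still exported in your shell:

```bash
# Both commands should print ACTIVE once provisioning has completed.
aws eks describe-cluster --region "$REGION_0" --name "$CLUSTER_0" --query 'cluster.status' --output text
aws eks describe-cluster --region "$REGION_1" --name "$CLUSTER_1" --query 'cluster.status' --output text
```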
Update or create your kubeconfig via the [AWS CLI](https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html):

```bash
# the alias allows for easier context switching in kubectl
aws eks --region $REGION_0 update-kubeconfig --name $CLUSTER_0 --alias $CLUSTER_0
aws eks --region $REGION_1 update-kubeconfig --name $CLUSTER_1 --alias $CLUSTER_1
```

The region and name must align with the values you have defined in Terraform.

### DNS chaining

This allows for easier communication between the two clusters by forwarding DNS queries from the region 0 cluster to the region 1 cluster, and vice versa.

You are configuring CoreDNS in the cluster of **Region 0** to resolve certain namespaces via **Region 1** instead of using the in-cluster DNS server. This allows Camunda applications (for example, Zeebe brokers) to resolve DNS record names of Camunda applications running in the other cluster.

#### CoreDNS configuration

1. Expose `kube-dns`, the in-cluster DNS resolver, via an internal load balancer in each cluster:

```bash
kubectl --context $CLUSTER_0 apply -f https://raw.githubusercontent.com/camunda/c8-multi-region/main/aws/dual-region/kubernetes/internal-dns-lb.yml
kubectl --context $CLUSTER_1 apply -f https://raw.githubusercontent.com/camunda/c8-multi-region/main/aws/dual-region/kubernetes/internal-dns-lb.yml
```

2. Execute the script [generate_core_dns_entry.sh](https://github.com/camunda/c8-multi-region/blob/main/aws/dual-region/scripts/generate_core_dns_entry.sh) in the folder `aws/dual-region/scripts/` of the repository to help you generate the CoreDNS config. Make sure that you have previously exported the [environment prerequisites](#environment-prerequisites) since the script builds on top of them.

```bash
./generate_core_dns_entry.sh
```

3. The script will retrieve the IPs of the load balancers via the AWS CLI and return the required config change.
4. As the script suggests, copy the statement between the placeholders to edit the CoreDNS configmap in cluster 0 and cluster 1, depending on the placeholder.
+ Example output + + +:::danger +For illustration purposes. These values will not work in your environment! +::: + +```bash +./generate_core_dns_entry.sh +Please copy the following between +### Cluster 0 - Start ### and ### Cluster 0 - End ### +and insert it at the end of your CoreDNS configmap in Cluster 0 + +kubectl --context cluster-london -n kube-system edit configmap coredns + +### Cluster 0 - Start ### + camunda-paris.svc.cluster.local:53 { + errors + cache 30 + forward . 10.202.19.54 10.202.53.21 10.202.84.222 { + force_tcp + } + } + camunda-paris-failover.svc.cluster.local:53 { + errors + cache 30 + forward . 10.202.19.54 10.202.53.21 10.202.84.222 { + force_tcp + } + } +### Cluster 0 - End ### + +Please copy the following between +### Cluster 1 - Start ### and ### Cluster 1 - End ### +and insert it at the end of your CoreDNS configmap in Cluster 1 + +kubectl --context cluster-paris -n kube-system edit configmap coredns + +### Cluster 1 - Start ### + camunda-london.svc.cluster.local:53 { + errors + cache 30 + forward . 10.192.27.56 10.192.84.117 10.192.36.238 { + force_tcp + } + } + camunda-london-failover.svc.cluster.local:53 { + errors + cache 30 + forward . 10.192.27.56 10.192.84.117 10.192.36.238 { + force_tcp + } + } +### Cluster 1 - End ### +``` + + +
+ +
+ Full configmap example + + +:::danger + +For illustration purposes. This file will not work in your environment! + +::: + +```yaml title="coredns-cm-london.yml" +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + eks.amazonaws.com/component: coredns + k8s-app: kube-dns + name: coredns + namespace: kube-system +data: + Corefile: | + .:53 { + errors + health { + lameduck 5s + } + ready + kubernetes cluster.local in-addr.arpa ip6.arpa { + pods insecure + fallthrough in-addr.arpa ip6.arpa + } + prometheus :9153 + forward . /etc/resolv.conf + cache 30 + loop + reload + loadbalance + } + camunda-paris.svc.cluster.local:53 { + errors + cache 30 + forward . 10.202.19.54 10.202.53.21 10.202.84.222 { + force_tcp + } + } + camunda-paris-failover.svc.cluster.local:53 { + errors + cache 30 + forward . 10.202.19.54 10.202.53.21 10.202.84.222 { + force_tcp + } + } +``` + + +
5. Check that CoreDNS has reloaded the changes before continuing. Make sure the logs contain `Reloading complete`:

```bash
kubectl --context $CLUSTER_0 logs -f deployment/coredns -n kube-system
kubectl --context $CLUSTER_1 logs -f deployment/coredns -n kube-system
```

### Test DNS chaining

The script [test_dns_chaining.sh](https://github.com/camunda/c8-multi-region/blob/main/aws/dual-region/scripts/test_dns_chaining.sh) within the folder `aws/dual-region/scripts/` of the repository helps to test that the DNS chaining is working by using nginx pods and services that ping each other.

1. Execute [test_dns_chaining.sh](https://github.com/camunda/c8-multi-region/blob/main/aws/dual-region/scripts/test_dns_chaining.sh). Make sure that you have previously exported the [environment prerequisites](#environment-prerequisites) since the script builds on top of them.

```bash
./test_dns_chaining.sh
```

2. Watch how an nginx pod and service are deployed per cluster. The script waits until the pods are ready and then pings the nginx in cluster 1 from the nginx in cluster 0, and vice versa. If it fails to contact the other nginx after five attempts, the script fails.

## Deploy Camunda 8 to the clusters

### Create the secret for Elasticsearch

Elasticsearch needs an S3 bucket for its data backup and restore procedure, which is required during a regional failover. For this, you need to configure a Kubernetes secret so the access credentials are not exposed in cleartext.

You can pull the credentials from Terraform since you exposed them via `output.tf`.

1. From the Terraform code location `aws/dual-region/terraform`, execute the following to export the access keys to environment variables. This allows an easier creation of the Kubernetes secret via the command line:

```bash
export AWS_ACCESS_KEY_ES=$(terraform output -raw s3_aws_access_key)
export AWS_SECRET_ACCESS_KEY_ES=$(terraform output -raw s3_aws_secret_access_key)
```

2. From the folder `aws/dual-region/scripts` of the repository, execute the script [create_elasticsearch_secrets.sh](https://github.com/camunda/c8-multi-region/blob/main/aws/dual-region/scripts/create_elasticsearch_secrets.sh). It will use the exported environment variables from step 1 to create the required secret within the Camunda namespaces. Those have previously been defined and exported via the [environment prerequisites](#environment-prerequisites).

```bash
./create_elasticsearch_secrets.sh
```

3. Unset the environment variables to reduce the risk of potential exposure. The script runs in a subshell and can't modify your environment variables without extra workarounds.

```bash
unset AWS_ACCESS_KEY_ES
unset AWS_SECRET_ACCESS_KEY_ES
```

### Camunda 8 Helm chart prerequisites

Within the cloned repository, navigate to `aws/dual-region/kubernetes`. This folder contains a dual-region example setup.

#### Content elaboration

Our approach is to work with layered Helm values files:

- A base `camunda-values.yml` that is generally applicable to both Camunda installations
- Two overlays, one for the region 0 installation and one for the region 1 installation

##### camunda-values.yml

This forms the base layer that contains the basic required setup, which applies to both regions.

Key changes of the dual-region setup:

- `global.multiregion.regions: 2`
  - indicates the use of two regions
- `global.identity.auth.enabled: false`
  - Identity is currently not supported. Please see the [limitations section](#) on the dual-region concept page.
- `global.elasticsearch.disableExporter: true`
  - disables the automatic Elasticsearch exporter configuration of the Helm chart. We will manually supply the values via environment variables.
- `identity.enabled: false`
  - Identity is currently not supported.
- `optimize.enabled: false`
  - Optimize is currently not supported. It has a dependency on Identity.
- `zeebe.env`
  - `ZEEBE_BROKER_CLUSTER_INITIALCONTACTPOINTS`
    - These are the contact points for the brokers to know how to form the cluster. Find more information on what the variable means in [setting up a cluster](../../../../zeebe-deployment/operations/setting-up-a-cluster.md).
  - `ZEEBE_BROKER_EXPORTERS_ELASTICSEARCHREGION0_ARGS_URL`
    - The Elasticsearch endpoint for region 0.
  - `ZEEBE_BROKER_EXPORTERS_ELASTICSEARCHREGION1_ARGS_URL`
    - The Elasticsearch endpoint for region 1.
- A cluster of 8 Zeebe brokers (4 in each region) is recommended for the dual-region setup:
  - `zeebe.clusterSize: 8`
  - `zeebe.partitionCount: 8`
  - `zeebe.replicationFactor: 4`
- `elasticsearch.initScripts`
  - configures the S3 bucket access via a predefined Kubernetes secret

##### region0/camunda-values.yml

This overlay contains the multi-region identification for the cluster in region 0.

##### region1/camunda-values.yml

This overlay contains the multi-region identification for the cluster in region 1.

#### Preparation

:::warning
You must change the following environment variables for Zeebe. The default values will not work for you and are just for illustration.
:::

The base `camunda-values.yml` in `aws/dual-region/kubernetes` requires adjustments before installing the Helm chart:

- `ZEEBE_BROKER_CLUSTER_INITIALCONTACTPOINTS`
- `ZEEBE_BROKER_EXPORTERS_ELASTICSEARCHREGION0_ARGS_URL`
- `ZEEBE_BROKER_EXPORTERS_ELASTICSEARCHREGION1_ARGS_URL`

1. The bash script [generate_zeebe_helm_values.sh](https://github.com/camunda/c8-multi-region/blob/main/aws/dual-region/scripts/generate_zeebe_helm_values.sh) in the repository folder `aws/dual-region/scripts/` helps to generate those values. You only have to copy and replace them within the base `camunda-values.yml`. It will use the exported environment variables of the [environment prerequisites](#environment-prerequisites) for namespaces and regions.

```bash
./generate_zeebe_helm_values.sh

# It will ask you to provide the following values
# Enter Helm release name used for installing Camunda 8 in both Kubernetes clusters:
## the way you'll call the Helm release, for example camunda
# Enter Zeebe cluster size (total number of Zeebe brokers in both Kubernetes clusters):
## for a dual-region setup we recommend 8, resulting in 4 brokers per region
```
+ Example output + + +:::danger +For illustration purposes. These values will not work in your environment! +::: + +```bash +./generate_zeebe_helm_values.sh +Enter Helm release name used for installing Camunda 8 in both Kubernetes clusters: camunda +Enter Zeebe cluster size (total number of Zeebe brokers in both Kubernetes clusters): 8 + +Please use the following to set the environment variable ZEEBE_BROKER_CLUSTER_INITIALCONTACTPOINTS in the base Camunda Helm chart values file for Zeebe. + +- name: ZEEBE_BROKER_CLUSTER_INITIALCONTACTPOINTS + value: camunda-zeebe-0.camunda-zeebe.camunda-london.svc.cluster.local:26502,camunda-zeebe-0.camunda-zeebe.camunda-paris.svc.cluster.local:26502,camunda-zeebe-1.camunda-zeebe.camunda-london.svc.cluster.local:26502,camunda-zeebe-1.camunda-zeebe.camunda-paris.svc.cluster.local:26502,camunda-zeebe-2.camunda-zeebe.camunda-london.svc.cluster.local:26502,camunda-zeebe-2.camunda-zeebe.camunda-paris.svc.cluster.local:26502,camunda-zeebe-3.camunda-zeebe.camunda-london.svc.cluster.local:26502,camunda-zeebe-3.camunda-zeebe.camunda-paris.svc.cluster.local:26502 + +Please use the following to set the environment variable ZEEBE_BROKER_EXPORTERS_ELASTICSEARCHREGION0_ARGS_URL in the base Camunda Helm chart values file for Zeebe. + +- name: ZEEBE_BROKER_EXPORTERS_ELASTICSEARCHREGION0_ARGS_URL + value: http://camunda-elasticsearch-master-hl.camunda-london.svc.cluster.local:9200 + +Please use the following to set the environment variable ZEEBE_BROKER_EXPORTERS_ELASTICSEARCHREGION1_ARGS_URL in the base Camunda Helm chart values file for Zeebe. + +- name: ZEEBE_BROKER_EXPORTERS_ELASTICSEARCHREGION1_ARGS_URL + value: http://camunda-elasticsearch-master-hl.camunda-paris.svc.cluster.local:9200 +``` + + +
2. As the script suggests, replace the environment variables within the `camunda-values.yml`.

### Deploy Camunda 8

1. From a terminal in `aws/dual-region/kubernetes`, execute:

```bash
helm install camunda camunda/camunda-platform \
  --version 9.3.1 \
  --kube-context $CLUSTER_0 \
  --namespace $CAMUNDA_NAMESPACE_0 \
  -f camunda-values.yml \
  -f region0/camunda-values.yml

helm install camunda camunda/camunda-platform \
  --version 9.3.1 \
  --kube-context $CLUSTER_1 \
  --namespace $CAMUNDA_NAMESPACE_1 \
  -f camunda-values.yml \
  -f region1/camunda-values.yml
```

### Verify Camunda 8

1. Open a terminal and port-forward the Zeebe Gateway via `kubectl` from one of your clusters. Zeebe stretches across both clusters and is active-active, meaning it doesn't matter which Zeebe Gateway you use to interact with your Zeebe cluster.

```bash
kubectl --context "$CLUSTER_0" --namespace "$CAMUNDA_NAMESPACE_0" port-forward services/camunda-zeebe-gateway 26500:26500
```

2. Open another terminal and use [zbctl](../../../../../apis-tools/cli-client/cli-get-started.md) to print the Zeebe cluster status:

```bash
zbctl status --insecure --address localhost:26500
```

3. Make sure that your output contains all 8 brokers from the two regions.
+ Example output + + +```bash +Cluster size: 8 +Partitions count: 8 +Replication factor: 4 +Gateway version: 8.5.0 +Brokers: + Broker 0 - camunda-zeebe-0.camunda-zeebe.camunda-london.svc:26501 + Version: 8.5.0 + Partition 1 : Follower, Healthy + Partition 6 : Follower, Healthy + Partition 7 : Follower, Healthy + Partition 8 : Follower, Healthy + Broker 1 - camunda-zeebe-0.camunda-zeebe.camunda-paris.svc:26501 + Version: 8.5.0 + Partition 1 : Follower, Healthy + Partition 2 : Leader, Healthy + Partition 7 : Follower, Healthy + Partition 8 : Follower, Healthy + Broker 2 - camunda-zeebe-1.camunda-zeebe.camunda-london.svc:26501 + Version: 8.5.0 + Partition 1 : Leader, Healthy + Partition 2 : Follower, Healthy + Partition 3 : Leader, Healthy + Partition 8 : Follower, Healthy + Broker 3 - camunda-zeebe-1.camunda-zeebe.camunda-paris.svc:26501 + Version: 8.5.0 + Partition 1 : Follower, Healthy + Partition 2 : Follower, Healthy + Partition 3 : Follower, Healthy + Partition 4 : Leader, Healthy + Broker 4 - camunda-zeebe-2.camunda-zeebe.camunda-london.svc:26501 + Version: 8.5.0 + Partition 2 : Follower, Healthy + Partition 3 : Follower, Healthy + Partition 4 : Follower, Healthy + Partition 5 : Leader, Healthy + Broker 5 - camunda-zeebe-2.camunda-zeebe.camunda-paris.svc:26501 + Version: 8.5.0 + Partition 3 : Follower, Healthy + Partition 4 : Follower, Healthy + Partition 5 : Follower, Healthy + Partition 6 : Follower, Healthy + Broker 6 - camunda-zeebe-3.camunda-zeebe.camunda-london.svc:26501 + Version: 8.5.0 + Partition 4 : Follower, Healthy + Partition 5 : Follower, Healthy + Partition 6 : Leader, Healthy + Partition 7 : Leader, Healthy + Broker 7 - camunda-zeebe-3.camunda-zeebe.camunda-paris.svc:26501 + Version: 8.5.0 + Partition 5 : Follower, Healthy + Partition 6 : Follower, Healthy + Partition 7 : Follower, Healthy + Partition 8 : Leader, Healthy +``` + + +
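Beyond the broker topology, you can optionally verify that Zeebe data reaches the Elasticsearch instance in each region. The following sketch assumes the Elasticsearch service name shown in the generated exporter URLs above and the default `zeebe-record` index prefix; adjust the names if your setup differs.

```bash
# Example for region 0; repeat with $CLUSTER_1 and $CAMUNDA_NAMESPACE_1 for region 1.
kubectl --context "$CLUSTER_0" --namespace "$CAMUNDA_NAMESPACE_0" \
  port-forward services/camunda-elasticsearch-master-hl 9200:9200

# In a second terminal: Zeebe record indices should be listed once the cluster has processed data.
curl -s "http://localhost:9200/_cat/indices/zeebe-record*?v"
```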
diff --git a/docs/self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/terraform-setup.md b/docs/self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/terraform-setup.md
index ca0b24e99f2..a634ea2f9c1 100644
--- a/docs/self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/terraform-setup.md
+++ b/docs/self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/terraform-setup.md
@@ -86,19 +86,21 @@ The [AWS Terraform provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) is required to create resources in AWS. You must configure the provider with the proper credentials before using it. You can further change the region and other preferences and explore different [authentication](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration) methods.

There are several ways to authenticate the `AWS` provider.

- (Recommended) Use the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) to configure access. Terraform will automatically default to AWS CLI configuration when present.
-- Set environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, where the `key` and `id` can be retrieved from the [AWS Console](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).
+- Set environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, which can be retrieved from the [AWS Console](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).

:::

:::warning

-Do not use secrets in your configuration files.
+Do not store sensitive information (credentials) in your Terraform files.

:::

:::warning

-The user who creates the resources will always be the owner. This means the user will always have admin access to the Kubernetes cluster until you delete it. Therefore, it can make sense to create an extra [AWS IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) that's solely used for Terraform purposes.
+The user who creates resources in AWS owns these resources. In this particular case, the user will always have admin access to the Kubernetes cluster until the cluster is deleted.
+
+Therefore, it can make sense to create an extra [AWS IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) whose credentials are used solely for Terraform purposes.
:::

diff --git a/optimize_sidebars.js b/optimize_sidebars.js
index 852aafff509..be1b84c2989 100644
--- a/optimize_sidebars.js
+++ b/optimize_sidebars.js
@@ -1787,6 +1787,10 @@ module.exports = {
         "Install Camunda 8 on an EKS cluster",
         "self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/eks-helm/"
       ),
+      docsLink(
+        "Dual-region setup",
+        "self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/dual-region/"
+      ),
       docsLink(
         "IAM roles for service accounts",
         "self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/irsa/"
diff --git a/sidebars.js b/sidebars.js
index e2fc81ac8ab..19eb4b04046 100644
--- a/sidebars.js
+++ b/sidebars.js
@@ -824,6 +824,7 @@ module.exports = {
         "self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/eks-eksctl",
         "self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/eks-terraform",
         "self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/eks-helm",
+        "self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/dual-region",
         "self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/irsa",
       ],
     },
diff --git a/versioned_docs/version-8.4/self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/terraform-setup.md b/versioned_docs/version-8.4/self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/terraform-setup.md
index ca0b24e99f2..a634ea2f9c1 100644
--- a/versioned_docs/version-8.4/self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/terraform-setup.md
+++ b/versioned_docs/version-8.4/self-managed/platform-deployment/helm-kubernetes/platforms/amazon-eks/terraform-setup.md
@@ -86,19 +86,21 @@ The [AWS Terraform provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) is required to create resources in AWS. You must configure the provider with the proper credentials before using it. You can further change the region and other preferences and explore different [authentication](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration) methods.

There are several ways to authenticate the `AWS` provider.

- (Recommended) Use the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) to configure access. Terraform will automatically default to AWS CLI configuration when present.
-- Set environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, where the `key` and `id` can be retrieved from the [AWS Console](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).
+- Set environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, which can be retrieved from the [AWS Console](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).

:::

:::warning

-Do not use secrets in your configuration files.
+Do not store sensitive information (credentials) in your Terraform files.

:::

:::warning

-The user who creates the resources will always be the owner. This means the user will always have admin access to the Kubernetes cluster until you delete it. Therefore, it can make sense to create an extra [AWS IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) that's solely used for Terraform purposes.
+The user who creates resources in AWS owns these resources. In this particular case, the user will always have admin access to the Kubernetes cluster until the cluster is deleted.
+
+Therefore, it can make sense to create an extra [AWS IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) whose credentials are used solely for Terraform purposes.

:::