From 525244639674a20c2f18932d5aa1f11e1e8044c3 Mon Sep 17 00:00:00 2001 From: Giuliano Rodrigues Lima <91879848+grlimacan@users.noreply.github.com> Date: Fri, 5 Apr 2024 12:21:25 +0300 Subject: [PATCH] docs(Optimize): Technical documentation for Optimize with OpenSearch (#3564) * Technical documentation for Optimize with OpenSearch * doc(optimize): Corrected one finding from review * style(formatting): technical review * doc(optimize): Implementing suggestions from Review * doc(optimize): Implementing suggestions from technical review --------- Co-authored-by: Christina Ausley --- docs/reference/supported-environments.md | 2 +- .../platform-deployment/docker.md | 24 +++-- .../configuration/getting-started.md | 4 +- .../configuration/service-config.yaml | 73 ++++++++++++++ .../shared-elasticsearch-cluster.md | 20 ++-- .../configuration/system-configuration.md | 98 +++++++++++++++---- 6 files changed, 187 insertions(+), 34 deletions(-) diff --git a/docs/reference/supported-environments.md b/docs/reference/supported-environments.md index fd381409aca..3d983f0dfb2 100644 --- a/docs/reference/supported-environments.md +++ b/docs/reference/supported-environments.md @@ -89,7 +89,7 @@ Requirements for the components can be seen below: | Operate | OpenJDK 17+ | Elasticsearch 8.9+
Amazon OpenSearch 2.5.x | | Tasklist | OpenJDK 17+ | Elasticsearch 8.9+
Amazon OpenSearch 2.5.x | | Identity | OpenJDK 17+ | Keycloak 22.x, 23.x
PostgreSQL 14.x, 15.x or Amazon Aurora PostgreSQL 13.x, 14.x, 15.x (required for [certain features](/self-managed/identity/deployment/configuration-variables.md#database-configuration)) | -| Optimize | OpenJDK 17+ | Elasticsearch 8.9+ | +| Optimize | OpenJDK 17+ | Elasticsearch 8.9+
Amazon OpenSearch 2.5.x | | Connectors | OpenJDK 21+ | | | Web Modeler | - | PostgreSQL 13.x, 14.x, 15.x, 16.x or Amazon Aurora PostgreSQL 13.x, 14.x, 15.x, 16.x | diff --git a/docs/self-managed/platform-deployment/docker.md b/docs/self-managed/platform-deployment/docker.md index 72e6d0f1119..768d5a347bc 100644 --- a/docs/self-managed/platform-deployment/docker.md +++ b/docs/self-managed/platform-deployment/docker.md @@ -131,16 +131,21 @@ Some configuration properties are optional and have default values. See a descri | Name | Description | Default value | | ------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------- | | SPRING_PROFILES_ACTIVE | Determines the mode Optimize is to be run in. For Self-Managed, set to `ccsm`. | +| CAMUNDA_OPTIMIZE_DATABASE | Determines the database Optimize will use. Allowed values: `elasticsearch` or `opensearch` | elasticsearch | | CAMUNDA_OPTIMIZE_IDENTITY_ISSUER_URL | The URL at which Identity can be accessed by Optimize. | | CAMUNDA_OPTIMIZE_IDENTITY_ISSUER_BACKEND_URL | The URL at which the Identity auth provider can be accessed by Optimize. This should match the configured provider in Identity and is to be used for container to container communication. | | CAMUNDA_OPTIMIZE_IDENTITY_CLIENTID | The Client ID used to register Optimize with Identity. | | CAMUNDA_OPTIMIZE_IDENTITY_CLIENTSECRET | The secret used when registering Optimize with Identity. | | CAMUNDA_OPTIMIZE_IDENTITY_AUDIENCE | The audience used when registering Optimize with Identity. | -| OPTIMIZE_ELASTICSEARCH_HOST | The address/hostname under which the Elasticsearch node is available. | localhost | -| OPTIMIZE_ELASTICSEARCH_HTTP_PORT | The port number used by Elasticsearch to accept HTTP connections. 
| 9200 | +| OPTIMIZE_ELASTICSEARCH_HOST\* | The address/hostname under which the Elasticsearch node is available. | localhost | +| OPTIMIZE_ELASTICSEARCH_HTTP_PORT\* | The port number used by Elasticsearch to accept HTTP connections. | 9200 | +| CAMUNDA_OPTIMIZE_OPENSEARCH_HOST\*\* | The address/hostname under which the OpenSearch node is available. | localhost | +| CAMUNDA_OPTIMIZE_OPENSEARCH_HTTP_PORT \*\* | The port number used by OpenSearch to accept HTTP connections. | 9205 | | CAMUNDA_OPTIMIZE_SECURITY_AUTH_COOKIE_SAME_SITE_ENABLED | Determines if `same-site` is enabled for Optimize cookies. This must be set to `false`. | true | -| CAMUNDA_OPTIMIZE_ELASTICSEARCH_SECURITY_USERNAME | The username for authentication in environments where a secured Elasticsearch connection is configured. | -| CAMUNDA_OPTIMIZE_ELASTICSEARCH_SECURITY_PASSWORD | The password for authentication in environments where a secured Elasticsearch connection is configured. | +| CAMUNDA_OPTIMIZE_ELASTICSEARCH_SECURITY_USERNAME \* | The username for authentication in environments where a secured Elasticsearch connection is configured. | +| CAMUNDA_OPTIMIZE_ELASTICSEARCH_SECURITY_PASSWORD \* | The password for authentication in environments where a secured Elasticsearch connection is configured. | +| CAMUNDA_OPTIMIZE_OPENSEARCH_SECURITY_USERNAME\*\* | The username for authentication in environments where a secured OpenSearch connection is configured. | +| CAMUNDA_OPTIMIZE_OPENSEARCH_SECURITY_PASSWORD\*\* | The password for authentication in environments where a secured OpenSearch connection is configured. | | CAMUNDA_OPTIMIZE_ENTERPRISE | This should only be set to `true` if an Enterprise License has been acquired. | true | | CAMUNDA_OPTIMIZE_ZEEBE_ENABLED | Enables import of Zeebe data in Optimize. | false | | CAMUNDA_OPTIMIZE_ZEEBE_NAME | The record prefix for exported Zeebe records. | zeebe-record | @@ -149,6 +154,13 @@ Some configuration properties are optional and have default values. 
See a descri | SPRING_SECURITY_OAUTH2_RESOURCESERVER_JWT_JWK_SET_URI | Authentication for the Public REST API using a resource server to validate the JWT token. Complete URI to get public keys for JWT validation. | null | | OPTIMIZE_API_ACCESS_TOKEN | Authentication for the Public REST API using a static shared token. Will be ignored if SPRING_SECURITY_OAUTH2_RESOURCESERVER_JWT_JWK_SET_URI is also set. | null | +\* Only relevant when `CAMUNDA_OPTIMIZE_DATABASE` is either undefined or has the value `elasticsearch`.
+\*\* Only relevant when `CAMUNDA_OPTIMIZE_DATABASE` has the value `opensearch`.
+
+:::note
+OpenSearch support in Optimize is limited to data import and the raw data report. The remaining functionality will be delivered with upcoming patches.
+:::
+
 For example, this `docker-compose` configuration:

 ```
@@ -176,12 +188,12 @@ optimize:
     - OPTIMIZE_API_ACCESS_TOKEN=secret
 ```

-Self-Managed Optimize must be able to connect to Elasticsearch to write and read data. In addition, Optimize needs to connect to Identity for authentication purposes. Both of these requirements can be configured with the options described above.
+Self-Managed Optimize must be able to connect to the configured database to write and read data. In addition, Optimize needs to connect to Identity for authentication purposes. Both of these requirements can be configured with the options described above.

 Optimize must also be configured as a client in Identity, and users will only be granted access to Optimize if they have a role that has `write:*` permission for Optimize.

-For Optimize to import Zeebe data, Optimize must also be configured to be aware of the record prefix used when the records are exported to Elasticsearch. This can also be configured per the example above.
+For Optimize to import Zeebe data, Optimize must also be configured to be aware of the record prefix used when the records are exported to the database. This can also be configured per the example above. 
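To show how the OpenSearch-related options above fit together, here is a minimal sketch of an Optimize service entry configured against OpenSearch. The hostname, port, and credential values are illustrative, not defaults:

```
optimize:
  environment:
    # run Optimize in Self-Managed mode
    - SPRING_PROFILES_ACTIVE=ccsm
    # switch the database from the default Elasticsearch to OpenSearch
    - CAMUNDA_OPTIMIZE_DATABASE=opensearch
    # connection details for the OpenSearch node (illustrative values)
    - CAMUNDA_OPTIMIZE_OPENSEARCH_HOST=opensearch
    - CAMUNDA_OPTIMIZE_OPENSEARCH_HTTP_PORT=9205
    # only needed when the OpenSearch connection is secured
    - CAMUNDA_OPTIMIZE_OPENSEARCH_SECURITY_USERNAME=admin
    - CAMUNDA_OPTIMIZE_OPENSEARCH_SECURITY_PASSWORD=secret
```

With `CAMUNDA_OPTIMIZE_DATABASE=opensearch` set, the Elasticsearch-specific variables (`OPTIMIZE_ELASTICSEARCH_HOST`, `OPTIMIZE_ELASTICSEARCH_HTTP_PORT`, and the Elasticsearch security credentials) are not relevant, per the footnotes above.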
### Connectors diff --git a/optimize/self-managed/optimize-deployment/configuration/getting-started.md b/optimize/self-managed/optimize-deployment/configuration/getting-started.md index ddabf572223..e5b9abe36b6 100644 --- a/optimize/self-managed/optimize-deployment/configuration/getting-started.md +++ b/optimize/self-managed/optimize-deployment/configuration/getting-started.md @@ -12,9 +12,9 @@ You can see all supported values and read about logging configuration [here](./s Refer to the [configuration section on container settings](./system-configuration.md) for more information on how to adjust the Optimize web container configuration. -## Elasticsearch configuration +## Elasticsearch/OpenSearch configuration -You can customize the [Elasticsearch connection settings](./system-configuration.md#connection-settings) as well as the [index settings](./system-configuration.md#index-settings). +You can customize the [Elasticsearch/OpenSearch connection settings](./system-configuration.md#connection-settings) as well as the [index settings](./system-configuration.md#index-settings). ## Camunda 7 configuration diff --git a/optimize/self-managed/optimize-deployment/configuration/service-config.yaml b/optimize/self-managed/optimize-deployment/configuration/service-config.yaml index 003bc705958..f650474ba84 100644 --- a/optimize/self-managed/optimize-deployment/configuration/service-config.yaml +++ b/optimize/self-managed/optimize-deployment/configuration/service-config.yaml @@ -345,6 +345,79 @@ es: # process instance can contain. This limit helps to prevent out of memory errors and should be used with care. nested_documents_limit: 10000 +# everything that is related with configuring OpenSearch or creating +# a connection to it. +opensearch: + connection: + # Maximum time without connection to OpenSearch, Optimize should + # wait until a timeout triggers. + timeout: 10000 + # Maximum size of the OpenSearch response consumer heap buffer. 
+ responseConsumerBufferLimitInMb: 100 + # The path prefix under which OpenSearch is available + pathPrefix: "" + # a list of OpenSearch nodes Optimize can connect to. If you have built + # an OpenSearch cluster with several nodes it is recommended to define + # several connection points in case one node fails. + nodes: + # the address/hostname under which the OpenSearch node is available. + - host: "localhost" + # A port number used by OpenSearch to accept HTTP connections. + httpPort: 9200 + # Determines whether the hostname verification should be skipped + skipHostnameVerification: false + # Configuration relating to OS backup + backup: + # The repository name in which the backups should be stored + repositoryName: "" + # OpenSearch security settings + security: + # the basic auth (x-pack) username + username: null + # the basic auth (x-pack) password + password: null + # SSL/HTTPS secured connection settings + ssl: + # path to a PEM encoded file containing the certificate (or certificate chain) + # that will be presented to clients when they connect. + certificate: null + # A list of paths to PEM encoded CA certificate files that should be trusted, e.g. ['/path/to/ca.crt']. + # Note: if you are using a public CA that is already trusted by the Java runtime, + # you do not need to set the certificate_authorities. + certificate_authorities: [] + # used to enable or disable TLS/SSL for the HTTP connection + enabled: false + # used to specify that the certificate was self-signed + selfSigned: false + # Maximum time in seconds a request to opensearch should last, before a timeout + # triggers. 
+  scrollTimeoutInSeconds: 60
+  settings:
+    # the maximum number of buckets returned for an aggregation
+    aggregationBucketLimit: 1000
+    index:
+      # the prefix prepended to all Optimize index and alias names
+      # NOTE: Changing this after Optimize has already run will create new empty indexes
+      prefix: "optimize"
+      # How often the data should be replicated in case of node failure.
+      number_of_replicas: 1
+      # How many shards should be used in the cluster for process instance and decision instance indices.
+      # All other indices will be made up of a single shard
+      # NOTE: this property only applies the first time Optimize is started and
+      # the schema/mapping is deployed on OpenSearch. If you want this property
+      # to take effect again, you need to delete all indexes (and with that all data)
+      # and restart Optimize. This configuration will also only be applied to the current write instance indices. Archive
+      # indices will have a single shard regardless.
+      number_of_shards: 1
+      # How long OpenSearch waits until the documents are available
+      # for search. A positive value defines the duration in seconds.
+      # A value of -1 means that a refresh needs to be done manually.
+      refresh_interval: 2s
+      # Optimize uses nested documents to store list information such as activities or variables belonging to a
+      # process instance. This setting defines the maximum number of activities/variables that a single
+      # process instance can contain. This limit helps to prevent out of memory errors and should be used with care. 
+ nested_documents_limit: 10000 + plugin: # Defines the directory path in the local Optimize file system which should be checked for plugins directory: "./plugin" diff --git a/optimize/self-managed/optimize-deployment/configuration/shared-elasticsearch-cluster.md b/optimize/self-managed/optimize-deployment/configuration/shared-elasticsearch-cluster.md index 27a569b66e0..67ad1883210 100644 --- a/optimize/self-managed/optimize-deployment/configuration/shared-elasticsearch-cluster.md +++ b/optimize/self-managed/optimize-deployment/configuration/shared-elasticsearch-cluster.md @@ -1,21 +1,27 @@ --- id: shared-elasticsearch-cluster -title: "Shared Elasticsearch cluster" -description: "Operate multiple Optimize instances on a shared Elasticsearch cluster." +title: "Shared Elasticsearch/OpenSearch cluster" +description: "Operate multiple Optimize instances on a shared Elasticsearch/OpenSearch cluster." --- -In case you have a large shared Elasticsearch cluster that you want to operate multiple Optimize instances on that are intended to run in complete isolation from each other, it is required to change the [`es.settings.index.prefix`](./system-configuration.md#index-settings) setting for each Optimize instance. +In case you have a large shared Elasticsearch/OpenSearch cluster that you want to operate multiple Optimize instances on that are intended to run in complete isolation from each other, it is required to change the [`*.settings.index.prefix`](./system-configuration.md#index-settings) setting for each Optimize instance. :::note Heads Up! -Although a shared Elasticsearch cluster setup is possible, it's recommended to operate a dedicated Elasticsearch cluster per Optimize instance. +Although a shared Elasticsearch/OpenSearch cluster setup is possible, it's recommended to operate a dedicated Elasticsearch/OpenSearch cluster per Optimize instance. 
-This is due to the fact that a dedicated cluster provides the highest reliability (no resource sharing and no breaking side effects due to misconfiguration) and flexibility (e.g. Elasticsearch and/or Optimize updates can be performed independently between different Optimize setups).
+This is because a dedicated cluster provides the highest reliability (no resource sharing and no breaking side effects due to misconfiguration) and flexibility (e.g. Elasticsearch/OpenSearch and/or Optimize updates can be performed independently between different Optimize setups).
:::

-The following illustration demonstrates this use case with two Optimize instances that connect to the same Elasticsearch cluster but are configured with different `es.settings.index.prefix` values. This results in different indexes and aliases created on the cluster, strictly isolating the data of both Optimize instances, so no instance accesses the data of the other instance.
+The following illustration demonstrates this use case with two Optimize instances that connect to the same Elasticsearch/OpenSearch cluster but are configured with different `*.settings.index.prefix` values. This results in different indexes and aliases created on the cluster, strictly isolating the data of both Optimize instances, so no instance accesses the data of the other instance.

:::note Warning
-Changing the value of `es.settings.index.prefix` after an instance was already running results in new indexes being created with the new prefix value. There is no support in migrating data between indexes based on different prefixes.
+Changing the value of `*.settings.index.prefix` after an instance was already running results in new indexes being created with the new prefix value. There is no support for migrating data between indexes based on different prefixes.
:::

+:::note
+OpenSearch support is currently only available for `ccsm` mode. 
Moreover, OpenSearch support in Optimize is limited to data import and the raw data report. The remaining functionality will be delivered with upcoming patches. +::: + +\* Elasticsearch index prefix settings path: `es.settings.index.prefix`
\* OpenSearch index prefix settings path: `opensearch.settings.index.prefix` + ![Shared Elasticsearch Cluster Setup](img/shared-elasticsearch-cluster.png) diff --git a/optimize/self-managed/optimize-deployment/configuration/system-configuration.md b/optimize/self-managed/optimize-deployment/configuration/system-configuration.md index 0460bec3b90..25af920729a 100644 --- a/optimize/self-managed/optimize-deployment/configuration/system-configuration.md +++ b/optimize/self-managed/optimize-deployment/configuration/system-configuration.md @@ -128,7 +128,7 @@ Settings related to embedded Jetty container, which serves the Optimize applicat ### Elasticsearch -Settings related to Elasticsearch. +These settings are only relevant when operating Optimize with Elasticsearch. #### Connection settings @@ -181,6 +181,68 @@ Define a secured connection to be able to communicate with a secured Elasticsear | ------------------------ | ------------- | ------------------------------------------------------------------------ | | es.backup.repositoryName | "" | The name of the snapshot repository to be used to back up Optimize data. | +### OpenSearch + +These settings are only relevant when operating Optimize with OpenSearch. + +:::note +OpenSearch support is currently only available for `ccsm` mode. Moreover, OpenSearch support in Optimize is limited to data import and the raw data report. The remaining functionality will be delivered with upcoming patches. +::: + +#### Connection settings + +This section details everything related to building the connection to OpenSearch. + +:::note +You can define a number of connection points in a cluster. Therefore, everything under `opensearch.connection.nodes` is a list of nodes Optimize can connect to. If you have built an OpenSearch cluster with several nodes, it is recommended to define several connection points so if one node fails, Optimize is still able to talk to the cluster. 
+::: + +| YAML path | Default value | Description | +| ----------------------------------------------------- | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| opensearch.connection.timeout | 10000 | Maximum time without connection to OpenSearch that Optimize should wait until a timeout triggers. | +| opensearch.connection.responseConsumerBufferLimitInMb | 100 | Maximum size of the OpenSearch response consumer heap buffer. This can be increased to resolve errors from OpenSearch relating to the entity content being too long. | +| opensearch.connection.pathPrefix | | The path prefix under which OpenSearch is available. | +| opensearch.connection.nodes[*].host | localhost | The address/hostname under which the OpenSearch node is available. | +| opensearch.connection.nodes[*].httpPort | 9200 | A port number used by OpenSearch to accept HTTP connections. | +| opensearch.connection.proxy.enabled | false | Whether an HTTP proxy should be used for requests to OpenSearch. | +| opensearch.connection.proxy.host | null | The proxy host to use, must be set if `opensearch.connection.proxy.enabled = true`. | +| opensearch.connection.proxy.port | null | The proxy port to use, must be set if `opensearch.connection.proxy.enabled = true`. | +| opensearch.connection.proxy.sslEnabled | false | Whether this proxy is using a secured connection (HTTPS). | +| opensearch.connection.skipHostnameVerification | false | Determines whether the hostname verification should be skipped. 
|

#### Index settings

| YAML path | Default value | Description |
| ------------------------------------------------ | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| opensearch.settings.index.prefix | optimize | The prefix prepended to all Optimize index and alias names. Custom values allow you to operate multiple isolated Optimize instances on one OpenSearch cluster.

NOTE: Changing this after Optimize has already run will create new empty indexes. | +| opensearch.settings.index.number_of_replicas | 1 | How often data should be replicated to handle node failures. | +| opensearch.settings.index.number_of_shards | 1 | How many shards should be used in the cluster for process instance and decision instance indices. All other indices will be made up of a single shard.

NOTE: This property only applies the first time Optimize is started and the schema/mapping is deployed on OpenSearch. If you want this property to take effect again, you need to delete all indices (and with that all data) and restart Optimize. | +| opensearch.settings.index.refresh_interval | 2s | How long OpenSearch waits until the documents are available for search. A positive value defines the duration in seconds. A value of -1 means a refresh needs to be done manually. | +| opensearch.settings.index.nested_documents_limit | 10000 | Optimize uses nested documents to store list information such as activities or variables belonging to a process instance. This setting defines the maximum number of activities, variables, or incidents that a single process instance can contain. This limit helps to prevent out of memory errors and should be used with care. For more information, refer to the OpenSearch documentation on this topic. | + +#### OpenSearch security + +Define a secured connection to be able to communicate with a secured OpenSearch instance. + +| YAML path | Default value | Description | +| ----------------------------------------------- | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| opensearch.security.username | | The basic authentication (x-pack) username. | +| opensearch.security.password | | The basic authentication (x-pack) password. | +| opensearch.security.ssl.enabled | false | Used to enable or disable TLS/SSL for the HTTP connection. | +| opensearch.security.ssl.certificate | | The path to a PEM encoded file containing the certificate (or certificate chain) that will be presented to clients when they connect. 
| +| opensearch.security.ssl.certificate_authorities | [ ] | A list of paths to PEM encoded CA certificate files that should be trusted, for example ['/path/to/ca.crt'].

NOTE: if you are using a public CA that is already trusted by the Java runtime, you do not need to set the certificate_authorities. | +| opensearch.security.ssl.selfSigned | false | Used to specify that the certificate was self-signed. | + +#### OpenSearch backup settings + +| YAML path | Default value | Description | +| -------------------------------- | ------------- | ------------------------------------------------------------------------ | +| opensearch.backup.repositoryName | "" | The name of the snapshot repository to be used to back up Optimize data. | + +:::note +The backup functionality is not yet supported for OpenSearch. +::: + ### Email Settings for the email server to send email notifications, e.g. when an alert is triggered. @@ -230,20 +292,20 @@ Settings for automatic cleanup of historic process/decision instances based on t Two types of history cleanup are available for Camunda 8 users at this time - process data cleanup and external variable cleanup. For more information, see [History cleanup](/optimize/self-managed/optimize-deployment/configuration/history-cleanup.md). ::: -| YAML path | Default value | Description | -| -------------------------------------------------------------------------------- | ------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| historyCleanup.cronTrigger | `'0 1 * * *'` | Cron expression to schedule when the cleanup should be executed, defaults to 01:00 A.M. 
As the cleanup can cause considerable load on the underlying Elasticsearch database it is recommended to schedule it outside of office hours. You can either use the default Cron (5 fields) or the Spring Cron (6 fields) expression format here. For details on the format please refer to: [Cron Expression Description](https://en.wikipedia.org/wiki/Cron) or [Spring Cron Expression Documentation](https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/scheduling/support/CronSequenceGenerator.html) | -| historyCleanup.ttl | 'P2Y' | Global time to live (ttl) period for process/decision/event data. The relevant property differs between entities. For process data, it's the `endTime` of the process instance. For decision data, it's the `evaluationTime` and for ingested events it's the `time` field. The format of the string is ISO_8601 duration. The default value is 2 years. For details on the notation refer to: [https://en.wikipedia.org/wiki/ISO_8601#Durations](https://en.wikipedia.org/wiki/ISO_8601#Durations) Note: The time component of the ISO_8601 duration is not supported. Only years (Y), months (M) and days (D) are. | -| historyCleanup.processDataCleanup.enabled | false | A switch to activate the history cleanup of process data. \[true/false\] | -| historyCleanup.processDataCleanup.cleanupMode | 'all' | Global type of the cleanup to perform for process instances, possible values: 'all' - delete everything related and including the process instance that passed the defined ttl 'variables' - only delete variables of a process instance Note: This doesn't affect the decision instance cleanup which always deletes the whole instance. | -| historyCleanup.processDataCleanup.batchSize | 10000 | Defines the batch size in which Camunda engine process instance data gets cleaned up. It may be reduced if requests fail due to request size constraints. 
In most cases, this should not be necessary and has only been experienced when connecting to an AWS Elasticsearch instance. | -| historyCleanup.processDataCleanup.perProcessDefinitionConfig | | A list of process definition specific configuration parameters that will overwrite the global cleanup settings for the specific process definition identified by its ${key}. | -| historyCleanup.processDataCleanup .perProcessDefinitionConfig.${key}.ttl | | Time to live to use for process instances of the process definition with the ${key}. | -| historyCleanup.processDataCleanup .perProcessDefinitionConfig.${key}.cleanupMode | | Cleanup mode to use for process instances of the process definition with the ${key}. | -| historyCleanup.decisionDataCleanup.enabled | false | A switch to activate the history cleanup of decision data. \[true/false\] | -| historyCleanup.decisionDataCleanup.perDecisionDefinitionConfig | | A list of decision definition specific configuration parameters that will overwrite the global cleanup settings for the specific decision definition identified by its ${key}. | -| historyCleanup.decisionDataCleanup .perDecisionDefinitionConfig.${key}.ttl | | Time to live to use for decision instances of the decision definition with the ${key}. | -| historyCleanup.ingestedEventCleanup.enabled | false | A switch to activate the history cleanup of ingested event data. 
\[true/false\] |
+| YAML path | Default value | Description |
+| -------------------------------------------------------------------------------- | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| historyCleanup.cronTrigger | `'0 1 * * *'` | Cron expression to schedule when the cleanup should be executed; defaults to 01:00 A.M. As the cleanup can cause considerable load on the underlying database, it is recommended to schedule it outside of office hours. You can either use the default Cron (5 fields) or the Spring Cron (6 fields) expression format here. For details on the format, refer to: [Cron Expression Description](https://en.wikipedia.org/wiki/Cron) or [Spring Cron Expression Documentation](https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/scheduling/support/CronSequenceGenerator.html) |
+| historyCleanup.ttl | 'P2Y' | Global time to live (ttl) period for process/decision/event data. The relevant property differs between entities. For process data, it's the `endTime` of the process instance. For decision data, it's the `evaluationTime`, and for ingested events it's the `time` field. The string uses the ISO_8601 duration format; the default value is 2 years. For details on the notation, refer to: [https://en.wikipedia.org/wiki/ISO_8601#Durations](https://en.wikipedia.org/wiki/ISO_8601#Durations) Note: The time component of the ISO_8601 duration is not supported. 
Only years (Y), months (M), and days (D) are. |
+| historyCleanup.processDataCleanup.enabled | false | A switch to activate the history cleanup of process data. \[true/false\] |
+| historyCleanup.processDataCleanup.cleanupMode | 'all' | Global type of the cleanup to perform for process instances. Possible values: 'all' - deletes everything related to, and including, the process instance that passed the defined ttl; 'variables' - deletes only the variables of a process instance. Note: This doesn't affect the decision instance cleanup, which always deletes the whole instance. |
+| historyCleanup.processDataCleanup.batchSize | 10000 | Defines the batch size in which Camunda engine process instance data gets cleaned up. It may be reduced if requests fail due to request size constraints. In most cases this should not be necessary, and has only been observed when connecting to an AWS Elasticsearch instance. |
+| historyCleanup.processDataCleanup.perProcessDefinitionConfig | | A list of process definition-specific configuration parameters that overwrite the global cleanup settings for the process definition identified by its ${key}. |
+| historyCleanup.processDataCleanup .perProcessDefinitionConfig.${key}.ttl | | Time to live to use for process instances of the process definition with the ${key}. |
+| historyCleanup.processDataCleanup .perProcessDefinitionConfig.${key}.cleanupMode | | Cleanup mode to use for process instances of the process definition with the ${key}. |
+| historyCleanup.decisionDataCleanup.enabled | false | A switch to activate the history cleanup of decision data. \[true/false\] |
+| historyCleanup.decisionDataCleanup.perDecisionDefinitionConfig | | A list of decision definition-specific configuration parameters that overwrite the global cleanup settings for the decision definition identified by its ${key}. 
 |
+| historyCleanup.decisionDataCleanup .perDecisionDefinitionConfig.${key}.ttl | | Time to live to use for decision instances of the decision definition with the ${key}. |
+| historyCleanup.ingestedEventCleanup.enabled | false | A switch to activate the history cleanup of ingested event data. \[true/false\] |

### Localization

@@ -280,9 +342,9 @@ Customize the Optimize UI e.g. by adjusting the logo, head background color etc.

Configuration of initial telemetry settings.

-| YAML path | Default value | Description |
-| ----------------------------- | ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| telemetry.initializeTelemetry | false | Decides whether telemetry is initially enabled or disabled when Optimize starts. Thereafter, telemetry can be turned on and off in the UI by superusers. If enabled, information about the setup and usage of the Optimize is sent to remote Camunda servers for the sake of analytical evaluation. When enabled, the following information is sent every 24 hours: Optimize version, License Key, Optimize installation ID, Elasticsearch version.

Legal note: Before you install Camunda Optimize version >= 3.2.0 or activate the telemetric functionality, please make sure that you are authorized to take this step, and that the installation or activation of the telemetric functionality is not in conflict with any internal company policies, compliance guidelines, any contractual or other provisions or obligations of your company. Camunda cannot be held responsible in the event of unauthorized installation or activation of this function. |
+| YAML path | Default value | Description |
+| ----------------------------- | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| telemetry.initializeTelemetry | false | Decides whether telemetry is initially enabled or disabled when Optimize starts. Thereafter, telemetry can be turned on and off in the UI by superusers. If enabled, information about the setup and usage of Optimize is sent to remote Camunda servers for analytical evaluation. 
When enabled, the following information is sent every 24 hours: Optimize version, License Key, Optimize installation ID, Database version.

Legal note: Before you install Camunda Optimize version >= 3.2.0 or activate the telemetric functionality, please make sure that you are authorized to take this step, and that the installation or activation of the telemetric functionality is not in conflict with any internal company policies, compliance guidelines, any contractual or other provisions or obligations of your company. Camunda cannot be held responsible in the event of unauthorized installation or activation of this function. |

### Other
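Taken together, the `historyCleanup.*` settings documented in the table above map onto a single `historyCleanup` block in the Optimize configuration YAML. The following is only an illustrative sketch, not a recommended default: the definition key `invoice` and the per-definition override values (`P6M`, `variables`) are hypothetical placeholders.

```yaml
historyCleanup:
  # Run the cleanup daily at 01:00 A.M., outside office hours.
  cronTrigger: '0 1 * * *'
  # Global retention of 2 years (ISO_8601 duration; time components are not supported).
  ttl: 'P2Y'
  processDataCleanup:
    enabled: true
    cleanupMode: 'all'
    batchSize: 10000
    perProcessDefinitionConfig:
      # 'invoice' is a hypothetical process definition key used for illustration.
      invoice:
        ttl: 'P6M'
        cleanupMode: 'variables'
  decisionDataCleanup:
    enabled: true
  ingestedEventCleanup:
    enabled: true
```

A per-definition entry overrides the global `ttl` and `cleanupMode` only for instances of that definition; all other definitions fall back to the global values.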