diff --git a/optimize/apis-tools/optimize-api/event-ingestion.md b/optimize/apis-tools/optimize-api/event-ingestion.md
deleted file mode 100644
index 6e43d73fd6c..00000000000
--- a/optimize/apis-tools/optimize-api/event-ingestion.md
+++ /dev/null
@@ -1,194 +0,0 @@
----
-id: event-ingestion
-title: "Event ingestion"
-description: "The REST API to ingest external events into Optimize."
----
-
-Camunda 7 only
-
-The Event Ingestion REST API ingests business process-related event data from any third-party system into Camunda Optimize. These events can then be correlated into an [event-based process](components/userguide/additional-features/event-based-processes.md) in Optimize to gain business insights into processes that are not yet fully modeled or automated using Camunda 7.
-
-## Functionality
-
-The Event Ingestion REST API has the following functionality:
-
-1. Ingest new event data in batches, see the example on [ingesting three cloud events](#ingest-cloud-events).
-2. Reingest/override previously ingested events, see the example on [reingesting cloud events](#reingest-cloud-events).
-
-## CloudEvents compliance
-
-To provide the best interoperability possible, the Optimize Event Ingestion REST API implements the [CloudEvents Version 1.0](https://github.com/cloudevents/spec/blob/v1.0/spec.md) specification, which is hosted by the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/).
-
-In particular, the Optimize Event Ingestion REST API is a CloudEvents consumer implemented as an HTTP Web Hook, as defined by the [CloudEvents HTTP 1.1 Web Hooks for Event Delivery - Version 1.0](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md) specification. Following the [Structured Content Mode](https://github.com/cloudevents/spec/blob/v1.0/http-protocol-binding.md#32-structured-content-mode) of the [HTTP Protocol Binding for CloudEvents - Version 1.0](https://github.com/cloudevents/spec/blob/v1.0/http-protocol-binding.md), event context attributes and event data is encoded in the [JSON Batch Format](https://github.com/cloudevents/spec/blob/v1.0/json-format.md#4-json-batch-format) of the [CloudEvents JSON Event Format Version 1.0](https://github.com/cloudevents/spec/blob/v1.0/json-format.md).
-
-## Authentication
-
-As required by the [CloudEvents HTTP 1.1 Web Hooks for Event Delivery - Version 1.0](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#3-authorization) specification, every [Event Ingestion REST API Request](#method-and-http-target-resource) needs to include an authentication token as an [`Authorization`](https://tools.ietf.org/html/rfc7235#section-4.2) request header.
-
-Details on how to configure and pass this token can be found [here](./optimize-api-authentication.md).
-
-## Method and HTTP target resource
-
-POST `/api/ingestion/event/batch`
-
-## Request headers
-
-The following request headers have to be provided with every ingest request:
-
-| Header | Constraints | Value |
-| -------------- | ----------- | -------------------------------------------------------------------------------------------------------------------------------------- |
-| Authorization  | REQUIRED    | See [authentication](./optimize-api-authentication.md)                                                                                   |
-| Content-Length | REQUIRED | Size in bytes of the entity-body, also see [Content-Length](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Length). |
-| Content-Type | REQUIRED | Must be one of: `application/cloudevents-batch+json` or `application/json` |
-
-## Request body
-
-[JSON Batch Format](https://github.com/cloudevents/spec/blob/v1.0/json-format.md#4-json-batch-format) compliant JSON Array of CloudEvent JSON Objects:
-
-| Name | Type | Constraints | Description |
-| -------------------------------------------------------------------------------- | ------------------------------------------------------------------------------ | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| [specversion](https://github.com/cloudevents/spec/blob/v1.0/spec.md#specversion) | String | REQUIRED | The version of the CloudEvents specification, which the event uses, must be `1.0`. See [CloudEvents - Version 1.0 - specversion](https://github.com/cloudevents/spec/blob/v1.0/spec.md#specversion). |
-| [ID](https://github.com/cloudevents/spec/blob/v1.0/spec.md#id) | String | REQUIRED | Uniquely identifies an event, see [CloudEvents - Version 1.0 - ID](https://github.com/cloudevents/spec/blob/v1.0/spec.md#id). |
-| [source](https://github.com/cloudevents/spec/blob/v1.0/spec.md#source-1)         | String                                                                         | REQUIRED    | Identifies the context in which an event happened, see [CloudEvents - Version 1.0 - source](https://github.com/cloudevents/spec/blob/v1.0/spec.md#source-1). A typical use case is resolving conflicting types across different sources. For example, a `type:OrderProcessed` event may originate from both `order-service` and `shipping-service`; in this case, the `source` field provides a means to clearly distinguish the origins of a particular event. Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. |
-| [type](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type) | String | REQUIRED | This attribute contains a value describing the type of event related to the originating occurrence, see [CloudEvents - Version 1.0 - type](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type). Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. The value `camunda` cannot be used for this field. |
-| [time](https://github.com/cloudevents/spec/blob/v1.0/spec.md#time)               | [Timestamp](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type-system) | OPTIONAL    | Timestamp of when the occurrence happened, see [CloudEvents - Version 1.0 - time](https://github.com/cloudevents/spec/blob/v1.0/spec.md#time). String encoding: [RFC 3339](https://tools.ietf.org/html/rfc3339). If not present, it defaults to the time the event was received. |
-| [data](https://github.com/cloudevents/spec/blob/v1.0/spec.md#event-data)         | Object                                                                         | OPTIONAL    | Event payload data that is part of the event, see [CloudEvents - Version 1.0 - Event Data](https://github.com/cloudevents/spec/blob/v1.0/spec.md#event-data). This CloudEvents consumer API only accepts data encoded as `application/json`; the optional attribute [CloudEvents - Version 1.0 - datacontenttype](https://github.com/cloudevents/spec/blob/v1.0/spec.md#datacontenttype) is thus not required to be provided by the producer. Furthermore, there are no schema restrictions on the `data` attribute, so the attribute [CloudEvents - Version 1.0 - dataschema](https://github.com/cloudevents/spec/blob/v1.0/spec.md#dataschema) is also not required. Producers may provide any valid JSON object, but only simple properties of that object will get converted to variables of process instances of an [event-based process](self-managed/optimize-deployment/configuration/setup-event-based-processes.md) later on. |
-| group                                                                             | String                                                                         | OPTIONAL    | This is an OPTIONAL [CloudEvents Extension Context Attribute](https://github.com/cloudevents/spec/blob/v1.0/spec.md#extension-context-attributes) that is specific to this API. A group identifier that makes it easier to identify a set of related events when mapping events to a process model. An example could be a domain of events that are most likely related to each other; for example, `billing`. When this field is provided, it will be used to allow adding events that belong to a group to the [mapping table](components/userguide/additional-features/event-based-processes.md#external-events). Optimize handles groups case-sensitively. Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. |
-| traceid                                                                           | String                                                                         | REQUIRED    | This is a REQUIRED [CloudEvents Extension Context Attribute](https://github.com/cloudevents/spec/blob/v1.0/spec.md#extension-context-attributes) that is specific to this API. A traceid is a correlation key that relates multiple events to a single business transaction or, in BPMN terms, a process instance. Events with the same traceid will get correlated into one process instance of an event-based process. |
-
-The following is an example of a valid `data` value. Each of these properties would be available as a variable in any [event-based process](self-managed/optimize-deployment/configuration/setup-event-based-processes.md) where an event containing this `data` was mapped:
-
-```json
- {
- "reviewSuccessful": true,
- "amount": 10.5,
- "customerId": "lovelyCustomer1"
- }
-```
-
-Nested objects, such as `customer` in this example, would not be available as a variable in event-based processes where an event containing this `data` value was mapped:
-
-```json
- {
- "customer": {
- "firstName":"John",
- "lasTName":"Doe"
- }
- }
-```
-
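-For illustration, here is a minimal Python sketch (using the `requests` library) that sends a batch of cloud events to this endpoint. The local URL and the `mySecret` token are assumptions for the example and need to be adjusted to your setup:
-
-```python
-import requests
-
-# Assumed local Optimize endpoint and access token; adjust to your setup.
-OPTIMIZE_URL = "http://localhost:8090/api/ingestion/event/batch"
-ACCESS_TOKEN = "mySecret"
-
-events = [
-    {
-        "specversion": "1.0",
-        "id": "1edc4160-74e5-4ffc-af59-2d281cf5aca341",
-        "source": "order-service",
-        "type": "orderCreated",
-        "time": "2020-01-01T10:00:00.000Z",
-        "traceid": "id1",
-        "group": "shop",
-        "data": {"reviewSuccessful": True, "amount": 10.5},
-    }
-]
-
-# requests computes the Content-Length header automatically.
-response = requests.post(
-    OPTIMIZE_URL,
-    json=events,
-    headers={
-        "Authorization": f"Bearer {ACCESS_TOKEN}",
-        "Content-Type": "application/cloudevents-batch+json",
-    },
-)
-response.raise_for_status()  # a successful ingest returns HTTP 204
-```
-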
-## Result
-
-This method returns no content.
-
-## Response codes
-
-Possible HTTP response status codes:
-
-| Code | Description |
-| ---- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| 204 | Request successful |
-| 400 | Returned if some of the properties in the request body are invalid or missing. |
-| 401  | Secret incorrect or missing in HTTP header `Authorization`. See [Authentication](#authentication) for how to authenticate.                                                                          |
-| 403 | The Event Based Process feature is not enabled. |
-| 429 | The maximum number of requests that can be serviced at any time has been reached. The response will include a `Retry-After` HTTP header specifying the recommended number of seconds before the request should be retried. See [Configuration](self-managed/optimize-deployment/configuration/event-based-processes.md#event-ingestion-rest-api-configuration) for information on how to configure this limit. |
-| 500  | An error occurred while processing the ingested events; check the Optimize log for details.                                                                                                          |
-
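-When this limit is hit (HTTP 429), clients should honor the `Retry-After` header before retrying. The following is a minimal Python retry-loop sketch; the endpoint URL and token are assumptions, as above:
-
-```python
-import time
-
-import requests
-
-
-def ingest_with_retry(url, token, events, max_attempts=5):
-    """POST an event batch, backing off as advised by Retry-After on HTTP 429."""
-    for _ in range(max_attempts):
-        response = requests.post(
-            url, json=events, headers={"Authorization": f"Bearer {token}"}
-        )
-        if response.status_code != 429:
-            return response
-        # Fall back to one second if the header is missing.
-        time.sleep(int(response.headers.get("Retry-After", "1")))
-    raise RuntimeError("ingestion still rate-limited after retries")
-```
-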
-## Example
-
-### Ingest cloud events
-
-#### Request
-
-POST `/api/ingestion/event/batch`
-
-##### Request header
-
-`Authorization: Bearer mySecret`
-
-##### Request body
-
-```json
-[
- {
- "specversion": "1.0",
- "id": "1edc4160-74e5-4ffc-af59-2d281cf5aca341",
- "source": "order-service",
- "type": "orderCreated",
- "time": "2020-01-01T10:00:00.000Z",
- "traceid": "id1",
- "group": "shop",
- "data": {
- "numberField": 1,
- "stringField": "example"
- }
- },
- {
- "specversion": "1.0",
- "id": "1edc4160-74e5-4ffc-af59-2d281cf5aca342",
- "source": "order-service",
- "type": "orderValidated",
- "time": "2020-01-01T10:00:10.000Z",
- "traceid": "id1",
- "group": "shop",
- "data": {
- "numberField": 1,
- "stringField": "example"
- }
- },
- {
- "specversion": "1.0",
- "id": "1edc4160-74e5-4ffc-af59-2d281cf5aca343",
- "source": "shipping-service",
- "type": "packageShipped",
- "traceid": "id1",
- "group": "shop",
- "time": "2020-01-01T10:00:20.000Z"
- }
-]
-```
-
-#### Response
-
-Status 204.
-
-### Reingest cloud events
-
-The API allows you to update any previously ingested cloud event by ingesting an event using the same event `id`.
-
-The following request would update the first cloud event that got ingested in the [ingest cloud events example](#ingest-cloud-events). Note that on an update, the cloud event needs to be provided as a whole; it's not possible to perform partial updates through this API.
-
-In this example, an additional field `newField` is added to the data block of the cloud event with the ID `1edc4160-74e5-4ffc-af59-2d281cf5aca341`.
-
-#### Request
-
-POST `/api/ingestion/event/batch`
-
-##### Request header
-
-`Authorization: Bearer mySecret`
-
-##### Request body
-
-```json
- [
- {
- "specversion": "1.0",
- "id": "1edc4160-74e5-4ffc-af59-2d281cf5aca341",
- "source": "order-service",
- "type": "orderCreated",
- "time": "2020-01-01T10:00:00.000Z",
- "traceid": "id1",
- "group": "shop",
- "data": {
- "numberField": 1,
- "stringField": "example",
- "newField": "allNew"
- }
- }
- ]
-```
-
-#### Response
-
-Status 204.
diff --git a/optimize/components/userguide/additional-features/event-based-processes.md b/optimize/components/userguide/additional-features/event-based-processes.md
deleted file mode 100644
index f8198a5fd61..00000000000
--- a/optimize/components/userguide/additional-features/event-based-processes.md
+++ /dev/null
@@ -1,248 +0,0 @@
----
-id: event-based-processes
-title: Event-based processes
-description: Create and analyze reports backed by ingested events.
----
-
-Camunda 7 only
-
-## Overview
-
-Event-based processes are BPMN processes that are created inside Optimize based on events. These events can be loaded from an external system or created from internal BPMN processes. They are particularly useful for creating reports and dashboards based on a process that is not yet fully automated with Camunda 7.
-
-Once the event-based process feature is correctly configured, you will see a new link in the navigation to go to the event-based process list. From there, you can see, create, or edit your event-based processes.
-
-:::note
-When Camunda activity events are used in event-based processes, Camunda admin authorizations are not inherited for the event-based process. The authorization to use an event-based process is solely managed via the access management of event-based processes when [publishing an event-based process](#publishing-an-event-based-process) or at any time via the [edit access option](#event-based-process-list---edit-access) in the event-based process list.
-
-Visit our [technical guide](/self-managed/optimize-deployment/configuration/setup-event-based-processes.md) on authorization management and event-based processes for the reasoning behind this behavior.
-:::
-
-## Set up
-
-You need to set up the event-based process feature before you can use it. See the [technical guide](/self-managed/optimize-deployment/configuration/setup-event-based-processes.md) for more information.
-
-## Event-based process list
-
-All currently available event-based processes are listed under the main navigation item **Event-based processes**. From there, it is possible to see their state, which can be one of the following:
-
-- `Unmapped` - The process model is created, but no single event is mapped to a flow node.
-- `Mapped` - The process model contains at least one mapping of an event to a flow node.
-- `Published` - The event-based process is published and can be used in reports by users that are authorized to access it.
-- `Unpublished Changes` - The process model contains changes that are not reflected in the currently published state of the event-based process; it needs to be republished manually.
-
-![Process List](./img/processList.png)
-
-### Event-based process list - edit access
-
-To manage authorizations for a published event-based process, the **Edit Access** option in the dropdown menu of each event-based process list entry allows you to authorize users or groups to create reports for these processes in Optimize.
-
-![Process List - Edit Access](./img/editAccess.png)
-
-## Creating an event-based process
-
-There are three ways to create an event-based process:
-
-### Auto-generate
-
-:::note
-The process auto-generation feature is currently in an early beta stage.
-:::
-
-The first way to create an event-based process is to allow Optimize to auto-generate the model based on provided configuration. Using this option, you can specify which event sources should be used for the process, including both Camunda and external events.
-
-Note that for external events, it is currently only possible to select all the external events.
-
-![Autogenerate a process](./img/auto-generation.png)
-
-Optimize will attempt to generate an overall model based on these sources, determining the order of events in the model by sampling stored instances. After auto-generation is complete, you will see the process in [view mode](#view-mode), with the model's nodes fully mapped to their corresponding events.
-
-To make changes to the autogenerated process, modify either the model itself, the process name, or the process mappings in the same way as any other event-based process by entering [edit mode](#edit-mode).
-
-### Model a process
-
-The second way to create an event-based process is to model it manually using the integrated BPMN modeler.
-
-### Upload BPMN model
-
-Finally, you can create an event-based process by uploading a `.bpmn` file directly into Optimize.
-
-## Edit mode
-
-![Edit Mode](./img/editMode.png)
-
-The edit mode allows you to build and map your event-based process. Using this mode, you can perform all kinds of operations, such as:
-
-- Rename the process.
-- Model the process using the integrated BPMN modeler.
-- Map your diagram nodes to an event from the event table.
-- Edit event sources for the events to display in the event table.
-- Save the current state with your applied changes.
-- Cancel changes you already applied to the process.
-
-### Modeling
-
-Modeling can be done using the integrated modeler shown in the screenshot above. To maximize the modeling area, collapse the table during the modeling by clicking on the **Collapse** button in the top right of the table.
-
-### Event sources
-
-To map BPMN nodes to events, add event sources to the process first by clicking the **Add Event Sources** button available at the top of the table.
-
-In this view, it is possible to add two types of events to the events list:
-
-#### External events
-
-Events that were ingested into Optimize from an external system using the event ingestion API that Optimize provides.
-
-Defining the `group` property when ingesting the events will allow selecting events that belong to a group. If the group property is not defined or left empty during ingestion of an event, Optimize will consider it `ungrouped`.
-
-![Selecting External Events](./img/externalEvents.png)
-
-#### Camunda events
-
-![Add Source Modal](./img/sourceModal.png)
-
-These are events generated from an existing Camunda BPMN process. Only processes for which Optimize has imported at least one event will be visible for selection. This means the process has to have at least one instance and Optimize has to have been configured to import data from that process.
-
-See the [technical guide](/self-managed/optimize-deployment/configuration/setup-event-based-processes.md#use-camunda-activity-event-sources-for-event-based-processes) for more information on how this is configured.
-
-To add such events, provide the following details:
-
-- The target process definition that you would like to generate the events from
-
-- The trace ID location: A trace ID uniquely identifies a process instance across system boundaries. One example would be an invoice number for an invoice handling process. For a Camunda process, it is possible to select a trace ID that exists either in a variable or in the process business key.
-
-- Which events to display in the table:
-
-Adding events for every flow node might not be necessary for the event-based process. Therefore, we provide the ability to only add the events that are necessary. There are three options available:
-
-- Process start and end: This will add only two events in the table, one event is triggered when the process starts and one when it ends.
-
-- Start and end flow node events: The number of events added to the table will depend on how many start and end events are in the process. For example, if there is one start event and two end events, three events will be added.
-
-- All flow node events: This option will add events for every flow node in the process.
-
-Once this information is defined and the sources are added, the events will appear in the table as shown below.
-
-![Events Table](./img/eventsTable.png)
-
-#### Events table
-
-Each event in the table will have the following properties:
-
-- Mapped as (start/end): Defines whether the event indicates the start or the end of the BPMN node.
-
-- Event name
-
-- Group
-
- - For external events, this corresponds to the group of the ingested event.
- - For Camunda process events, this corresponds to the name of the process definition.
-
-- Source: External system or Camunda process event.
-
-- Count: How many times this event was triggered. See [additional notes](#event-counts) for more information.
-
-To assist during event mapping, the events table offers suggestions of potential events to be mapped based on the selected node. This is indicated by a blue strip near the suggested event. The event suggestion only works when adding all external events as a source with no Camunda events.
-
-### Mapping events
-
-Mapping is the process of linking BPMN flow nodes to events.
-
-To start mapping, take the following steps:
-
-1. Select the node that you would like to map from the diagram.
-2. To link the selected node to an event, enable the checkbox of that event from the table. Afterwards, a checkmark sign will be shown on top of the node to indicate that the event has been mapped successfully.
-
-:::note
-Not all BPMN nodes can be mapped. Only events and activities can be mapped to events.
-:::
-
-Once all the necessary nodes are mapped, you can save your diagram to go to the view mode.
-
-## View mode
-
-The view mode gives you a quick overview of which flow nodes have been mapped to events and allows you to enter the edit mode, publish, or delete the current event-based process.
-
-![View Mode of event-based processes](./img/processView.png)
-
-### Publishing an event-based process
-
-Once you have built and mapped your event-based process, you need to publish it to make it available for reports and dashboards. To publish your process, click the **Publish** button in the view mode of your event-based process.
-
-![Publish modal](./img/publishModal.png)
-
-In the shown modal, you can see who will have access to use the event-based process. By default, the process is only available to the user who created it. If you would like to allow other users to use the process in reports, click **Change...** to open the permissions options.
-
-![permissions modal](./img/usersModal.png)
-
-In this modal, it is possible to search for users and groups and add them to the list of users who have access to the process. Once that is done, you can save the changes and publish your process.
-
-Publishing the process takes some time, as all events must be correlated to generate your event-based process. Once publishing is complete, a notification will appear indicating this.
-
-Now the process is ready and can be used like any other process to create reports and dashboards.
-
-## External ingested events
-
-After ingesting events into Optimize from an external system, each individual event will appear in the external events table.
-
-![External Events](./img/external-events.png)
-
-By default, the table shows all ingested events sorted by the timestamp from newest to oldest. However, it is also possible to search for events or sort the results by event name, source, group, or trace ID.
-
-### Deleting ingested events
-
-One or multiple events can be selected and deleted as shown in the figure below:
-
-![Deleting External Events](./img/deleting-events.png)
-
-:::note
-When deleting an event mapped to a published event-based process, only the corresponding flow node instance will be removed from the process and no change will happen on the process instance level until the process is republished.
-
-For example, if you delete an ingested event that was mapped to the only end event within a process, the corresponding process instance will still be considered complete until the process is republished.
-:::
-
-## Additional notes
-
-### Event-based process auto-generation
-
-Event-based process auto-generation attempts to determine the order of events based on a sample of stored instances. Due to the nature of sampling, it is possible that the generated model may not always appear as you might expect.
-
-In some cases, it is possible that some sequence flows may be hidden by overlapping elements on the generated model.
-
-If both an event source and an embedded subprocess contained within that source are included for auto-generation, they will appear in the auto-generated model as independent processes.
-
-In the case where external events are configured as an event source, it is possible that Optimize will not be able to determine a model containing all external events. In this scenario, Optimize will auto-generate a model containing only the external events whose order it could determine.
-
-In any of the above scenarios, you can correct the model to suit your needs using the editor: like any other event-based process, an auto-generated model can be edited after auto-generation is complete.
-
-### Published event-based processes
-
-In some scenarios, reports created using event-based processes might not show all the information expected.
-
-To prevent this, avoid including the following elements when modeling your event-based processes:
-
-- Inclusive gateways: These may be modeled in an event-based process diagram. However, visual data flow will be interrupted on reports such as heatmaps.
-
-![Inclusive Gateway](./img/inclusive_gateway.png)
-
-- Complex gateways: These may be modeled in an event-based process diagram. However, visual data flow will be interrupted on reports such as heatmaps.
-
-![Complex Gateway](./img/complex_gateway.png)
-
-- Mixed gateway directions: Mixed gateways are gateways which have no clear direction, instead being a combination of opening and closing gateways. These may be modeled in an event-based process diagram. However, visual data flow will be interrupted on reports such as heatmaps.
-
-![Mixed Direction Gateway](./img/mixed_direction_gateway.png)
-
-- Chained gateways: A chained gateway is one that occurs as part of a sequence of consecutive gateways. These may be modeled in an event-based process diagram. However, visual data flow will be interrupted on reports such as heatmaps.
-
-![Chained Gateway](./img/chained_gateway.png)
-
-### Event counts
-
-Event counts in the table may not match the values you expected. There are three possible explanations for this:
-
-- If you have enabled history cleanup, the counts will still include events from process instances that have since been cleaned up.
-- For events from Camunda processes, the count value represents the number of times that event has occurred across all versions and tenants of that process, regardless of how the event source is configured.
-- The counts for external events will still include ingested events that have since been deleted using the [event inspection feature](#deleting-ingested-events).
diff --git a/optimize/components/userguide/combined-process-reports.md b/optimize/components/userguide/combined-process-reports.md
deleted file mode 100644
index bcd202f4875..00000000000
--- a/optimize/components/userguide/combined-process-reports.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-id: combined-process-reports
-title: Combined process reports
-description: Occasionally, it is necessary to compare multiple reports or visualize them together in one diagram.
----
-
-Camunda 7 only
-
-Occasionally, it is necessary to compare multiple reports or visualize them together in one diagram. This can be achieved by creating a special type of report called a combined process report. To create a new combined process report, visit the **Collections** page and click **Create New > New Report > Combined Process Report**.
-
-![Creating a Combined process report](./img/combined-report-create.png)
-
-Afterward, you'll be directed to the combined process report builder. Here, navigate the selection panel on the right to choose multiple reports for combination.
-
-:::note
-If the combined process report resides within a collection, only reports in the same collection can be combined. If the combined process report is not part of a collection, it can only combine reports that are also not in a collection.
-:::
-
-A preview of the selected reports will appear in the panel on the left:
-
-![combined process report builder](./img/combined-report.png)
-
-For instance, combining two reports with a table visualization yields the following view:
-
-![Combining two reports with a table visualization](./img/table-report.png)
-
-And combining two reports with line chart visualization results in the following view:
-
-![Combining two reports with line chart visualization](./img/area-chart-report.png)
-
-You can modify the color of chart reports by clicking on the color box near the report's name. Additionally, you can rearrange items in the list of selected reports to change their order in the report view.
-
-:::note
-Not all reports can be combined due to differences in their configurations, such as varying visualizations, which may make them incompatible. When selecting a report, only other reports that are combinable with the selected one will appear.
-:::
-
-Only reports that match the following criteria can be combined:
-
-- Same group by
-- Same visualization. Only the following visualizations can be combined and will show up in the combined selection list:
- - Bar chart
- - Line chart
- - Table
- - Number
-- Same view. One exception: user task duration reports (assigned, unassigned, and total) can be combined with each other, as well as with flow node duration reports.
-- The process definition can be different.
-- Furthermore, it is possible to combine reports grouped by start date with reports grouped by end date under the condition that the date interval is the same.
-
-The following limitations apply to combining reports:
-
-- It is not possible to combine decision reports.
-- Distributed reports cannot be combined.
-- Multi-measure reports, including reports containing multiple aggregations or multiple user task duration times, cannot be combined.
-
-You can update the name of the report, save it, and add it to a dashboard, similar to a normal report. The combined process reports will appear in the reports list alongside normal reports.
-
-### Configure combined process reports
-
-You can configure the combined process report using the cogwheel button available on the top right side of the screen.
-
-For example, in all chart reports, you can modify what is shown in the tooltips, change the axis names, and set a goal line, as illustrated in the figure below.
-
-![Configurations available for combined process reports](./img/combined-config.png)
diff --git a/optimize/components/userguide/decision-analysis/decision-analysis-overview.md b/optimize/components/userguide/decision-analysis/decision-analysis-overview.md
deleted file mode 100644
index f34bf50f4ca..00000000000
--- a/optimize/components/userguide/decision-analysis/decision-analysis-overview.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-id: decision-analysis-overview
-title: Overview
-description: Explore, discover and get insights into your decisions that otherwise would be hidden.
----
-
-Camunda 7 only
-
-Decision reports provide you with the ability to view your data from different angles and thus capture all aspects that influence your decisions, show new trends, or depict your current business state.
-
-You can also define filters which help you narrow down your view to what you are interested in.
diff --git a/optimize/components/userguide/decision-analysis/decision-filter.md b/optimize/components/userguide/decision-analysis/decision-filter.md
deleted file mode 100644
index 03714d6be9c..00000000000
--- a/optimize/components/userguide/decision-analysis/decision-filter.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-id: decision-filter
-title: Filters
-description: Narrow down your view on the decision by creating reports based on a subset of all decision evaluations.
----
-
-Camunda 7 only
-
-You can enhance your decision reports in Camunda Optimize by applying filters, similar to [process analysis filters](../process-analysis/filters.md).
-
-To refine your decision reports, utilize filters for the [evaluation date](#evaluation-date-filter) or [input and output variables](../process-analysis/variable-filters.md):
-
-![Decision Report with open filter list in Camunda Optimize](./img/report-with-filterlist-open.png)
-
-## Evaluation date filter
-
-Applying an evaluation date filter narrows down the report to consider only decision evaluations within the specified date range. Remember, only one evaluation date filter can be defined per report.
-
-You can set a fixed or relative filter, similar to [process instance date filters](../process-analysis/metadata-filters.md#date-filters). Check the process filter guide for more details.
-
-Alternatively, use your mouse to create an evaluation date filter by selecting the desired area if your report is presented as a bar or line chart.
-
-![Zooming into a section of the chart](./img/zoom-in.png)
-
-## Variable filter
-
-Utilize the input or output variable filter to focus on decisions with specific variable values. For example, you can analyze decisions where the output variable **Classification** is **budget**. Create an output variable filter, choose the **Classification** variable, and check the **budget** option.
-
-For various ways to specify value ranges based on variable types, explore the [variable filter section](../process-analysis/variable-filters.md) in the filter guide.
diff --git a/optimize/components/userguide/decision-analysis/decision-report.md b/optimize/components/userguide/decision-analysis/decision-report.md
deleted file mode 100644
index cd0ed8d780e..00000000000
--- a/optimize/components/userguide/decision-analysis/decision-report.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-id: decision-report
-title: Single report
-description: Explore, discover, and get insights into your decision evaluations.
----
-
-Camunda 7 only
-
-Decision reports provide insights into decision definitions, distinct from process reports. To create one, click **Create New** and select **Decision Report** from the dropdown on the **Collections** page.
-
-![Create a new Decision Report from the Report list page](./img/dmn_report_create.png)
-
-There are a number of different reports you can create based on decisions:
-
-## Raw data
-
-Create a raw data report to view a table listing all decision data. This is useful for detailed information on specific evaluations or exploring a decision definition with limited evaluations.
-
-- Reorder the columns and click on any column header to sort the table by that column. This can come in handy if you found interesting insights in certain decision evaluations and need detailed information about those evaluations, or if you are exploring a decision definition with a limited number of evaluations.
-- Using the configuration dialog, define which columns to show and whether to include the evaluation count number in the report. These settings are only available in the edit mode of the report.
-
-To create a raw data report, select **Raw Data** from the view dropdown. The other fields are filled automatically.
-
-![Decision Raw Data Table in Camunda Optimize](./img/dmn_raw_data_report.png)
-
-## Evaluation count
-
-Create reports showing how often the decision was evaluated. Depending on the group by selection, this could be either the total number of evaluations, a chart displaying how this number of evaluations developed over time, or how they were distributed across variables or rules. As always, you can define [filters](../process-analysis/filters.md) to specify which decision evaluations to include in the report. Grouping options include:
-
-### Group by: None
-
-- Displays the total evaluations for the decision definition and version.
-- Configure precision and set a goal for a progress bar.
-
-![Progress Bar visualization](./img/dmn_progress_bar.png)
-
-### Group by: Rules
-
-- Shows the decision table with additional columns indicating rule match frequency.
-- Customize display options in the configuration dialog.
-
-![Decision Table with evaluation count](./img/dmn_decision_table.png)
-
-### Group by: Evaluation date
-
-- Visualize evaluations over time as a table or chart.
-- Use filters to create powerful reports, e.g., time periods for specific output variables.
-
-![Line Chart showing decision evaluations by date](./img/dmn_date_chart.png)
-
-### Group by: Input or output variable
-
-- Group results by a chosen variable from the decision definition.
-- Visualize as a table or chart.
diff --git a/optimize/components/userguide/decision-analysis/img/dmn_date_chart.png b/optimize/components/userguide/decision-analysis/img/dmn_date_chart.png
deleted file mode 100644
index 9900a66cf7e..00000000000
Binary files a/optimize/components/userguide/decision-analysis/img/dmn_date_chart.png and /dev/null differ
diff --git a/optimize/components/userguide/decision-analysis/img/dmn_decision_table.png b/optimize/components/userguide/decision-analysis/img/dmn_decision_table.png
deleted file mode 100644
index 8f35051ed95..00000000000
Binary files a/optimize/components/userguide/decision-analysis/img/dmn_decision_table.png and /dev/null differ
diff --git a/optimize/components/userguide/decision-analysis/img/dmn_pie_chart.png b/optimize/components/userguide/decision-analysis/img/dmn_pie_chart.png
deleted file mode 100644
index 89fec8f7809..00000000000
Binary files a/optimize/components/userguide/decision-analysis/img/dmn_pie_chart.png and /dev/null differ
diff --git a/optimize/components/userguide/decision-analysis/img/dmn_progress_bar.png b/optimize/components/userguide/decision-analysis/img/dmn_progress_bar.png
deleted file mode 100644
index adba12f322f..00000000000
Binary files a/optimize/components/userguide/decision-analysis/img/dmn_progress_bar.png and /dev/null differ
diff --git a/optimize/components/userguide/decision-analysis/img/dmn_raw_data_report.png b/optimize/components/userguide/decision-analysis/img/dmn_raw_data_report.png
deleted file mode 100644
index aed0dfa35b1..00000000000
Binary files a/optimize/components/userguide/decision-analysis/img/dmn_raw_data_report.png and /dev/null differ
diff --git a/optimize/components/userguide/decision-analysis/img/dmn_report_create.png b/optimize/components/userguide/decision-analysis/img/dmn_report_create.png
deleted file mode 100644
index 097b18c956e..00000000000
Binary files a/optimize/components/userguide/decision-analysis/img/dmn_report_create.png and /dev/null differ
diff --git a/optimize/components/userguide/decision-analysis/img/report-with-filterlist-open.png b/optimize/components/userguide/decision-analysis/img/report-with-filterlist-open.png
deleted file mode 100644
index 124d9a3b51c..00000000000
Binary files a/optimize/components/userguide/decision-analysis/img/report-with-filterlist-open.png and /dev/null differ
diff --git a/optimize/components/userguide/decision-analysis/img/zoom-in.png b/optimize/components/userguide/decision-analysis/img/zoom-in.png
deleted file mode 100644
index a67069eae51..00000000000
Binary files a/optimize/components/userguide/decision-analysis/img/zoom-in.png and /dev/null differ
diff --git a/optimize/self-managed/optimize-deployment/advanced-features/import-guide.md b/optimize/self-managed/optimize-deployment/advanced-features/import-guide.md
deleted file mode 100644
index 451ba66436d..00000000000
--- a/optimize/self-managed/optimize-deployment/advanced-features/import-guide.md
+++ /dev/null
@@ -1,211 +0,0 @@
----
-id: import-guide
-title: "Data import"
-description: "Shows how the import generally works and an example of import performance."
----
-
-Camunda 7 only
-
-This document describes how the import of engine data into Optimize works.
-
-## Architecture overview
-
-In general, the import assumes the following setup:
-
-- A Camunda engine from which Optimize imports the data.
-- The Optimize backend, where the data is transformed into an appropriate format for efficient data analysis.
-- [Elasticsearch (ES)](https://www.elastic.co/guide/index.html) or [OpenSearch (OS)](https://opensearch.org/), which serves as the database that Optimize uses to persist all of its formatted data.
-
-The following depicts the setup and how the components communicate with each other:
-
-![Optimize Import Structure](img/Optimize-Structure.png)
-
-Optimize queries the engine data using a dedicated Optimize REST-API within the engine, transforms the data, and stores it in its own database such that it can be quickly and easily queried by Optimize when evaluating reports or performing analyses. The reason for having a dedicated REST endpoint for Optimize is performance: the default REST-API adds a lot of complexity to retrieve the data from the engine database, which can result in low performance for large data sets.
-
-Note the following limitations regarding the data in Optimize's database:
-
-- The data is only a near real-time representation of the engine database. This means the database may not contain the data of the most recent time frame, e.g. the last two minutes, but all the previous data should be synchronized.
-- Optimize only imports the data it needs for its analysis. The rest is omitted and won't be available for further investigation. Currently, Optimize imports:
- - The history of the activity instances
- - The history of the process instances
- - The history of variables with the limitation that Optimize only imports primitive types and keeps only the latest version of the variable
- - The history of user tasks belonging to process instances
- - The history of incidents with the exception of incidents that occurred due to the history cleanup job or a timer start event job running out of retries
- - Process definitions
- - Process definition XMLs
- - Decision definitions
- - Definition deployment information
- - Historic decision instances with input and output
- - Tenants
- - The historic identity link logs
-
-Refer to the [Import Procedure](#import-procedure) section for a more detailed description of how Optimize imports engine data.
-
-## Import performance overview
-
-This section gives an overview of how fast Optimize imports certain data sets. The purpose of these estimates is to help you evaluate whether Optimize's import performance meets your demands.
-
-It is very likely that these metrics change for different data sets because the speed of the import depends on how the data is distributed.
-
-The import is also affected by how the involved components are set up. For instance, if you deploy the Camunda engine on a different machine than Optimize and Elasticsearch/OpenSearch to provide both applications with more computation resources, the process is likely to speed up. If the Camunda engine and Optimize are physically far away from each other, the network latency might slow down the import.
-
-### Setup
-
-The following components were used for these import tests:
-
-| Component | Version |
-| ------------------ | --------------- |
-| Camunda 7 | 7.10.3 |
-| Camunda 7 Database | PostgreSQL 11.1 |
-| Elasticsearch | 6.5.4 |
-| Optimize | 2.4.0 |
-
-The Optimize configuration with the default settings was used, as described in detail in the [configuration overview](./../configuration/system-configuration.md).
-
-The following hardware specifications were used for each dedicated host:
-
-- Elasticsearch:
- - Processor: 8 vCPUs\*
- - Working Memory: 8 GB
- - Storage: local 120GB SSD
-- Camunda 7:
- - Processor: 4 vCPUs\*
- - Working Memory: 4 GB
-- Camunda 7 Database (PostgreSQL):
- - Processor: 8 vCPUs\*
- - Working Memory: 2 GB
- - Storage: local 480GB SSD
-- Optimize:
- - Processor: 4 vCPUs\*
- - Working Memory: 8 GB
-
-\*one vCPU equals one single hardware hyper-thread on an Intel Xeon E5 v2 CPU (Ivy Bridge) with a base frequency of 2.5 GHz.
-
-The time was measured from the start of Optimize until the entire data import to Optimize was finished.
-
-### Large size data set
-
-This data set contains the following instance counts:
-
-| Number of Process Definitions | Number of Activity Instances | Number of Process Instances | Number of Variable Instances | Number of Decision Definitions | Number of Decision Instances |
-| ----------------------------- | ---------------------------- | --------------------------- | ---------------------------- | ------------------------------ | ---------------------------- |
-| 21 | 123 162 903 | 10 000 000 | 119 849 175 | 4 | 2 500 006 |
-
-Here, you can see how the data is distributed over the different process definitions:
-
-![Data Distribution](img/Import-performance-diagramms-logistic_large.png)
-
-Results:
-
-- **Duration of importing the whole data set:** ~120 minutes
-- **Speed of the import:** ~1400 process instances per second during the import process
-
-### Medium size data set
-
-This data set contains the following instance counts:
-
-| Number of Process Definitions | Number of Activity Instances | Number of Process Instances | Number of Variable Instances |
-| ----------------------------- | ---------------------------- | --------------------------- | ---------------------------- |
-| 20 | 21 932 786 | 2 000 000 | 6 913 889 |
-
-Here you can see how the data is distributed over the different process definitions:
-
-![Data Distribution](img/Import-performance-diagramms-logistic_medium.png)
-
-Results:
-
-- **Duration of importing the whole data set:** ~10 minutes
-- **Speed of the import:** ~1500 process instances per second during the import process
-
-## Import procedure
-
-:::note Heads up!
-Understanding the details of the import procedure is not necessary to make Optimize work. In addition, there is no guarantee that the following description is either complete or up-to-date.
-:::
-
-The following image illustrates the components involved in the import process as well as basic interactions between them:
-
-![Optimize Procedure](img/Optimize-Import-Process.png)
-
-During execution, the following steps are performed:
-
-1. [Start an import round](#start-an-import-round).
-2. [Prepare the import](#prepare-the-import).
- 1. Poll a new page
- 2. Map entities and add an import job
-3. [Execute the import](#execute-the-import).
- 1. Poll a job
- 2. Persist the new entities to the database
-
-### Start an import round
-
-The import process is automatically scheduled in rounds by the `Import Scheduler` after startup of Optimize. In each import round, multiple `Import Services` are scheduled to run, each of which fetches data of one specific entity type. For example, one service is responsible for importing the historic activity instances and another one for the process definitions.
-
-For each service, Optimize checks whether new data is available. Once all entities for one import service have been imported, the service starts to back off: before it can be scheduled again, it stays idle for a certain period of time, controlled by a backoff interval and a backoff counter. After the idle time has passed, the service can make another attempt to import new data. For each round in which no new data could be imported, the counter is incremented, so the backoff counter acts as a multiplier for the backoff time and increases the idle time between two import rounds. This mechanism is configurable using the following properties:
-
-```yaml
-handler:
- backoff:
- # Interval which is used for the backoff time calculation.
- initial: 1000
- # Once all pages are consumed, the import service component will
- # start scheduling fetching tasks in increasing periods of time,
- # controlled by 'backoff' counter.
- # This property sets maximal backoff interval in seconds
- max: 30
-```
-
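-The exact calculation is internal to Optimize, but the behavior described above corresponds to a capped, counter-multiplied backoff along the lines of the following Python sketch (an illustration, not the actual implementation):
-
-```python
-def next_backoff_ms(initial_ms: int, max_s: int, counter: int) -> int:
-    """Idle time before the next import attempt: the backoff counter acts as
-    a multiplier for the initial interval, capped at the configured maximum."""
-    return min(initial_ms * counter, max_s * 1000)
-
-
-# With initial=1000 and max=30, consecutive rounds without new data yield
-# idle times of 1s, 2s, 3s, ... until they are capped at 30s.
-```
-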
-If you would like data imported into Optimize to be updated more rapidly, you have to reduce these values. However, this will cause additional strain on the engine and might impair its performance if the values are set too low.
-
-More information about the import configuration can be found in the [configuration section](./../configuration/system-configuration-platform-7.md).
-
-### Prepare the import
-
-The preparation of the import is executed by the `ImportService`. Every `ImportService` implementation performs several steps:
-
-#### Poll a new page
-
-The whole polling/preparation workflow of the engine data is done in pages, meaning only a limited number of entities is fetched on each execution. For example, say the engine has 1000 historic activity instances and the page size is 100. As a consequence, the engine would be polled 10 times. This prevents running out of memory and overloading the network.
-
-Polling a new page involves not only the `ImportService`, but also the `IndexHandler` and the `EntityFetcher`. The following image depicts how those components are connected with each other:
-
-![ImportService Polling Procedure](img/Import-Service-Polling.png)
-
-First, the `ImportScheduler` retrieves the newest index, which identifies the last imported page. This index is passed to the `ImportService`, instructing it to import a new page of data. With the index and the page size, the fetching of the engine data is delegated to the `EntityFetcher`.
-
-#### Map entities and add an import job
-
-All fetched entities are mapped to a representation that allows Optimize to query the data very quickly. Subsequently, an import job is created and added to the queue to persist the data in the database.
-
-### Execute the import
-
-Full aggregation of the data is performed by a dedicated `ImportJobExecutor` for each entity type, which waits for `ImportJob` instances to be added to the execution queue. As soon as a job is in the queue, the executor:
-
-- Polls the job with the new Optimize entities
-- Persists the new entities to the database
-
-The data from the engine and Optimize do not have a one-to-one relationship, i.e., one entity type in Optimize may consist of data aggregated from different data types of the engine. For example, the historic process instance is first mapped to an Optimize `ProcessInstance`. However, for the heatmap analysis it is also necessary for `ProcessInstance` to contain all activities that were executed in the process instance.
-
-Therefore, the Optimize `ProcessInstance` is an aggregation of the engine's historic process instance and other related data: historic activity instance data, user task data, and variable data are all nested documents ([ES](https://www.elastic.co/guide/en/elasticsearch/reference/current/nested.html) / [OS](https://opensearch.org/docs/latest/field-types/supported-field-types/nested/)) within Optimize's `ProcessInstance` representation.
-
-:::note
-Optimize uses nested documents ([ES](https://www.elastic.co/guide/en/elasticsearch/reference/current/nested.html) / [OS](https://opensearch.org/docs/latest/field-types/supported-field-types/nested/)); the above-mentioned data is an example of documents that are nested within Optimize's `ProcessInstance` index.
-
-Elasticsearch and OpenSearch apply restrictions regarding how many objects can be nested within one document. If your data includes too many nested documents, you may experience import failures. To avoid this, you can temporarily increase the nested object limit in Optimize's [index configuration](./../configuration/system-configuration.md#index-settings). Note that this might cause memory errors.
-:::
-
-Import executions per engine entity are independent of one another. Each follows a [producer-consumer pattern](https://dzone.com/articles/producer-consumer-pattern), where the type-specific `ImportService` is the single producer and a dedicated single `ImportJobExecutor` is the consumer of its import jobs, decoupled by a queue. Both are therefore executed in different threads. To adjust the processing speed of the executor, the queue size and the number of threads that process the import jobs can be configured:
-
-:::note
-Although the parameters below include `elasticsearch` in their name, they apply to both Elasticsearch and OpenSearch installations. For backward compatibility reasons, the parameters have not been renamed.
-:::
-
-```yaml
-import:
- # Number of threads being used to process the import jobs per data type that are writing
- # data to the database.
- elasticsearchJobExecutorThreadCount: 1
- # Adjust the queue size of the import jobs per data type that store data to the database.
- # A too large value might cause memory problems.
- elasticsearchJobExecutorQueueSize: 5
-```
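-
-To make the interplay concrete, the following is a schematic Python sketch of the producer-consumer decoupling described above; the fetch and persist functions are illustrative stand-ins, not Optimize's actual components:
-
-```python
-import queue
-import threading
-
-
-def fetch_pages():
-    """Stand-in for the type-specific ImportService fetching engine pages."""
-    yield from (["entity-a", "entity-b"], ["entity-c"])
-
-
-def persist(job):
-    """Stand-in for the executor persisting mapped entities to the database."""
-    print(f"persisting {job}")
-
-
-job_queue = queue.Queue(maxsize=5)  # cf. elasticsearchJobExecutorQueueSize
-
-
-def import_job_executor():
-    """Consumer: polls import jobs from the queue and persists the entities."""
-    while True:
-        persist(job_queue.get())
-        job_queue.task_done()
-
-
-# A dedicated consumer thread, decoupled from the producer by the queue.
-threading.Thread(target=import_job_executor, daemon=True).start()
-
-# Producer: map each fetched page to an import job and enqueue it.
-for page in fetch_pages():
-    job_queue.put(page)
-
-job_queue.join()  # wait until all enqueued jobs are persisted
-```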
diff --git a/optimize/self-managed/optimize-deployment/configuration/authorization-management.md b/optimize/self-managed/optimize-deployment/configuration/authorization-management.md
deleted file mode 100644
index d26c080356e..00000000000
--- a/optimize/self-managed/optimize-deployment/configuration/authorization-management.md
+++ /dev/null
@@ -1,53 +0,0 @@
----
-id: authorization-management
-title: "Authorization management"
-description: "Define which data users are authorized to see."
----
-
-Camunda 7 only
-
-User authorization management differs depending on whether the entities originate from adjacent systems, like data imported from connected Camunda 7 engines (such as process instances), or whether they are fully managed by Camunda Optimize, such as [event-based processes and instances](components/userguide/additional-features/event-based-processes.md) or [collections](components/userguide/collections-dashboards-reports.md). For entities originating from adjacent systems, authorizations are managed in Camunda 7 via Camunda Admin; for the latter, authorizations are managed in Camunda Optimize.
-
-## Camunda 7 data authorizations
-
-The authorization to process or decision data, as well as tenants and user data imported from any connected Camunda REST-API, is not managed in Optimize itself but needs to be configured in Camunda 7, which can be done on different levels and with different options.
-
-If you do not know how authorization in Camunda works, visit the [authorization service documentation](https://docs.camunda.org/manual/latest/user-guide/process-engine/authorization-service/). Defining these authorizations centrally in Camunda Admin has the advantage that you do not need to define them several times.
-
-### Process or decision definition related authorizations
-
-You can specify which user has access to certain process or decision definitions, including data related to that definition. By that we mean the user can only see, create, edit, and delete reports for definitions they are authorized to access.
-
-When defining an authorization to grant or deny access to certain definitions, the most important aspect is that you grant access on the resource types "process definition" and "decision definition". You can then relate to a specific definition by providing the definition key as resource ID, or use "\*" as resource ID if you want to grant access to all definitions. To grant access to a definition, you need to set either `ALL` or `READ_HISTORY` as permission. Both permission settings are treated equally in Optimize, so there is no difference between them.
-
-As an example, have a look at how adding authorizations for process definitions can be done in Camunda Admin:
-
-![Grant Optimize Access in Admin](img/Admin-GrantDefinitionAuthorizations.png)
-
-1. The first option grants global read access for the process definition `invoice`. With this setting all users are allowed to see, update, create, and delete reports related to the process definition `invoice` in Optimize.
-2. The second option defines an authorization for a single user. The user `Kermit` can now see, update, create, and delete reports related to the process definition `invoice` in Optimize.
-3. The third option provides access on group level. All users belonging to the group `optimize-users` can see, update, create, and delete reports related to the process definition `invoice` in Optimize.
-
-It is also possible to revoke the definition authorization for specific users or groups. For instance, you can grant access to all process definitions on a global scale, but exclude the `engineers` group from accessing reports related to the `invoice` process:
-
-![Revoke Optimize Access for group 'engineers' in Admin](img/Admin-RevokeDefinitionAuthorization.png)
-
-Decision definitions are managed in the same manner in the `Authorizations -> Decision Definition` section of the authorization management of Camunda 7.
-
-### User and group related authorizations
-
-To allow logged-in users to see other users and groups in Optimize (for example, to add them to a collection), they have to be granted **read** permissions for the resource type **User** as well as the resource type **Group**. Access can be granted or denied either for all users/groups or for specific user/group IDs only. This can be done in Camunda Admin as illustrated in the definitions authorization example above.
-
-## Optimize entity authorization
-
-There are entities that only exist in Camunda Optimize and authorizations to these are not managed via Camunda Admin but within Optimize.
-
-### Collections
-
-[Collections](components/userguide/collections-dashboards-reports.md) are the only way to share Camunda Optimize reports and dashboards with other users. Access to them is directly managed via the UI of collections; see the corresponding user guide section on [Collection - User Permissions](components/userguide/collections-dashboards-reports.md#user-permissions).
-
-### Event-based processes
-
-Camunda 7 only
-
-Although [event-based processes](components/userguide/additional-features/event-based-processes.md) may include data originating from adjacent systems like the Camunda Engine when using [Camunda Activity Event Sources](components/userguide/additional-features/event-based-processes.md#event-sources), they do not enforce any authorizations from Camunda Admin. The reason for that is that multiple sources can get combined in a single [event-based process](components/userguide/additional-features/event-based-processes.md) that may contain conflicting authorizations. It is thus required to authorize users or groups to [event-based processes](components/userguide/additional-features/event-based-processes.md) either directly when [publishing](components/userguide/additional-features/event-based-processes.md#publishing-an-event-based-process) them or later on via the [event-based process - Edit Access](components/userguide/additional-features/event-based-processes.md#event-based-process-list---edit-access) option.
diff --git a/optimize/self-managed/optimize-deployment/configuration/clustering.md b/optimize/self-managed/optimize-deployment/configuration/clustering.md
deleted file mode 100644
index 752e83f08fc..00000000000
--- a/optimize/self-managed/optimize-deployment/configuration/clustering.md
+++ /dev/null
@@ -1,102 +0,0 @@
----
-id: clustering
-title: "Clustering"
-description: "Read about how to run Optimize in a cluster."
----
-
-This document describes the setup of a Camunda Optimize cluster, which is mainly useful in a failover scenario but also provides a means of load balancing by distributing import and user load.
-
-## Configuration
-
-There are two configuration requirements to address in order to operate Camunda Optimize successfully in a cluster scenario.
-Both of these aspects are explained in detail in the following subsections.
-
-### 1. Import - define importing instance
-
-Camunda 7 only
-
-It is important to configure the cluster such that only one instance at a time is actively importing from a particular Camunda 7 engine.
-
-:::note Warning
-If more than one instance is importing data from one and the same Camunda 7 engine concurrently, inconsistencies can occur.
-:::
-
-The configuration property [`engines.${engineAlias}.importEnabled`](./system-configuration-platform-7.md) allows you to disable the import from a particular configured engine.
-
-Given a simple failover cluster consisting of two instances connected to one engine, the engine configurations in the `environment-config.yaml` would look like the following:
-
-**Instance 1 (import from engine `camunda-bpm` enabled):**
-
-```yaml
-...
-engines:
- 'camunda-bpm':
- name: default
- rest: 'http://localhost:8080/engine-rest'
- importEnabled: true
-
-historyCleanup:
- processDataCleanup:
- enabled: true
- decisionDataCleanup:
- enabled: true
-...
-```
-
-**Instance 2 (import from engine `camunda-bpm` disabled):**
-
-```yaml
-...
-engines:
- 'camunda-bpm':
- name: default
- rest: 'http://localhost:8080/engine-rest'
- importEnabled: false
-...
-```
-
-:::note
-The importing instance has the [history cleanup enabled](./system-configuration.md#history-cleanup-settings). It is strongly recommended that all non-importing Optimize instances in the cluster leave the history cleanup disabled to prevent conflicts when the [history cleanup](../history-cleanup/) is performed.
-:::
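-
-For the non-importing instances, the corresponding `environment-config.yaml` section would accordingly keep the cleanup off. A minimal sketch (the cleanup types are disabled by default, so this only makes the intent explicit):
-
-```yaml
-historyCleanup:
-  processDataCleanup:
-    enabled: false
-  decisionDataCleanup:
-    enabled: false
-```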
-
-### 1.1 Import - event-based process import
-
-Camunda 7 only
-
-In the context of event-based process import and clustering, there are two additional configuration properties to consider carefully.
-
-One is specific to each configured Camunda engine, [`engines.${engineAlias}.eventImportEnabled`](./system-configuration-platform-7.md), and controls whether data from this engine is also imported as event source data for [event-based processes](components/userguide/additional-features/event-based-processes.md). You need to enable this on the same cluster node for which the [`engines.${engineAlias}.importEnabled`](./system-configuration-platform-7.md) configuration flag is set to `true`.
-
-[`eventBasedProcess.eventImport.enabled`](./setup-event-based-processes.md) controls whether the particular cluster node processes events to create event-based process instances. This allows you to run a dedicated node that performs this operation, while other nodes might just feed in Camunda activity events, as sketched below.
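-
-A minimal sketch of the `environment-config.yaml` of the node dedicated to event-based process import, combining both flags (engine alias and values assumed):
-
-```yaml
-engines:
-  'camunda-bpm':
-    importEnabled: true      # this node imports engine data
-    eventImportEnabled: true # and also converts it to event source data
-
-eventBasedProcess:
-  eventImport:
-    enabled: true            # this node correlates events into process instances
-```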
-
-### 2. Distributed user sessions - configure shared secret token
-
-If more than one Camunda Optimize instance is accessible by users, e.g. in a failover scenario, a shared secret token needs to be configured for all instances.
-This enables distributed sessions among all instances, so users do not lose their session when being routed to another instance.
-
-The relevant configuration property is [`auth.token.secret`](./system-configuration.md#security) which needs to be configured in the `environment-configuration.yaml` of each Camunda Optimize instance that is part of the cluster.
-
-It is recommended to use a secret token with a length of at least 64 characters generated using a sufficiently good random number generator, for example the one provided by `/dev/urandom` on Linux systems.
-
-The following example command would generate a 64-character random string:
-
-```bash
-< /dev/urandom tr -dc A-Za-z0-9 | head -c64; echo
-```
-
-The corresponding `environment-config.yaml` entry would look the **same for all instances of the cluster**:
-
-```yaml
-auth:
- token:
-    secret: '<same-random-64-character-secret-on-all-instances>'
-```
-
-## Example setup
-
-The smallest cluster setup, consisting of one instance importing from a given `default` engine and another instance where the import is disabled, would look like the following:
-
-![Two Optimize instances](./img/Optimize-Clustering.png)
-
-The HTTP/S load balancer would route user requests to either of the two instances, while Optimize #1 would also take care of importing data from the engine into the shared
-Elasticsearch instance/cluster, and Optimize #2 would only access the engine to authenticate and authorize users.
diff --git a/optimize/self-managed/optimize-deployment/configuration/common-problems.md b/optimize/self-managed/optimize-deployment/configuration/common-problems.md
index dbda43dfbfa..a30a775af04 100644
--- a/optimize/self-managed/optimize-deployment/configuration/common-problems.md
+++ b/optimize/self-managed/optimize-deployment/configuration/common-problems.md
@@ -8,7 +8,7 @@ This section aims to provide initial help to troubleshoot common issues. This gu
## Optimize is missing some or all definitions
-It is possible that the user you are logged in as does not have the relevant authorizations to view all definitions in Optimize. Refer to the [authorization management section](./authorization-management.md#process-or-decision-definition-related-authorizations) to confirm the user has all required authorizations.
+It is possible that the user you are logged in as does not have the relevant authorizations to view all definitions in Optimize. Refer to the [authorization management section](#process-or-decision-definition-related-authorizations) to confirm the user has all required authorizations.
Another common cause of this type of problem is an issue with Optimize's data import, for example due to underlying problems with the engine data. In this case, the Optimize logs should contain more information on why Optimize is not importing the definition data correctly. If you are unsure how to interpret what you find in the logs, create a support ticket.
@@ -24,7 +24,9 @@ This often occurs when Elasticsearch is running out of disk space. If this is th
## Exception indicating an error while checking the engine version
-The most common cause for this issue is that the engine endpoint Optimize uses is not configured correctly. Check your [configuration](./system-configuration-platform-7.md) and ensure the engine REST URL is set correctly.
+
+
+The most common cause for this issue is that the engine endpoint Optimize uses is not configured correctly. Check your [configuration](#) and ensure the engine REST URL is set correctly.
## Server language results in UI/server errors
@@ -34,7 +36,6 @@ When Optimize is running with its language set to one with characters that it ca
Always check the migration and update instructions for the version you are migrating from:
-- For Camunda 7, refer to the [Camunda 7 migration guide](./../migration-update/camunda-8/instructions.md).
- For Camunda 8, refer to the [Camunda 8 migration guide](./../migration-update/camunda-8/instructions.md).
These guides often document known issues along with their solutions, which might already address the problem you're encountering.
diff --git a/optimize/self-managed/optimize-deployment/configuration/event-based-processes.md b/optimize/self-managed/optimize-deployment/configuration/event-based-processes.md
deleted file mode 100644
index efc9f48f2f4..00000000000
--- a/optimize/self-managed/optimize-deployment/configuration/event-based-processes.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-id: event-based-process-configuration
-title: "Event-based process system configuration"
-description: "How to configure event-based processes in Optimize."
----
-
-Camunda 7 only
-
-Configuration of the Optimize event-based process feature.
-
-| YAML path | Default value | Description |
-| -------------------------------------------------------------- | ------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| eventBasedProcess.authorizedUserIds | [ ] | A list of userIds that are authorized to manage (Create, Update, Publish & Delete) event-based processes. |
-| eventBasedProcess.authorizedGroupIds | [ ] | A list of groupIds that are authorized to manage (Create, Update, Publish & Delete) event-based processes. |
-| eventBasedProcess.eventImport.enabled | false | Determines whether this Optimize instance performs event-based process instance import. |
-| eventBasedProcess.eventImport.maxPageSize | 5000 | The batch size of events being correlated to process instances of event-based processes. |
-| eventBasedProcess.eventIndexRollover.scheduleIntervalInMinutes | 10 | The interval in minutes at which to check whether the conditions for a rollover of eligible indices are met, triggering one if required. This value should be greater than 0. |
-| eventBasedProcess.eventIndexRollover.maxIndexSizeGB | 50 | Specifies the maximum total index size for events (excluding replicas). When shards get too large, query performance can slow down and rolling over an index can bring an improvement. Using this configuration, a rollover will occur when triggered and the current event index size matches or exceeds the maxIndexSizeGB threshold. |
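-
-Expressed in the `environment-config.yaml`, a sketch of this section using the default values from the table above could look like:
-
-```yaml
-eventBasedProcess:
-  authorizedUserIds: []
-  authorizedGroupIds: []
-  eventImport:
-    enabled: false
-    maxPageSize: 5000
-  eventIndexRollover:
-    scheduleIntervalInMinutes: 10
-    maxIndexSizeGB: 50
-```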
-
-## Event ingestion REST API configuration
-
-Camunda 7 only
-
-Configuration of the Optimize [event ingestion REST API](../../../apis-tools/optimize-api/event-ingestion.md) for [event-based processes](components/userguide/additional-features/event-based-processes.md).
-
-| YAML path | Default value | Description |
-| ----------------------------------------------------- | ------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| eventBasedProcess.eventIngestion.maxBatchRequestBytes | 10485760 | Content length limit for an ingestion REST API bulk request in bytes. Requests exceeding that limit will be rejected. Defaults to 10MB. If this limit is raised, you should carefully tune the heap memory accordingly; see "Adjust Optimize heap size" for how to do that. |
-| eventBasedProcess.eventIngestion.maxRequests | 5 | The maximum number of event ingestion requests that can be serviced at any given time. |
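-
-As YAML, a sketch of these two settings with their default values would be:
-
-```yaml
-eventBasedProcess:
-  eventIngestion:
-    maxBatchRequestBytes: 10485760 # 10 MB
-    maxRequests: 5
-```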
diff --git a/optimize/self-managed/optimize-deployment/configuration/getting-started.md b/optimize/self-managed/optimize-deployment/configuration/getting-started.md
index e013f0f040b..d0114b3a6c2 100644
--- a/optimize/self-managed/optimize-deployment/configuration/getting-started.md
+++ b/optimize/self-managed/optimize-deployment/configuration/getting-started.md
@@ -16,12 +16,6 @@ Refer to the [configuration section on container settings](./system-configuratio
You can customize the [Elasticsearch/OpenSearch connection settings](./system-configuration.md#connection-settings) as well as the [index settings](./system-configuration.md#index-settings).
-## Camunda 7 configuration
-
-Camunda 7 only
-
-To perform an import and provide the full set of features, Optimize requires a connection to the REST API of the Camunda engine. For details on how to configure the connection to the Camunda 7, refer to the [Camunda 7 configuration section](./system-configuration-platform-7.md).
-
## Camunda 8 specific configuration
For Camunda 8, Optimize imports process data from exported Zeebe records as created by the [Zeebe Elasticsearch Exporter](https://github.com/camunda/camunda/tree/main/zeebe/exporters/elasticsearch-exporter) (or [Zeebe OpenSearch Exporter](https://github.com/camunda/camunda/tree/main/zeebe/exporters/opensearch-exporter)) from the same cluster that Optimize uses to store its own data. For the relevant configuration options, refer to the [Camunda 8 import configuration](./system-configuration-platform-8.md).
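
For illustration, a hedged sketch of the relevant `environment-config.yaml` section (key names as documented on the Camunda 8 import configuration page; values assumed):

```yaml
zeebe:
  # Toggle the import of Zeebe record data
  enabled: true
  # Record prefix as configured for the Zeebe exporter
  name: zeebe-record
  # Should match the partition count of the connected Zeebe cluster
  partitionCount: 1
```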
diff --git a/optimize/self-managed/optimize-deployment/configuration/history-cleanup.md b/optimize/self-managed/optimize-deployment/configuration/history-cleanup.md
index 773b1824d82..e7dbe7abd28 100644
--- a/optimize/self-managed/optimize-deployment/configuration/history-cleanup.md
+++ b/optimize/self-managed/optimize-deployment/configuration/history-cleanup.md
@@ -18,12 +18,6 @@ There are four types of history cleanup:
By default, all four types of history cleanup are disabled. They can be enabled individually by config and the cleanup is applied accordingly.
-:::note Note for Camunda 7 users
-By default, history cleanup is disabled in Optimize when running in Camunda 7. Before enabling it, you should consider the type of cleanup and time to live period that fits to your needs. Otherwise, historic data intended for analysis might get lost irreversibly.
-
-The default [engine history cleanup](https://docs.camunda.org/manual/latest/user-guide/process-engine/history/#history-cleanup) in Camunda 7 works differently than the one in Optimize due to the possible cleanup strategies. The current implementation in Optimize is equivalent to the [end time strategy](https://docs.camunda.org/manual/latest/user-guide/process-engine/history/#end-time-based-strategy) of the Engine.
-:::
-
## Setup
The most important settings are `cronTrigger` and `ttl`; their global default configuration is the following:
@@ -101,7 +95,7 @@ historyCleanup:
-The age of ingested event data is determined by the [`time`](../../../apis-tools/optimize-api/event-ingestion.md#request-body) field provided for each event at the time of ingestion.
+The age of ingested event data is determined by the [`time`](#request-body) field provided for each event at the time of ingestion.
To enable the cleanup of event data, the `historyCleanup.ingestedEventCleanup.enabled` property needs to be set to `true`.
@@ -112,8 +106,10 @@ historyCleanup:
enabled: true
```
+
+
:::note
-The ingested event cleanup does not cascade down to potentially existing [event-based processes](components/userguide/additional-features/event-based-processes.md) that may contain data originating from ingested events. To make sure data of ingested events is also removed from event-based processes, you need to enable the [Process Data Cleanup](#process-data-cleanup) as well.
+The ingested event cleanup does not cascade down to potentially existing [event-based processes](#) that may contain data originating from ingested events. To make sure data of ingested events is also removed from event-based processes, you need to enable the [Process Data Cleanup](#process-data-cleanup) as well.
:::
diff --git a/optimize/self-managed/optimize-deployment/configuration/license.md b/optimize/self-managed/optimize-deployment/configuration/license.md
deleted file mode 100644
index 8a54c8dd485..00000000000
--- a/optimize/self-managed/optimize-deployment/configuration/license.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-id: optimize-license
-title: "Optimize license key"
-description: "When you log in to Optimize for the first time, you are redirected to the license page where you can enter your license key."
----
-
-Camunda 7 only
-
-When you log in to Optimize for the first time, you are redirected to the license page. Here, enter your license key to be able to use Camunda Optimize.
-
-![Optimize license page with no license key in the text field and submit button below](img/license-guide.png)
-
-Alternatively, you can add a file with the license key to the path `${optimize-root-folder}/config/OptimizeLicense.txt`; it will be loaded into the database automatically unless the database already contains a license key.
-
-If you are using the Optimize Docker images and want Optimize to automatically recognize your license key, refer to the [installation guide](../../install-and-start#license-key-file) on how to achieve this.
diff --git a/optimize/self-managed/optimize-deployment/configuration/multi-tenancy.md b/optimize/self-managed/optimize-deployment/configuration/multi-tenancy.md
index 87ca00f8fc8..e2bcd2251ca 100644
--- a/optimize/self-managed/optimize-deployment/configuration/multi-tenancy.md
+++ b/optimize/self-managed/optimize-deployment/configuration/multi-tenancy.md
@@ -4,7 +4,7 @@ title: "Multi-tenancy"
description: "Learn about the supported multi-tenancy scenarios."
---
-Camunda 7 and Camunda 8 Self-Managed only
+Camunda 8 Self-Managed only
Multi-tenancy in the context of Camunda 8 refers to the ability of Camunda 8 to serve multiple distinct [tenants]($docs$/self-managed/identity/user-guide/tenants/managing-tenants/) or
clients within a single installation.
@@ -46,50 +46,3 @@ If required, the tenant authorization cache in Optimize can also be configured v
To ensure seamless integration and functionality, the multi-tenancy feature must also be enabled across **all** associated components [if not configured in Helm]($docs$/self-managed/concepts/multi-tenancy/) so users can view any data from tenants for which they have authorizations configured in Identity.
Find more information (including links to individual component configuration) on the [multi-tenancy concepts page]($docs$/self-managed/concepts/multi-tenancy/).
-
-## Possible Camunda 7 multi-tenancy scenarios
-
-As described in the [Camunda 7 documentation](https://docs.camunda.org/manual/latest/user-guide/process-engine/multi-tenancy/), there are two possible multi-tenant scenarios which are also supported by Optimize: [Single Camunda 7 process engine with tenant-identifiers](#single-camunda-7-process-engine-with-tenant-identifiers) and [One Camunda 7 process engine per tenant](#one-camunda-7-process-engine-per-tenant).
-
-## Single Camunda 7 process engine with tenant-identifiers
-
-Tenant identifiers available in the Camunda 7 engine are automatically imported into Optimize, and tenant-based access authorization is enforced based on the `Tenant Authorizations` configured in Camunda 7. This means there is no additional setup required for Optimize to support this multi-tenancy scenario.
-
-Users granted tenant access via Camunda 7 will be able to create and see reports for that particular tenant in Optimize. In the following screenshot, the user `demo` is granted access to data of the tenant with the id `firstTenant` and will be able to select that tenant in the report builder. Other users without that particular `firstTenant` authorization will neither be able to select that tenant in the report builder nor see the results of reports based on that tenant.
-
-![Tenant Authorization](img/admin-tenant-authorization.png)
-
-## One Camunda 7 process engine per tenant
-
-In the case of a multi-engine scenario where tenant-specific data is isolated by deploying to dedicated engines, there are no tenant identifiers present in the engines themselves. To support this scenario with a single Optimize instance that is configured to import from each of those engines, you must configure a `defaultTenant` for each of those engines.
-
-The effect of configuring a `defaultTenant` per engine is that this `defaultTenant` is automatically added to all data records imported from the particular engine for which no engine-side tenant identifier is present. Optimize users will be authorized to those default tenants based on whether they are authorized to access the particular engine the data originates from. In this scenario, it is therefore not necessary to configure any `Tenant Authorizations` in Camunda 7 itself.
-
-The following `environment-config.yaml` configuration snippet illustrates the configuration of this `defaultTenant` on two different engines.
-
-```yaml
-...
-engines:
- "engineTenant1":
- name: engineTenant1
- defaultTenant:
- # the id used for this default tenant on persisted entities
- id: tenant1
- # the name used for this tenant when displayed in the UI
- name: First Tenant
- ...
- "engineTenant2":
- name: engineTenant2
- defaultTenant:
- # the id used for this default tenant on persisted entities
- id: tenant2
- # the name used for this tenant when displayed in the UI
- name: Second Tenant
-...
-```
-
-Optimize users who have an `Optimize Application Authorization` on both engines will be able to distinguish between the data of both engines by selecting the corresponding tenant in the report builder.
-
-:::note Heads up!
-Once a `defaultTenant.id` is configured and data imported, you cannot change it anymore without doing a [full reimport](./../migration-update/camunda-7/instructions.md#force-reimport-of-engine-data-in-optimize) as any changes to the configuration cannot be applied to already imported data records.
-:::
diff --git a/optimize/self-managed/optimize-deployment/configuration/multiple-engines.md b/optimize/self-managed/optimize-deployment/configuration/multiple-engines.md
deleted file mode 100644
index 10fb4c5074f..00000000000
--- a/optimize/self-managed/optimize-deployment/configuration/multiple-engines.md
+++ /dev/null
@@ -1,114 +0,0 @@
----
-id: multiple-engines
-title: "Multiple process engines"
-description: "Learn how to set up multiple process engines with Optimize and which scenarios are supported."
----
-
-Camunda 7 only
-
-Learn how to set up multiple process engines with Optimize and which scenarios are supported.
-
-## Possible multiple process engine scenarios
-
-There are two possible setups where multiple process engines can be used:
-
-- [Multiple engines with distributed databases](#multiple-engines-with-distributed-databases)
-- [Multiple engines with a shared database](#multiple-engines-with-a-shared-database)
-
-Check which scenario corresponds to your setup, because connecting multiple engines to Optimize is not always the setup with the best import performance.
-
-:::note Heads Up!
-
-There are two restrictions for the multiple engines feature:
-
-1. The process engines are assumed to have distinct process definitions, which means that one process definition (same key, tenant and version) is not deployed on two or more engines at the same time.
- Alternatively, each engine could be configured with default tenant identifiers as described in the [One Tenant Per Engine Scenario](../multi-tenancy/#one-process-engine-per-tenant).
-2. The engines are assumed to have distinct tenant identifiers, which means one particular tenantId is not used on two or more engines at the same time.
-
-:::
-
-### Multiple engines with distributed databases
-
-In this scenario, you have multiple process engines and each engine has its own database as illustrated in the following diagram:
-
-![Clustered Engine with distributed Database](img/Clustered-Engine-Distributed-Database.png)
-
-Now, you are able to connect each engine to Optimize. The data will then automatically be imported into Optimize. The following diagram depicts the setup:
-
-![Multiple Engines connected to Optimize, each having its own Database](img/Multiple-Engine-Distributed-Database.png)
-
-To set up the connections to the engines, you need to add the information to the [configuration file](./system-configuration-platform-7.md). For the sake of simplicity, let's assume we have two microservices, `Payment` and `Inventory`, each having their own engine with its own database and processes. Both are accessible in the local network. The `Payment` engine has the port `8080` and the `Inventory` engine the port `1234`. Now an excerpt of the configuration could look as follows:
-
-```yaml
-engines:
- payment:
- name: default
- rest: http://localhost:8080/engine-rest
- authentication:
- enabled: false
- password: ""
- user: ""
- enabled: true
- inventory:
- name: default
- rest: http://localhost:1234/engine-rest
- authentication:
- enabled: false
- password: ""
- user: ""
- enabled: true
-```
-
-`payment` and `inventory` are custom names that were chosen so that you can later distinguish where the data was originally imported from.
-
-### Multiple engines with a shared database
-
-In this scenario you have multiple engines distributed in a cluster, where each engine instance is connected to a shared database. See the following diagram for an illustration:
-
-![Clustered Engine with shared Database](img/Clustered-Engine-Shared-Database.png)
-
-Now it could be possible to connect each engine to Optimize. However, since every engine accesses the same data through the shared database, Optimize would import the same engine data multiple times. There is also no guarantee that importing the same data multiple times will not cause data corruption. For this reason, we do not recommend applying the setup from [multiple engines with distributed databases](#multiple-engines-with-distributed-databases) to this scenario.
-
-In the scenario of multiple engines with a shared database, it might make sense to balance the workload on each engine during the import. You can place a load balancer between the engines and Optimize, which ensures that the data is imported only once and the load is distributed among all engines. Thus, Optimize would only communicate with the load balancer. The following diagram depicts the described setup:
-
-![Multiple Engines with shared Database connected to Optimize](img/Multiple-Engine-Shared-Database.png)
-
-In general, tests have shown that Optimize puts a very low strain on the engine, and its impact on the engine's operations is in almost all cases negligible.
-
-## Authentication and authorization in the multiple engine setup
-
-When you configure multiple engines in Optimize, each process engine can host different users with a different set of authorizations. When a user logs in, Optimize will try to authenticate and authorize the user on each configured engine. In case you are not familiar with how
-authentication/authorization works in a single engine scenario, visit the [User Access Management](./user-management.md) and [Authorization Management](./authorization-management.md) documentation first.
-
-To determine if a user is allowed to log in and which resources they are allowed to access within the multiple engine scenario, Optimize uses the following algorithm:
-
-_Given the user X logs into Optimize, go through the list of configured engines and try to authenticate the user X, for each successful authentication fetch the permissions of X for applications and process definitions from that engine and allow X to access Optimize if authorized by at least one engine._
-
-To give you a better understanding of how that works, let's take the following multiple engine scenario:
-
-```
-- Engine `payment`:
- - User without Optimize Application Authorization: Scooter, Walter
- - User with Optimize Application Authorization: Gonzo
- - Authorized Definitions for Gonzo, Scooter, Walter: Payment Processing
-
-- Engine `inventory`:
- - User with Optimize Application Authorization: Piggy, Scooter
- - Authorized Definitions for Piggy, Scooter: Inventory Checkout
-
-- Engine `order`:
- - User with Optimize Application Authorization: Gonzo
- - Authorized Definitions for Gonzo: Order Handling
-
-```
-
-Here are some examples that might help you to understand the authentication/authorization procedure:
-
-- If `Piggy` logged in to Optimize, she would be granted access to Optimize and can create reports for the definition `Inventory Checkout`.
-- If `Rizzo` logged in to Optimize, he would be rejected because the user `Rizzo` is not known to any engine.
-- If `Walter` logged in to Optimize, he would be rejected despite being authorized to access the definition `Payment Processing` on engine `payment` because `Walter` does not have the `Optimize Application Authorization` required to access Optimize.
-- If `Scooter` logged in to Optimize, he would be granted access to Optimize and can create reports for the definition `Inventory Checkout`. He wouldn't
-  get permissions for the `Payment Processing` or the `Order Handling` definition, since he doesn't have Optimize permissions on the `payment` or `order` engine.
-- If `Gonzo` logged in to Optimize, he would be granted access to Optimize and can create reports for the `Payment Processing` as well as the `Order Handling` definition, since definition authorizations are loaded from all engines the user could be authenticated with (in particular `payment` and `order`).
diff --git a/optimize/self-managed/optimize-deployment/configuration/object-variables.md b/optimize/self-managed/optimize-deployment/configuration/object-variables.md
index 2819e377311..b13507f6742 100644
--- a/optimize/self-managed/optimize-deployment/configuration/object-variables.md
+++ b/optimize/self-managed/optimize-deployment/configuration/object-variables.md
@@ -27,13 +27,11 @@ Similarly, the "contains" filter matches process instances whose list variable c
The values of list properties within objects, as well as variables which are lists of objects rather than primitives, can be inspected in the raw object variable value column available in raw data reports.
-## Variable plugins
-
-Any configured [variable plugins](../../plugins/variable-import-plugin) are applied _before_ Optimize creates the flattened property "sub variables", meaning the configured plugins have access to the raw JSON object variables only. Any modifications applied to the JSON object variables will then be persisted to the "sub variables" when Optimize flattens the resulting objects in the next step of the import cycle.
-
## Optimize configuration
-The import of object variable values is enabled by default and can be disabled using the `import.data.variable.includeObjectVariableValue` [configuration](./system-configuration-platform-7.md).
+
+
+The import of object variable values is enabled by default and can be disabled using the `import.data.variable.includeObjectVariableValue` [configuration](#).
## Other system configurations
diff --git a/optimize/self-managed/optimize-deployment/configuration/security-instructions.md b/optimize/self-managed/optimize-deployment/configuration/security-instructions.md
index b0ff9980154..13a220fd622 100644
--- a/optimize/self-managed/optimize-deployment/configuration/security-instructions.md
+++ b/optimize/self-managed/optimize-deployment/configuration/security-instructions.md
@@ -11,26 +11,13 @@ This page provides an overview of how to secure a Camunda Optimize installation.
This guide also identifies areas where we consider security issues to be relevant for the Camunda Optimize product and lists those in the subsequent sections. Compliance for those areas is ensured based on common industry best practices and influenced by security requirements of standards like the OWASP Top 10 and others.
-
-
-
-Camunda 7 only
-
-:::note Important!
-Optimize does not operate on its own, but needs the Camunda 7 engine to import the data from and Elasticsearch to store the data. A detailed description of the setup can be found in the [architecture overview](../advanced-features/import-guide.md) guide.
-:::
-
-The Camunda BPM platform with its process engine is a full standalone application which has a dedicated [security](https://docs.camunda.org/manual/latest/user-guide/security/) guide. The sections of major importance for the communication with Optimize are [enabling authentication for the REST API](https://docs.camunda.org/manual/latest/user-guide/security/#enabling-authentication-for-the-rest-api) and [enabling SSL/HTTPS](https://docs.camunda.org/manual/latest/user-guide/security/#enabling-ssl-https).
-
-
-
Optimize already comes with a myriad of settings and security mechanisms by default. In the following sections, you will find the parts that still need manual adjustment.
@@ -45,18 +32,6 @@ Over time, various client-side security mechanisms have been developed to protec
Optimize adds several of these headers which can be fine-tuned in the [configuration](./system-configuration.md#security) to ensure appropriate security.
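
For example, a sketch of the response header settings (keys and default values taken from the [system configuration](./system-configuration.md#security)) could look like:

```yaml
security:
  responseHeaders:
    HSTS:
      max-age: 63072000
      includeSubDomains: true
    X-XSS-Protection: 1; mode=block
```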
-## Authentication
-
-Camunda 7 only
-
-Authentication controls who can access Optimize. Read all about how to restrict the application access in the [user access management guide](./user-management.md).
-
-## Authorization
-
-Camunda 7 only
-
-Authorization controls what data a user can access and change in Optimize once authenticated. Authentication is a prerequisite to authorization. Read all about how to restrict the data access in the [authorization management guide](./authorization-management.md).
-
diff --git a/optimize/self-managed/optimize-deployment/configuration/setup-event-based-processes.md b/optimize/self-managed/optimize-deployment/configuration/setup-event-based-processes.md
deleted file mode 100644
index 7109c5567f5..00000000000
--- a/optimize/self-managed/optimize-deployment/configuration/setup-event-based-processes.md
+++ /dev/null
@@ -1,52 +0,0 @@
----
-id: setup-event-based-processes
-title: "Event-based processes"
-description: "Read everything about how to configure event-based processes in Optimize."
----
-
-Camunda 7 only
-
-Event-based processes are BPMN processes that can be created inside Optimize and that are based on events originating from external systems.
-
-Event ingestion is the process of sending event data from external systems to Camunda Optimize to support business processes that are not fully automated with Camunda 7 yet.
-Based on this data, it is possible to create process models inside Optimize - called event-based processes - that can be used in reports.
-
-To enable this feature, refer to [event-based process configuration](#event-based-process-configuration).
-
-## Event-based process configuration
-
-To make use of ingested events and create event-based process mappings for them, the event-based process feature needs to be enabled in the [Optimize configuration](./system-configuration.md).
-
-This also includes authorizing particular users (by their userId) or user groups (by their groupId) to create so-called event-based processes, which can be used by other users of Optimize once published.
-
-A full configuration example authorizing the user `demo` and all members of the `sales` user group to manage event-based processes, enabling the event-based process import as well as configuring a [Public API](./system-configuration.md#public-api) accessToken with the value `secret`, would look like the following:
-
- api:
- accessToken: secret
-
- eventBasedProcess:
- authorizedUserIds: ['demo']
- authorizedGroupIds: ['sales']
- eventImport:
- enabled: true
-
-## Use Camunda activity event sources for event based processes
-
-:::note Authorization to event-based processes
-When Camunda activity events are used in event-based processes, Camunda Admin authorizations are not inherited for the event-based process. The authorization to use an event-based process is solely managed via the access management of event-based processes when [publishing an event-based process](components/userguide/additional-features/event-based-processes.md#publishing-an-event-based-process) or at any time via the [Edit Access Option](components/userguide/additional-features/event-based-processes.md#event-based-process-list---edit-access) in the event-based process list.
-
-Visit [Authorization Management - event-based process](./authorization-management.md#event-based-processes) for the reasoning behind this behavior.
-:::
-
-To publish event-based processes that include [Camunda Event Sources](components/userguide/additional-features/event-based-processes.md#camunda-events), it is required to set [`engines.${engineAlias}.eventImportEnabled`](./system-configuration-platform-7.md) to `true` for the connected engine the Camunda process originates from.
-
-:::note Heads Up!
-You need to [reimport data](./../migration-update/camunda-7/instructions.md#force-reimport-of-engine-data-in-optimize) from this engine to have all historic Camunda events available for event-based processes. Otherwise, only new events will be included.
-:::
-
-As an example, to be able to create event processes based on Camunda events from the configured engine named `camunda-bpm`, the configuration of that engine needs both the `importEnabled` and the `eventImportEnabled` configuration properties set to `true`:
-
- engines:
- 'camunda-bpm':
- importEnabled: true
- eventImportEnabled: true
diff --git a/optimize/self-managed/optimize-deployment/configuration/system-configuration-platform-7.md b/optimize/self-managed/optimize-deployment/configuration/system-configuration-platform-7.md
deleted file mode 100644
index 2144aa97aaf..00000000000
--- a/optimize/self-managed/optimize-deployment/configuration/system-configuration-platform-7.md
+++ /dev/null
@@ -1,81 +0,0 @@
----
-id: system-configuration-platform-7
-title: "Camunda 7 system configuration"
-description: "Configuration for engines used to import data."
----
-
-Configuration for engines used to import data.
-
-:::note
-You must have at least one engine configured at all times. You can configure multiple engines to import data from. Each engine configuration should have a unique alias associated with it, represented by `${engineAlias}`.
-:::
-
-Note that each connected engine must have its respective history level set to `FULL` in order to see all available data
-in Optimize. Using any other history level will result in less data and/or functionality within Optimize. Furthermore,
-history in a connected engine should be configured for long enough for Optimize to import it. If data is removed from an
-engine before Optimize has imported it, that data will not be available in Optimize.
-
-| YAML path | Default value | Description |
-| ----------------------------------------------- | --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| engines.$\{engineAlias}.name | default | The process engine's name on the platform; this is the unique engine identifier in the platform's REST API. |
-| engines.$\{engineAlias}.defaultTenant.id | null | A default tenantID to associate all imported data with if there is no tenant configured in the engine itself. This property is only relevant in the context of a `One Process Engine Per Tenant` tenancy. For details consult the Multi-Tenancy documentation. |
-| engines.$\{engineAlias}.defaultTenant.name | null | The name used for this default tenant when displayed in the UI. |
-| engines.$\{engineAlias}.excludeTenant | [ ] | Comma-separated list of tenant IDs to be excluded when importing data from the specified engine. When left empty, data from all tenants will be imported. Please note that the `defaultTenant` cannot be excluded (and therefore also not the entities with `null` as tenant) |
-| engines.$\{engineAlias}.rest | http://localhost:8080/engine-rest | A base URL that will be used for connections to the Camunda Engine REST API. |
-| engines.$\{engineAlias}.importEnabled | true | Determines whether this instance of Optimize should import definition & historical data from this engine. |
-| engines.$\{engineAlias}.eventImportEnabled | false | Determines whether this instance of Optimize should convert historical data to event data usable for event based processes. |
-| engines.$\{engineAlias}.authentication.enabled | false | Toggles basic authentication on or off. When enabling basic authentication, please be aware that you also need to adjust the values of the user and password. |
-| engines.$\{engineAlias}.authentication.user | | When basic authentication is enabled, this user is used to authenticate against the engine. Note: when enabled, the user is required to have `READ` & `READ_HISTORY` permission on the Process and Decision Definition resources, as well as `READ` permission on _all_ ("\*") Authorization, Group, User, Tenant, Deployment & User Operation Log resources, to enable users to log in and Optimize to import the engine data. |
-| engines.$\{engineAlias}.authentication.password | | When basic authentication is enabled, this password is used to authenticate against the engine. |
-| engines.$\{engineAlias}.webapps.endpoint | http://localhost:8080/camunda | Defines the endpoint where the Camunda webapps are found. This allows Optimize to directly link to the other Camunda Web Applications, e.g. to jump from Optimize directly to a dedicated process instance in Cockpit |
-| engines.$\{engineAlias}.webapps.enabled | true | Enables/disables linking to other Camunda Web Applications |
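-
-Putting several of these keys together, a hedged sketch of a single engine entry in the `environment-config.yaml` (alias and credentials assumed) could look like:
-
-```yaml
-engines:
-  'camunda-bpm':
-    name: default
-    rest: 'http://localhost:8080/engine-rest'
-    importEnabled: true
-    eventImportEnabled: false
-    authentication:
-      enabled: true
-      user: optimize-service # assumed engine user with the permissions listed above
-      password: secret       # assumed
-    webapps:
-      endpoint: 'http://localhost:8080/camunda'
-      enabled: true
-```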
-
-## Camunda 7 common import settings
-
-Settings used by Optimize, which are common among all configured engines, such as
-REST API endpoint locations, timeouts, etc.
-
-| YAML path | Default value | Description |
-| --------------------------------------------------------- | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| engine-commons.connection.timeout | 0 | Maximum time in milliseconds without connection to the engine that Optimize should wait until a timeout is triggered. If set to zero, no timeout will be triggered. |
-| engine-commons.read.timeout | 0 | Maximum time a request to the engine should last before a timeout triggers. A value of zero means to wait an infinite amount of time. |
-| import.data.activity-instance.maxPageSize | 10000 | Determines the page size for historic activity instance fetching. |
-| import.data.incident.maxPageSize | 10000 | Determines the page size for historic incident fetching. |
-| import.data.process-definition-xml.maxPageSize | 2 | Determines the page size for process definition XML model fetching. Should be a low value, as large models will lead to memory or timeout problems. |
-| import.data.process-definition.maxPageSize | 10000 | Determines the page size for process definition entities fetching. |
-| import.data.process-instance.maxPageSize | 10000 | Determines the page size for historic process instance fetching. |
-| import.data.variable.maxPageSize | 10000 | Determines the page size for historic variable instance fetching. |
-| import.data.variable.includeObjectVariableValue | true | Controls whether Optimize fetches the serialized value of object variables from the Camunda Runtime REST API. By default, this is active for backwards compatibility. If no variable plugin to handle object variables is installed, it can be turned off to reduce the overhead of the variable import. Note: Disabling the object variable value transmission is only effective with Camunda 7.15.0+. |
-| import.data.user-task-instance.maxPageSize | 10000 | Determines the page size for historic User Task instance fetching. |
-| import.data.identity-link-log.maxPageSize | 10000 | Determines the page size for historic identity link log fetching. |
-| import.data.decision-definition-xml.maxPageSize | 2 | Determines the page size for decision definition xml model fetching. Should be a low value, as large models will lead to memory or timeout problems. |
-| import.data.decision-definition.maxPageSize | 10000 | Determines the page size for decision definition entities fetching. |
-| import.data.decision-instance.maxPageSize | 10000 | Overwrites the maximum page size for historic decision instance fetching. |
-| import.data.tenant.maxPageSize | 10000 | Overwrites the maximum page size for tenant fetching. |
-| import.data.group.maxPageSize | 10000 | Overwrites the maximum page size for groups fetching. |
-| import.data.authorization.maxPageSize | 10000 | Overwrites the maximum page size for authorizations fetching. |
-| import.data.dmn.enabled | true | Determines if the DMN/decision data, such as decision definitions and instances, should be imported. |
-| import.data.user-task-worker.enabled | true | Determines if the User Task worker data, such as assignee or candidate group of a User Task, should be imported. |
-| import.data.user-task-worker.metadata.includeUserMetaData | true | Determines whether Optimize imports and displays assignee user metadata, otherwise only the user id is shown. |
-| import.data.user-task-worker.metadata.cronTrigger | `0 */3 * * *` | Cron expression for when to fully refresh the internal metadata cache; it defaults to every third hour. Otherwise, deleted assignees/candidateGroups or metadata changes are not reflected in Optimize. You can use either the default Cron (5 fields) or the Spring Cron (6 fields) expression format here. For details on the format, please refer to: [Cron Expression Description](https://en.wikipedia.org/wiki/Cron), [Spring Cron Expression Documentation](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/scheduling/support/CronSequenceGenerator.html) |
-| import.data.user-task-worker.metadata.maxPageSize | 10000 | The max page size when multiple users or groups are iterated during the metadata refresh. |
-| import.data.user-task-worker.metadata.maxEntryLimit | 100000 | The entry limit of the cache that holds the metadata, if you need more entries you can increase that limit. When increasing the limit, keep in mind to account for that by increasing the JVM heap memory as well. Please refer to the "Adjust Optimize heap size" documentation. |
-| import.skipDataAfterNestedDocLimitReached | false | Some data can no longer be imported to a given document if its number of nested documents has reached the configured limit. Enable this setting to skip this data during import if the nested document limit has been reached. |
-| import.elasticsearchJobExecutorThreadCount\* | 1 | Number of threads being used to process the import jobs per data type that are writing data to the database. |
-| import.elasticsearchJobExecutorQueueSize\* | 5 | Adjust the queue size of the import jobs per data type that store data to the database. If the value is too large it might cause memory problems. |
-| import.handler.backoff.interval | 5000 | Interval in milliseconds which is used for the backoff time calculation. |
-| import.handler.backoff.max | 15 | Once all pages are consumed, the import scheduler component will start scheduling fetching tasks in increasing periods of time, controlled by a "backoff" counter. |
-| import.handler.backoff.isEnabled | true | Tells if the backoff is enabled or not. |
-| import.indexType | import-index | The name of the import index type. |
-| import.importIndexStorageIntervalInSec | 10 | States how often the import index should be stored to the database. |
-| import.currentTimeBackoffMilliseconds | 300000 | This is the time interval by which the import backs off from the current tip of time during the ongoing import cycle. This ensures that potentially missed concurrent writes in the engine are re-read by going back by this time interval. |
-| import.identitySync.includeUserMetaData | true | Whether to include metaData (firstName, lastName, email) when synchronizing users. If disabled only user IDs will be shown on user search and in collection permissions. |
-| import.identitySync.collectionRoleCleanupEnabled | false | Whether collection role cleanup should be performed. If enabled, users that no longer exist in the identity provider will be automatically removed from collection permissions. |
-| import.identitySync.cronTrigger | `0 */2 * * *` | Cron expression for when the identity sync should run; it defaults to every second hour. You can use either the default Cron (5 fields) or the Spring Cron (6 fields) expression format here. For details on the format, please refer to: [Cron Expression Description](https://en.wikipedia.org/wiki/Cron), [Spring Cron Expression Documentation](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/scheduling/support/CronSequenceGenerator.html) |
-| import.identitySync.maxPageSize | 10000 | The max page size when multiple users or groups are iterated during the import. |
-| import.identitySync.maxEntryLimit | 100000 | The entry limit of the user/group search cache. When increasing the limit, keep in mind to account for this by increasing the JVM heap memory as well. Please refer to the "Adjust Optimize heap size" documentation on how to configure the heap size. |
-
-\* Although this parameter includes `ElasticSearch` in its name, it applies to both ElasticSearch and OpenSearch installations. For backward compatibility reasons, the parameter has not been renamed.
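-
-For instance, a sketch of how a few of these import settings nest in the `environment-config.yaml` (values shown are the defaults from the table above):
-
-```yaml
-import:
-  data:
-    process-instance:
-      maxPageSize: 10000
-    variable:
-      includeObjectVariableValue: true
-  handler:
-    backoff:
-      interval: 5000
-      max: 15
-      isEnabled: true
-```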
diff --git a/optimize/self-managed/optimize-deployment/configuration/system-configuration.md b/optimize/self-managed/optimize-deployment/configuration/system-configuration.md
index e09bcdaac7b..10eeaeff5e5 100644
--- a/optimize/self-managed/optimize-deployment/configuration/system-configuration.md
+++ b/optimize/self-managed/optimize-deployment/configuration/system-configuration.md
@@ -88,8 +88,6 @@ These values control mechanisms of Optimize related security, e.g. security head
| |
| security.auth.token.lifeMin | 60 | Optimize uses token-based authentication to keep track of which users are logged in. Define the lifetime of the token in minutes. |
| security.auth.token.secret | null | Optional secret used to sign authentication tokens, it's recommended to use at least a 64-character secret. If set to `null` a random secret will be generated with each startup of Optimize. |
-| security.auth.superUserIds | [ ] | List of user IDs that are granted full permission to all collections, reports, and dashboards. Note: For reports, these users are still required to be granted access to the corresponding process/decision definitions in Camunda 7 Admin. See [Authorization Management](./authorization-management.md). |
-| security.auth.superGroupIds | [ ] | List of group IDs that are granted full permission to all collections, reports, and dashboards. All members of the specified groups will have superuser permissions in Optimize. Note: For reports, these groups are still required to be granted access to the corresponding process/decision definitions in Camunda 7 Admin. See [Authorization Management](./authorization-management.md). |
| security.responseHeaders.HSTS.max-age | 63072000 | HTTP Strict Transport Security (HSTS) is a web security policy mechanism which helps to protect websites against protocol downgrade attacks and cookie hijacking. This field defines the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS. If you set the number to a negative value no HSTS header is sent. |
| security.responseHeaders.HSTS.includeSubDomains | true | HTTP Strict Transport Security (HSTS) is a web security policy mechanism which helps to protect websites against protocol downgrade attacks and cookie hijacking. If this optional parameter is specified, this rule applies to all the site’s subdomains as well. |
| security.responseHeaders.X-XSS-Protection | 1; mode=block | This header enables the cross-site scripting (XSS) filter in your browser. Can have one of the following options: `0`: Filter disabled. `1`: Filter enabled; if a cross-site scripting attack is detected, the browser will sanitize the page in order to stop the attack. `1; mode=block`: Filter enabled; rather than sanitize the page, when an XSS attack is detected, the browser will prevent rendering of the page. `1; report=http://[YOURDOMAIN]/your_report_URI`: Filter enabled; the browser will sanitize the page and report the violation. This is a Chromium function utilizing CSP violation reports to send details to a URI of your choice. |
@@ -256,23 +254,6 @@ Settings influencing the process digest feature.
| ------------------ | --------------- | -------------------------------------------------------------------- |
| digest.cronTrigger | 0 0 9 \* \* MON | Cron expression to define when enabled email digests are to be sent. |
-### Alert notification webhooks
-
-Camunda 7 only
-
-Settings for webhooks which can receive custom alert notifications. You can configure multiple webhooks which will be available to select from when creating or editing alerts. Each webhook configuration should have a unique, human-readable name which will appear in the Optimize UI.
-
-| YAML path | Default value | Description |
-| --------------------------------------------------------- | ------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| webhookAlerting.webhooks.$\{webhookName}.url | | The URL of the webhook. |
-| webhookAlerting.webhooks.$\{webhookName}.headers | | A map of the headers of the request to be sent to the webhook. |
-| webhookAlerting.webhooks.$\{webhookName}.httpMethod | | The HTTP Method of the request to be sent to the webhook. |
-| webhookAlerting.webhooks.$\{webhookName}.defaultPayload | | The payload of the request to be sent to the webhook. This should include placeholder keys that allow you to define dynamic content. See [Alert Webhook Payload Placeholders](../webhooks#alert-webhook-payload-placeholders) for available values. |
-| webhookAlerting.webhooks.$\{webhookName}.proxy.enabled | | Whether an HTTP proxy should be used for requests to the webhook URL. |
-| webhookAlerting.webhooks.$\{webhookName}.proxy.host | | The proxy host to use, must be set if webhookAlerting.webhooks.$\{webhookName}.proxy.enabled = true. |
-| webhookAlerting.webhooks.$\{webhookName}.proxy.port | | The proxy port to use, must be set if webhookAlerting.webhooks.$\{webhookName}.proxy.enabled = true. |
-| webhookAlerting.webhooks.$\{webhookName}.proxy.sslEnabled | | Whether this proxy is using a secured connection (HTTPS). Must be set if webhookAlerting.webhooks.$\{webhookName}.proxy.enabled = true. |
-
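-For example, a minimal configuration using these keys might look as follows. This is a sketch only; the webhook name, URL, and proxy host are illustrative values:
-
-```yaml
-webhookAlerting:
-  webhooks:
-    'myProxiedWebhook':
-      url: 'https://alerts.example.com/optimize'
-      headers:
-        'Content-type': 'application/json'
-      httpMethod: 'POST'
-      defaultPayload: '{"text": "{{ALERT_MESSAGE}}"}'
-      # Route requests through a proxy; host, port, and sslEnabled are required when enabled is true
-      proxy:
-        enabled: true
-        host: 'proxy.example.com'
-        port: 3128
-        sslEnabled: false
-```
-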
### History cleanup settings
Settings for automatic cleanup of historic process/decision instances based on their end time.
diff --git a/optimize/self-managed/optimize-deployment/configuration/user-management.md b/optimize/self-managed/optimize-deployment/configuration/user-management.md
deleted file mode 100644
index 001faa9d4cc..00000000000
--- a/optimize/self-managed/optimize-deployment/configuration/user-management.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-id: user-management
-title: "User access management"
-description: "Define which users have access to Optimize."
----
-
-Camunda 7 only
-
-:::note Good to know!
-
-Providing Optimize access to a user just enables them to log in to Optimize. To be able
-to create reports, the user also needs to have permission to access the engine data. To see
-how this can be done, refer to the [Authorization Management](./authorization-management.md) section.
-:::
-
-You can use the credentials from the Camunda 7 users to access Optimize. However, for the users to gain access to Optimize, they need to be authorized. This is not done in Optimize itself, but needs to be configured in Camunda 7, and can be achieved on different levels with different options. If you do not know how authorization in Camunda works, visit the [authorization service documentation](https://docs.camunda.org/manual/latest/user-guide/process-engine/authorization-service/).
-
-When defining an authorization to grant Optimize access, the most important aspect is that you grant access on the resource type application with resource ID "optimize" (or "\*" if you want to grant access to all applications including Optimize). The permissions you can set are either `ALL` or `ACCESS`. They are treated equally, so there is no difference between them.
-
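-As a sketch, the same grant can also be created via the Camunda 7 REST API; the engine URL and user ID below are illustrative. `resourceType: 0` denotes the application resource type and `type: 1` a GRANT authorization:
-
-```bash
-# Grant the user "kermit" access to the "optimize" application
-curl -X POST http://localhost:8080/engine-rest/authorization/create \
-  -H 'Content-Type: application/json' \
-  -d '{
-        "type": 1,
-        "permissions": ["ALL"],
-        "userId": "kermit",
-        "resourceType": 0,
-        "resourceId": "optimize"
-      }'
-```
-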
-Authorizing users in admin can be done as follows:
-
-![Grant Optimize Access in Admin](img/Admin-GrantAccessAuthorizations.png)
-
-1. The first option allows access for Optimize on a global level. With this setting all users are allowed to log into Camunda Optimize.
-2. The second option defines the access for a single user. The user `Kermit` can now log into Camunda Optimize.
-3. The third option provides access on group level. All users belonging to the group `optimize-users` can log into Camunda Optimize.
-
-It is also possible to revoke the Optimize authorization for specific users or groups. For instance, you can grant Optimize access globally, but exclude the `engineers` group:
-
-![Revoke Optimize Access for group 'engineers' in Admin](img/Admin-RevokeGroupAccess.png)
-
-When Optimize is configured to load data from multiple Camunda 7 instances, it suffices for one instance to grant access for the user to be able to log into Optimize. Note that, as for all authorizations, grants take precedence over revokes: if one Camunda 7 instance grants a user access to Optimize, the user can log in even if another instance revokes access to Optimize for this user.
diff --git a/optimize/self-managed/optimize-deployment/configuration/webhooks.md b/optimize/self-managed/optimize-deployment/configuration/webhooks.md
deleted file mode 100644
index 68966a7d3bf..00000000000
--- a/optimize/self-managed/optimize-deployment/configuration/webhooks.md
+++ /dev/null
@@ -1,67 +0,0 @@
----
-id: webhooks
-title: "Webhooks"
-description: "Read about how to configure alert notification webhooks for alerts on custom systems."
----
-
-Camunda 7 only
-
-In addition to email notifications, you can configure webhooks in Optimize to receive alert notifications on custom systems. This page describes how to set up your webhook configurations using the example of a simple Slack app.
-
-## The alert webhook configuration
-
-You can configure a list of webhooks in the Optimize configuration, see [Alert Notification Webhooks](./system-configuration.md#alert-notification-webhooks) for available configuration properties.
-
-### Alert webhook payload placeholders
-
-The webhook request body can be customized to integrate with any HTTP endpoint that accepts a string-encoded payload.
-To make use of certain properties of an alert, you can use placeholders within the payload string.
-
-| Placeholder | Sample Value | Description |
-| ------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| ALERT_MESSAGE | Camunda Optimize - Report Status<br/>Alert name: Too many incidents<br/>Report name: Count of incidents<br/>Status: Given threshold [60.0] was exceeded. Current value: 186.0. Please check your Optimize report for more information!<br/>http://optimize.myorg:8090/#/report/id/ | This is the full alert message that is also used in the email alert content. |
-| ALERT_NAME | Some Alert | The name given to the alert when it was created. |
-| ALERT_REPORT_LINK | http://optimize.myorg/#/report/id/ | The direct link to the report the alert is based on. |
-| ALERT_CURRENT_VALUE | 186.0 | The current value of the number report the alert is based on. |
-| ALERT_THRESHOLD_VALUE | 60.0 | The configured alert threshold value. |
-| ALERT_THRESHOLD_OPERATOR | > | The threshold operator configured for the alert. |
-| ALERT_TYPE | new | The type of the alert notification. Can be one of:<br/>`new` - the threshold was just exceeded and the alert was triggered<br/>`reminder` - the threshold was exceeded previously already and this is a reminder notification<br/>`resolved` - the threshold is met again and the alert is resolved |
-| ALERT_INTERVAL | 5 | The configured interval at which the alert condition is checked. |
-| ALERT_INTERVAL_UNIT | seconds | The unit for the configured alert interval. Can be one of: seconds, minutes, hours, days, weeks, months |
-
-The placeholders can be used within the `defaultPayload` property of each webhook configuration:
-
-```yaml
-webhookAlerting:
-  webhooks:
-    'myWebhook':
-      ...
-      defaultPayload: 'The alert {{ALERT_NAME}} with the threshold of `{{ALERT_THRESHOLD_OPERATOR}}{{ALERT_THRESHOLD_VALUE}}` was triggered as *{{ALERT_TYPE}}*.'
-```
-
-### Example Webhook - Slack
-
-If your organization uses Slack, you can set up Optimize so that it can use a webhook to send alert notifications to a Slack channel of your choice.
-
-To configure the webhook in Optimize's `environment-config`, you first need to create a new Slack app for your organization's Slack workspace, as described in [Slack's own documentation here](https://api.slack.com/messaging/webhooks). You only need to follow the steps until you have your webhook URL; there is no need to write any code to post messages, as Optimize takes care of this for you. Once you have followed these steps, you can copy the webhook URL from Slack's "Webhook URLs for Your Workspace" section into the configuration as follows:
-
-```yaml
-webhookAlerting:
-  webhooks:
-    # Name of the webhook, must be unique.
-    'mySlackWebhook':
-      # URL of the webhook which can receive alerts from Optimize
-      url: 'https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX'
-      # Map of the headers of the request to be sent to the webhook URL
-      headers:
-        'Content-type': 'application/json'
-      # HTTP method for the webhook request
-      httpMethod: 'POST'
-      # The default payload structure with the alert message placeholder {{ALERT_MESSAGE}} for the alert text.
-      # Optimize will replace this placeholder with the content of the alert message.
-      defaultPayload: '{"text": "The alert *{{ALERT_NAME}}* was triggered as *{{ALERT_TYPE}}*, you can view the report <{{ALERT_REPORT_LINK}}|here>."}'
-```
-
-All configuration parameters are described in the [Alert Notification Webhooks Configuration Section](./system-configuration.md#alert-notification-webhooks).
-
-With this configuration, when you create an alert for a report in Optimize, `mySlackWebhook` will appear in the targets selection dropdown in the alert creation modal. Once you have selected the webhook from the dropdown and saved the alert, Optimize will send a message to the channel you selected when creating your Slack app whenever an alert notification is triggered. The content of the message is the same as the content of the alert email notifications. An alert may send email notifications, webhook notifications, or both.
diff --git a/optimize/self-managed/optimize-deployment/install-and-start.md b/optimize/self-managed/optimize-deployment/install-and-start.md
index a514fe0b8bf..a987b205a90 100644
--- a/optimize/self-managed/optimize-deployment/install-and-start.md
+++ b/optimize/self-managed/optimize-deployment/install-and-start.md
@@ -4,206 +4,6 @@ title: "Installation"
description: "Install and configure Optimize Self-Managed."
---
-## Camunda 8 stack
-
Please refer to the [Installation Guide]($docs$/self-managed/setup/overview/) for details on how to install Optimize as part of a Camunda 8 stack.
-## Camunda 7 Enterprise stack
-
-Camunda 7 only
-
-This document describes the installation process of Camunda Optimize, how to connect it to a Camunda 7 stack, and the various configuration options available after the initial installation.
-
-Before proceeding with the installation, read the article about [supported environments]($docs$/reference/supported-environments).
-
-### Local installation
-
-If you wish to run Camunda Optimize natively on your hardware, you can download one of the two available distributions and run it. The demo distribution is especially useful for trying out Camunda Optimize for the first time; it also comes with a simple demo process to explore the functionality.
-
-#### Prerequisites
-
-If you intend to run Optimize on your local machine, ensure you have a supported JRE (Java Runtime Environment) installed; refer to the [Java Runtime]($docs$/reference/supported-environments#camunda-8-self-managed) section to see which runtimes are supported.
-
-#### Demo distribution with Elasticsearch
-
-The Optimize Demo distribution comes with an Elasticsearch instance. The supplied Elasticsearch server is not customized or tuned by Camunda in any manner. It is intended to make the process of trying out Optimize as easy as possible. The only requirement in addition to the demo distribution itself is a running engine (ideally on localhost).
-
-To install the demo distribution containing Elasticsearch, download the archive with the latest version from the [download page](https://docs.camunda.org/enterprise/download/#camunda-optimize) and extract it to the desired folder. After that, start Optimize by running the script `optimize-demo.sh` on Linux and Mac:
-
-```bash
-./optimize-demo.sh
-```
-
-or `optimize-demo.bat` on Windows:
-
-```batch
-.\optimize-demo.bat
-```
-
-The script ensures that a local version of Elasticsearch is started and waits until it has become available. Then, it starts Optimize, ensures it is running, and automatically opens a tab in a browser to make it very convenient for you to try out Optimize.
-
-In case you need to start an Elasticsearch instance only, without starting Optimize (e.g. to perform a reimport), you can use the `elasticsearch-startup.sh` script:
-
-```bash
-./elasticsearch-startup.sh
-```
-
-or `elasticsearch-startup.bat` on Windows:
-
-```batch
-.\elasticsearch-startup.bat
-```
-
-#### Production distribution without a database
-
-This distribution is intended to be used in production. To install it, take the following steps:
-
-1. [Download](https://docs.camunda.org/enterprise/download/#camunda-optimize) the production archive, which contains all the required files to startup Camunda Optimize without a database.
-2. [Configure the database connection](./configuration/getting-started.md#elasticsearchopensearch-configuration) to connect to your pre-installed Elasticsearch/OpenSearch instance and [configure the Camunda 7 connection](./configuration/getting-started.md#camunda-platform-7-configuration) to connect Optimize to your running engine.
-3. Start your Optimize instance by running the script `optimize-startup.sh` on Linux and Mac:
-
-```bash
-./optimize-startup.sh
-```
-
-or `optimize-startup.bat` on Windows:
-
-```batch
-.\optimize-startup.bat
-```
-
-### Dockerized installation
-
-The Optimize Docker images can be used in production. They are hosted on our dedicated Docker registry and are available only to enterprise customers who have purchased Optimize. You can browse the available images in our [Docker registry](https://registry.camunda.cloud) after logging in with your credentials.
-
-Make sure to log in correctly:
-
-```
-$ docker login registry.camunda.cloud
-Username: your_username
-Password: ******
-Login Succeeded
-```
-
-After that, [configure the database connection](./configuration/getting-started.md#elasticsearchopensearch-configuration) to connect to your pre-installed Elasticsearch/OpenSearch instance and [configure the Camunda connection](./configuration/getting-started.md#camunda-platform-7-configuration) to connect Optimize to your running engine. For very simple use cases with only one Camunda Engine and one database node, you can use environment variables instead of mounting configuration files into the Docker container.
-
-#### Getting started with the Optimize Docker image
-
-##### Full local setup
-
-To start the Optimize Docker image and connect it to locally running Camunda 7 and Elasticsearch instances, you could run the following command:
-
-```
-docker run -d --name optimize --network host \
- registry.camunda.cloud/optimize-ee/optimize:{{< currentVersionAlias >}}
-```
-
-If you wish to connect to an OpenSearch database instead, additionally set the environment variable `CAMUNDA_OPTIMIZE_DATABASE` to `opensearch`:
-
-```
-docker run -d --name optimize --network host \
- -e CAMUNDA_OPTIMIZE_DATABASE=opensearch \
- registry.camunda.cloud/optimize-ee/optimize:{{< currentVersionAlias >}}
-```
-
-##### Connect to remote Camunda 7 and database
-
-If your Camunda 7 and Elasticsearch instances reside on a different host, however, you can provide their locations via the corresponding environment variables:
-
-```
-docker run -d --name optimize -p 8090:8090 -p 8091:8091 \
- -e OPTIMIZE_CAMUNDABPM_REST_URL=http://yourCamBpm.org/engine-rest \
- -e OPTIMIZE_ELASTICSEARCH_HOST=yourElasticHost \
- -e OPTIMIZE_ELASTICSEARCH_HTTP_PORT=9200 \
- registry.camunda.cloud/optimize-ee/optimize:{{< currentVersionAlias >}}
-```
-
-Alternatively, for OpenSearch:
-
-```
-docker run -d --name optimize -p 8090:8090 -p 8091:8091 \
- -e OPTIMIZE_CAMUNDABPM_REST_URL=http://yourCamBpm.org/engine-rest \
- -e CAMUNDA_OPTIMIZE_DATABASE=opensearch \
- -e CAMUNDA_OPTIMIZE_OPENSEARCH_HOST=yourOpenSearchHost \
- -e CAMUNDA_OPTIMIZE_OPENSEARCH_HTTP_PORT=9205 \
- registry.camunda.cloud/optimize-ee/optimize:{{< currentVersionAlias >}}
-```
-
-#### Available environment variables
-
-Only a limited set of configuration keys is exposed via environment variables. These mainly serve the purpose of testing and exploring Optimize. For production configurations, we recommend following the setup described in [configuration using a YAML file](#configuration-using-a-yaml-file).
-
-The most important environment variables you may have to configure are related to the connection to the Camunda 7 REST API, as well as Elasticsearch/OpenSearch:
-
-- `OPTIMIZE_CAMUNDABPM_REST_URL`: The base URL that will be used for connections to the Camunda Engine REST API (default: `http://localhost:8080/engine-rest`)
-- `OPTIMIZE_CAMUNDABPM_WEBAPPS_URL`: The endpoint where to find the Camunda web apps for the given engine (default: `http://localhost:8080/camunda`)
-
-For an Elasticsearch installation:
-
-- `OPTIMIZE_ELASTICSEARCH_HOST`: The address/hostname under which the Elasticsearch node is available (default: `localhost`)
-- `OPTIMIZE_ELASTICSEARCH_HTTP_PORT`: The port number used by Elasticsearch to accept HTTP connections (default: `9200`)
-- `CAMUNDA_OPTIMIZE_ELASTICSEARCH_SECURITY_USERNAME`: The username for authentication in environments where a secured Elasticsearch connection is configured.
-- `CAMUNDA_OPTIMIZE_ELASTICSEARCH_SECURITY_PASSWORD`: The password for authentication in environments where a secured Elasticsearch connection is configured.
-
-For an OpenSearch installation:
-
-- `CAMUNDA_OPTIMIZE_DATABASE`: The database type to connect to, in this case `opensearch` (default: `elasticsearch`)
-- `CAMUNDA_OPTIMIZE_OPENSEARCH_HOST`: The address/hostname under which the OpenSearch node is available (default: `localhost`)
-- `CAMUNDA_OPTIMIZE_OPENSEARCH_HTTP_PORT`: The port number used by OpenSearch to accept HTTP connections (default: `9205`)
-- `CAMUNDA_OPTIMIZE_OPENSEARCH_SECURITY_USERNAME`: The username for authentication in environments where a secured OpenSearch connection is configured.
-- `CAMUNDA_OPTIMIZE_OPENSEARCH_SECURITY_PASSWORD`: The password for authentication in environments where a secured OpenSearch connection is configured.
-
-A complete sample can be found within [Connect to remote Camunda 7 and database](#connect-to-remote-camunda-7-and-database).
-
-Furthermore, there are also environment variables specific to the [event-based process](components/userguide/additional-features/event-based-processes.md) feature that you may make use of:
-
-- `OPTIMIZE_CAMUNDA_BPM_EVENT_IMPORT_ENABLED`: Determines whether this instance of Optimize should convert historical data to event data usable for event-based processes (default: `false`)
-- `OPTIMIZE_EVENT_BASED_PROCESSES_USER_IDS`: An array of user IDs that are authorized to administer event-based processes (default: `[]`)
-- `OPTIMIZE_EVENT_BASED_PROCESSES_IMPORT_ENABLED`: Determines whether this Optimize instance performs event-based process instance import. (default: `false`)
-
-Additionally, there are also runtime-related environment variables such as:
-
-- `OPTIMIZE_JAVA_OPTS`: Allows you to configure/overwrite Java Virtual Machine (JVM) parameters; defaults to `-Xms1024m -Xmx1024m -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=256m`.
-
-In case you want to make use of the Optimize Public API, you can also set **one** of the following variables:
-
-- `SPRING_SECURITY_OAUTH2_RESOURCESERVER_JWT_JWK_SET_URI`: Complete URI to get public keys for JWT validation, e.g. `https://weblogin.cloud.company.com/.well-known/jwks.json`. For more details, see [public API authentication](../../apis-tools/optimize-api/optimize-api-authentication.md).
-- `OPTIMIZE_API_ACCESS_TOKEN`: A static shared secret token to be provided to the secured REST API in the authorization header (see the sketch below). It is ignored if `SPRING_SECURITY_OAUTH2_RESOURCESERVER_JWT_JWK_SET_URI` is also set. For more details, see [public API authentication](../../apis-tools/optimize-api/optimize-api-authentication.md).
-
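-As a minimal sketch, combining the static token with a Docker start and an API call could look like this (the token value is illustrative, and the API path is a placeholder; see the linked public API authentication documentation for the actual endpoints):
-
-```bash
-# Start Optimize with a static shared API token
-docker run -d --name optimize -p 8090:8090 -p 8091:8091 \
-  -e OPTIMIZE_API_ACCESS_TOKEN=mySecret \
-  registry.camunda.cloud/optimize-ee/optimize:{{< currentVersionAlias >}}
-
-# Call the secured REST API, passing the token in the authorization header
-curl -H 'Authorization: Bearer mySecret' http://localhost:8090/api/public/...
-```
-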
-You can also adjust logging levels using environment variables as described in the [logging configuration](./configuration/logging.md).
-
-#### License key file
-
-If you want the Optimize Docker container to automatically recognize your [license key file](./configuration/license.md), you can use standard [Docker means](https://docs.docker.com/storage/volumes/) to make the file with the license key available inside the container. Replace `{{< absolutePathOnHostToLicenseFile >}}` with the absolute path to the license key file on your host and run the following command:
-
-```
-docker run -d --name optimize -p 8090:8090 -p 8091:8091 \
- -v {{< absolutePathOnHostToLicenseFile >}}:/optimize/config/OptimizeLicense.txt:ro \
- registry.camunda.cloud/optimize-ee/optimize:{{< currentVersionAlias >}}
-```
-
-#### Configuration using a yaml file
-
-In a production environment, the limited set of [environment variables](#available-environment-variables) is usually not sufficient, so you will want to prepare a custom `environment-config.yaml` file. Refer to the [Configuration](./configuration/system-configuration.md) section of the documentation for the available configuration parameters.
-
-You need to mount this configuration file into the Optimize Docker container to apply it. Replace `{{< absolutePathOnHostToConfigurationFile >}}` with the absolute path to the `environment-config.yaml` file on your host and run the following command:
-
-```
-docker run -d --name optimize -p 8090:8090 -p 8091:8091 \
- -v {{< absolutePathOnHostToConfigurationFile >}}:/optimize/config/environment-config.yaml:ro \
- registry.camunda.cloud/optimize-ee/optimize:{{< currentVersionAlias >}}
-```
-
-In managed Docker container environments like [Kubernetes](https://kubernetes.io/), you may set this up using [ConfigMaps](https://kubernetes.io/docs/concepts/configuration/configmap/).
-
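-As a sketch, assuming a ConfigMap named `optimize-config` created from your configuration file, the mount could look like this (all names are illustrative):
-
-```yaml
-# kubectl create configmap optimize-config --from-file=environment-config.yaml
-# Pod spec excerpt mounting the file read-only into the expected location:
-spec:
-  containers:
-    - name: optimize
-      image: registry.camunda.cloud/optimize-ee/optimize:{{< currentVersionAlias >}}
-      volumeMounts:
-        - name: optimize-config
-          mountPath: /optimize/config/environment-config.yaml
-          subPath: environment-config.yaml
-          readOnly: true
-  volumes:
-    - name: optimize-config
-      configMap:
-        name: optimize-config
-```
-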
-### Usage
-
-You can start using Optimize right away by opening the following URL in your browser: [http://localhost:8090](http://localhost:8090)
-
-Then, you can use Camunda 7 users to log in to Optimize. For details on how to configure user access, consult the [user access management](./configuration/user-management.md) section.
-
-## Next steps
-
-To get started configuring the Optimize web container, Elasticsearch/OpenSearch, Camunda 7, Camunda 8, and more, visit the [getting started section](./configuration/getting-started.md) of our configuration documentation.
+To get started configuring the Optimize web container, Elasticsearch/OpenSearch, Camunda 8, and more, visit the [getting started section](./configuration/getting-started.md) of our configuration documentation.
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.1-to-2.2.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.1-to-2.2.md
deleted file mode 100644
index c7f9665f08a..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.1-to-2.2.md
+++ /dev/null
@@ -1,27 +0,0 @@
----
-id: 2.1-to-2.2
-title: "Update notes (2.1 to 2.2)"
----
-
-Camunda 7 only
-
-:::note Heads Up!
-To update Optimize to version 2.2.0, perform the following steps first: [Migration & Update Instructions](./instructions.md).
-:::
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in the supported environments
-- Any unexpected behavior of Optimize (e.g., due to a new feature)
-
-## Known issues
-
-When updating Optimize, certain features might not work out of the box for the old data. This is because old versions of Optimize
-do not fetch data that is necessary for the new feature to work. For this update, the following features do not work on the old data:
-
-- [Process instance parts](components/userguide/process-analysis/report-analysis/process-instance-parts.md)
-- [Canceled instances only filter](components/userguide/process-analysis/instance-state-filters.md#canceled-instances-only-filter)
-
-To enable these features for your old data, follow the steps in the [engine data reimport guide](./../../reimport.md).
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.2-to-2.3.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.2-to-2.3.md
deleted file mode 100644
index ff442b325b8..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.2-to-2.3.md
+++ /dev/null
@@ -1,55 +0,0 @@
----
-id: 2.2-to-2.3
-title: "Update notes (2.2 to 2.3)"
----
-
-Camunda 7 only
-
-:::note Heads Up!
-To update Optimize to version 2.3.0, perform the following steps first: [Migration & Update Instructions](./instructions.md).
-:::
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in the supported environments
-- Any unexpected behavior of Optimize (e.g., due to a new feature)
-
-## Known issues
-
-### Broken links
-
-After the migration, you might encounter some unusual errors in Optimize:
-
-- Buttons or links are not working when you click on them.
-- You get errors in your web browser when you open the Optimize page.
-
-In this case, clear your browser cache so your browser loads the new Optimize resources.
-
-### Broken raw data reports
-
-Apart from caching issues, there is the following list of known data update limitations:
-
-- Raw data reports with custom column order are broken, showing the following error when opened:
-
- ```javascript
- Cannot read property 'indexOf' of undefined
- ```
-
-  To resolve this, either delete and recreate those reports or update to 2.4.0, which resolves the issue.
-
-- Combined process reports might cause the reports page to crash with the following error:
-
- ```javascript
- Oh no :(
- Minified React error #130; visit http://facebook.github.io/react/docs/error-decoder.html?invariant=130&args[]=undefined&args[]= for the full message or use the non-minified dev environment for full errors and additional helpful warnings.
- ```
-
- To resolve this issue, update to 2.4.0 immediately.
-
-### Misinterpreted cron expressions
-
-The configuration of Optimize allows you to define when the history cleanup is triggered using cron expression notation. However, the values are incorrectly interpreted in Optimize. For example, the `historyCleanup.cronTrigger` configuration has the default value `0 1 * * *`, which should be 01:00 AM every day. Unfortunately, a bug causes this to be interpreted as every hour.
-
-To fix this, use the Spring [cron expression notation](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/scheduling/support/CronExpression.html). For instance, the default value for `historyCleanup.cronTrigger` would then be `0 0 1 * * *`.
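-
-For example, in your `environment-config.yaml` this would be (a minimal sketch):
-
-```yaml
-historyCleanup:
-  cronTrigger: '0 0 1 * * *'
-```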
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.3-to-2.4.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.3-to-2.4.md
deleted file mode 100644
index 1a61bfa6647..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.3-to-2.4.md
+++ /dev/null
@@ -1,41 +0,0 @@
----
-id: 2.3-to-2.4
-title: "Update notes (2.3 to 2.4)"
----
-
-Camunda 7 only
-
-:::note Heads Up!
-To update Optimize to version 2.4.0, perform the following steps first: [Migration & Update Instructions](./instructions.md).
-:::
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in the supported environments
-- Any unexpected behavior of Optimize (e.g., due to a new feature)
-
-## Changes in the supported environments
-
-With this Optimize version, the supported versions of Elasticsearch also change. Now, Optimize only connects to versions 6.2.0+. See the [Supported Environments]($docs$/reference/supported-environments) sections for details.
-
-Hence, you need to update Elasticsearch to use the new Optimize version. See the general [Elasticsearch Update Guide](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html) on how to do that. Usually, the only thing you need to do is to perform a [rolling update](https://www.elastic.co/guide/en/elasticsearch/reference/current/rolling-upgrades.html).
-
-## Known issues
-
-### Confusing warning during the update
-
-On executing the update, you may see the following warning a couple of times in the update log output:
-
-```
-Deprecated big difference between max_gram and min_gram in NGram Tokenizer, expected difference must be less than or equal to: [1]
-```
-
-You can safely ignore this warning. The update itself amends the relevant index settings so the warning will be resolved.
-
-### Misinterpreted cron expressions
-
-The configuration of Optimize allows you to define when the history cleanup is triggered using cron expression notation. However, the values are incorrectly interpreted in Optimize. For example, the `historyCleanup.cronTrigger` configuration has the default value `0 1 * * *`, which should be 01:00 AM every day. Unfortunately, a bug causes this to be interpreted as every hour.
-
-To fix this, use the Spring [cron expression notation](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/scheduling/support/CronExpression.html). For instance, the default value for `historyCleanup.cronTrigger` would then be `0 0 1 * * *`.
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.4-to-2.5.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.4-to-2.5.md
deleted file mode 100644
index 77566683f5e..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.4-to-2.5.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-id: 2.4-to-2.5
-title: "Update notes (2.4 to 2.5)"
----
-
-Camunda 7 only
-
-:::note Heads Up!
-To update Optimize to version 2.5.0, perform the following steps first: [Migration & Update Instructions](./instructions.md).
-:::
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in the supported environments
-- Any unexpected behavior of Optimize (e.g., due to a new feature)
-
-## Limitations
-
-If you intend to make use of the new [Multi-Tenancy-Feature](./../../configuration/multi-tenancy.md), you need to perform a [full reimport](../../reimport.md) and may need to amend your existing reports by selecting the tenant you want the report to be based on.
-
-## Known issues
-
-### Changes in the plugin system
-
-There are required changes for plugins implementing `VariableImportAdapter`.
-If you use such a plugin, perform the following steps:
-
-1. In the plugin, update the Optimize plugin dependency to version 2.5.
-2. The class `PluginVariableDto` now contains the new field `tenantId`. Depending on your plugin implementation, it might be necessary to handle this field so it is not lost on import.
-3. Build the new version of the plugin and replace the old `jar` with the new one.
-
-### Misinterpreted cron expressions
-
-The configuration of Optimize allows you to define when the history cleanup is triggered using cron expression notation. However, the values are incorrectly interpreted in Optimize. For example, the `historyCleanup.cronTrigger` configuration has the default value `0 1 * * *`, which should be 01:00 AM every day. Unfortunately, a bug causes this to be interpreted as every hour.
-
-To fix this, use the Spring [cron expression notation](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/scheduling/support/CronExpression.html). For instance, the default value for `historyCleanup.cronTrigger` would then be `0 0 1 * * *`.
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.5-to-2.6.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.5-to-2.6.md
deleted file mode 100644
index 40704fff008..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.5-to-2.6.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-id: 2.5-to-2.6
-title: "Update notes (2.5 to 2.6)"
----
-
-Camunda 7 only
-
-:::note Heads Up!
-To update Optimize to version 2.6.0, perform the following steps first: [Migration & Update Instructions](./instructions.md).
-:::
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in the supported environments
-- Any unexpected behavior of Optimize (e.g., due to a new feature)
-
-## New behavior of Optimize
-
-With the introduction of the new collection and permission concept, you might find the behavior of Optimize surprising at first; the following sections guide you through the changes.
-
-### Collection permissions & private reports
-
-With Optimize 2.6.0, a resource permission system is introduced. This system provides private report/dashboard entities in the **Home** section, as well as the possibility to manage permissions at the collection level in order to share collections with other Optimize users.
-
-This ultimately means that after the migration to Optimize 2.6.0, each user only sees the entities they originally created, including reports, dashboards, and collections. For other users to be able to access those entities, the entities need to be copied into a collection, and view access to this new collection must be granted to those users.
-
-#### Grant access to a private report
-
-Given the scenario that the user `john` owns a report `John's Report` that user `mary` used to access in Optimize 2.5.0, the user `john` can share this report with `mary` in Optimize 2.6.0 by following these steps:
-
-1. User `john` creates a collection, e.g. `John's Share`.
- ![Create a Collection](img/private_report_access_1_create_collection.png)
-1. User `john` grants user `mary` the viewer role on the collection `John's Share`.
- ![Create Permission for Mary](img/private_report_access_2_create_view_permission_mary.png)
-1. User `john` copies and moves the `John's Report` report to the `John's Share` collection.
- ![Copy Report 1](img/private_report_access_3_1_copy_report.png)
- ![Copy Report 2](img/private_report_access_3_2_copy_report.png)
-1. User `mary` will now see the collection `John's Share` in her **Home** section of Optimize.
- ![Mary sees shared collection](img/private_report_access_4_mary_sees_collection.png)
-
-#### Grant access to an existing collection
-
-Given the scenario that the user `john` owns a collection `John's Collection` that user `mary` used to access in Optimize 2.5.0, the user `john` can share this collection with `mary` in Optimize 2.6.0 by granting user `mary` a permission role on that collection. Refer to **Step 2** in [grant access to a private report](#grant-access-to-a-private-report).
-
-#### Super User role
-
-You can now grant users `Super User` permissions, which allow them to bypass the owner/collection permissions and access all available entities. This can, for example, be useful if entities are owned by users who are no longer available.
-
-To grant Super User permissions, see the [Authentication & Security Section](./../../configuration/system-configuration.md#security).
-
-## Known issues
-
-### Rebuild your Optimize plugins
-
-With Optimize 2.6.0, the plugin system was overhauled. For your plugins to continue to work, you have to rebuild them with the latest Optimize plugin artifact as an uber jar. Refer to the updated [plugin setup guide](./../../plugins/plugin-system.md#set-up-your-environment).
-
-### Misinterpreted cron expressions
-
-The configuration of Optimize allows you to define when the history cleanup is triggered using cron expression notation. However, the values are incorrectly interpreted in Optimize. For example, the `historyCleanup.cronTrigger` configuration has the default value `0 1 * * *`, which should be 01:00 AM every day. Unfortunately, a bug causes this to be interpreted as every hour.
-
-To fix this, use the Spring [cron expression notation](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/scheduling/support/CronExpression.html). For instance, the default value for `historyCleanup.cronTrigger` would then be `0 0 1 * * *`.
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.6-to-2.7.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.6-to-2.7.md
deleted file mode 100644
index 5bf105c8018..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.6-to-2.7.md
+++ /dev/null
@@ -1,59 +0,0 @@
----
-id: 2.6-to-2.7
-title: "Update notes (2.6 to 2.7)"
----
-
-Camunda 7 only
-
-:::note Heads Up!
-To update Optimize to version 2.7.0, perform the following steps first: [Migration & Update Instructions](./instructions.md).
-:::
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in the supported environments
-- Any unexpected behavior of Optimize (e.g., due to a new feature)
-
-## Changes in the supported environments
-
-With this Optimize version, there are also changes in the supported versions of Elasticsearch and Camunda 7.
-
-### Elasticsearch
-
-Optimize now requires at least Elasticsearch `6.4.0`.
-See the [Supported Environments]($docs$/reference/supported-environments) sections for the full range of supported versions.
-
-If you need to update your Elasticsearch cluster, refer to the general [Elasticsearch Update Guide](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html) on how to do that. Usually, the only thing you need to do is perform a [rolling update](https://www.elastic.co/guide/en/elasticsearch/reference/current/rolling-upgrades.html).
-
-### Camunda 7
-
-Optimize now requires at least Camunda 7 `7.10.6`.
-See the [Supported Environments]($docs$/reference/supported-environments) sections for the full range of supported versions.
-
-### Java
-
-Optimize now only supports Java 8, 11, and 13. Support for 12 was dropped as it reached [end of support](https://www.oracle.com/technetwork/java/java-se-support-roadmap.html).
-See the [Supported Environments]($docs$/reference/supported-environments/) sections for the full range of supported versions.
-
-## Known issues
-
-### Collection permissions get lost on failed identity sync
-
-Optimize has an identity synchronization in place that fetches all users from the engine that have access to Optimize. By doing this, Optimize can easily check if the user is allowed to access the application and is able to quickly display metadata, such as the email address and full name of the user.
-
-If you start Optimize `2.7` and the engine is down at the time of a user synchronization, it is possible that you will lose all your collection permissions. This is because Optimize cannot receive the correct authorizations for the collections, and as a result all the collection roles are removed.
-
-The easiest way to recover your permissions and regain access to your collections would be to add a user ID to the `auth.superUserIds` property of your [configuration file](./../../configuration/system-configuration.md#security), and then re-add the necessary permissions as this user.
-
-After you have regained the roles of your collections, you should consider one of the two next follow-up steps:
-
-- Preferred solution: Update to Optimize 3.2.0 to fix the issue.
-- Interim solution: In case you anticipate the engine being taken down, we recommend also stopping Optimize to prevent the same scenario from reoccurring. In addition, you can change the frequency at which this collection cleanup occurs by adjusting the `import.identitySync.cronTrigger` expression in your [configuration file](./../../configuration/system-configuration.md#security) to `0 0 1 * * *`, which executes the sync once per day at 01:00 AM, as shown in the sketch below.
-
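-In your `environment-config.yaml`, that adjustment would be a minimal sketch along these lines:
-
-```yaml
-import:
-  identitySync:
-    cronTrigger: '0 0 1 * * *'
-```
-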
-### Misinterpreted cron expressions
-
-The configuration of Optimize allows you to define when the history cleanup is triggered using cron expression notation. However, the values are incorrectly interpreted in Optimize. For example, the `historyCleanup.cronTrigger` configuration has the default value `0 1 * * *`, which should be 01:00 AM every day. Unfortunately, a bug causes this to be interpreted as every hour.
-
-To fix this, use the Spring [cron expression notation](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/scheduling/support/CronExpression.html). For instance, the default value for `historyCleanup.cronTrigger` would then be `0 0 1 * * *`.
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.7-to-3.0.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.7-to-3.0.md
deleted file mode 100644
index 81c0db8fe39..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/2.7-to-3.0.md
+++ /dev/null
@@ -1,92 +0,0 @@
----
-id: 2.7-to-3.0
-title: "Update notes (2.7 to 3.0)"
----
-
-Camunda 7 only
-
-:::note Heads Up!
-To update Optimize to version 3.0.0, perform the following steps first: [Migration & Update Instructions](./instructions.md).
-
-If you have done an Optimize update prior to this one, note the [changes in the update procedure](#changes-in-the-update-procedure).
-
-:::
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in the supported environments
-- Any unexpected behavior of Optimize (e.g., due to a new feature)
-
-## Known issues
-
-### Potential NullPointerException on update to 3.0.0
-
-In some circumstances, the update to 3.0.0 might fail with the following log output:
-
-```
- 06:00:00.000 - Starting step 1/9: UpdateIndexStep
- ...
- 06:00:02.066 - Error while executing update from 2.7.0 to 3.0.0
- java.lang.NullPointerException: null
- at org.camunda.optimize.upgrade.steps.schema.UpdateIndexStep.execute(UpdateIndexStep.java:71)
- ...
-```
-
-This is a known issue that occurs if you previously updated to Optimize 2.7.0. You can solve this issue by executing the following command on your Elasticsearch cluster before running the update again.
-
-```
-curl -s -XDELETE :9200/optimize-event_v2-000001
-```
-
-The update should now successfully complete.
-
-### Cannot disable import from particular engine
-
-In 3.0.0, it is not possible to deactivate the import of a particular Optimize instance from a particular engine (via `engines.${engineAlias}.importEnabled`). In case your environment uses that feature, e.g. for a [clustering setup](./../../configuration/clustering.md), we recommend staying on Optimize 2.7.0 until the release of Optimize 3.1.0 (scheduled for 14/07/2020) and then updating straight to Optimize 3.1.0.
-
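-For reference, the affected configuration key looks like this (the engine alias `camunda-bpm` is illustrative):
-
-```yaml
-engines:
-  'camunda-bpm':
-    importEnabled: false
-```
-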
-## Limitations
-
-### User operation log import
-
-Optimize now imports the user operation log. Due to this, the engine user now requires engine permissions to read the user operation log; see also the [configuration documentation](./../../configuration/system-configuration-platform-7.md).
-
-### Suspension filter
-
-Due to a limitation of the user operations log data retrieval in the engine API, process instance suspension states of instances suspended after Optimize has been started are not correctly imported. This leads to inaccuracies in the [Suspended Instances Only Filter](components/userguide/process-analysis/instance-state-filters.md#suspended-and-non-suspended-instances-only-filter), which will only apply to instances which were suspended before they were imported by Optimize.
-
-Furthermore, since the suspension state of process instances in Optimize is updated according to historic data logs, if you have [history cleanup](./../../configuration/history-cleanup.md) enabled it is possible that the relevant data will be cleaned up before Optimize can import it, leading to inaccuracies in the state of suspended process instances which will then not appear in the appropriate filter.
-
-### Event-based processes
-
-There might be cases where an incorrect, lower-than-expected number of events is shown when mapping process start and end events to nodes in your event-based process, or when mapping multiple engine task events from the same engine model.
-
-These are known issues and are [fixed](https://jira.camunda.com/browse/OPT-3515) in the upcoming Optimize 3.1.0 release. If you use that version or newer, you can correct previously imported data in your event-based process either by recreating or republishing it.
-
-Alternatively, [forcing a reimport](./instructions.md#force-reimport-of-engine-data-in-optimize) of the engine data after updating to a version with this fix will also correct these errors.
-
-## Changes in the update procedure
-
-Although Optimize 3.0.0 is a major version change, we still allow a rolling update from 2.7 to the new version. However, since the supported Elasticsearch version changed to the latest major version 7.X, an additional step is involved in the update routine.
-
-Before you can perform the actual update, you need to do a [rolling update](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html) of Elasticsearch from 6.X to 7.X. The exact details can be found in the [Migration & Update Instructions](./instructions.md).
-
-Please note that the following updates are not supported by Elasticsearch:
-
-- 6.8 to 7.0.
-- 6.7 to 7.1–7.6.X.
-
-## Changes in the supported environments
-
-With this Optimize version, there are also changes in the supported versions of Elasticsearch and Camunda 7.
-
-### Elasticsearch
-
-Optimize now requires at least Elasticsearch `7.0.0` and supports the latest major version up to `7.6.0`.
-See the [Supported Environments]($docs$/reference/supported-environments) sections for the full range of supported versions.
-
-In case you need to update your Elasticsearch cluster, refer to the general [Elasticsearch Update Guide](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html) on how to do that. Usually, the only thing you need to do is to perform a [rolling update](https://www.elastic.co/guide/en/elastic-stack/current/upgrading-elasticsearch.html#rolling-upgrades). There's also a dedicated section in the [Migration & Update Instructions](./instructions.md) on how to perform the rolling update.
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.0-to-3.1.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.0-to-3.1.md
deleted file mode 100644
index e7cfca376f4..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.0-to-3.1.md
+++ /dev/null
@@ -1,69 +0,0 @@
----
-id: 3.0-to-3.1
-title: "Update notes (3.0 to 3.1)"
----
-
-Camunda 7 only
-
-:::note Heads Up!
-To update Optimize to version 3.1.0, perform the following steps first: [Migration & Update Instructions](./instructions.md).
-:::
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in the supported environments
-- Any unexpected behavior of Optimize (e.g., due to a new feature)
-
-## Changes in the supported environments
-
-With this Optimize version, there are also changes in the supported versions of Camunda 7.
-
-### Camunda 7
-
-Optimize now requires at least Camunda 7 `7.11.13`.
-See the [Supported Environments]($docs$/reference/supported-environments) sections for the full range of supported versions.
-
-## Breaking changes
-
-With Optimize 3.1.0, the [History Cleanup](./../../configuration/history-cleanup.md) configuration was restructured and needs to be adjusted accordingly.
-
-Major changes are the removal of the global feature flag `historyCleanup.enabled` in favor of entity type specific feature flags as well as a relocation of process and decision specific configuration keys. Refer to the [configuration documentation](./../../configuration/system-configuration.md#history-cleanup-settings) for details.
-
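-As an illustration only (verify the exact key names against the configuration documentation linked above), the restructured layout separates the feature flags per entity type:
-
-```yaml
-historyCleanup:
-  processDataCleanup:
-    enabled: true
-  decisionDataCleanup:
-    enabled: true
-```
-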
-With this release, Optimize now imports deployment data from the engine when importing definitions. If Optimize is importing from an authenticated engine, the configured user must now have READ permission on the `Deployment` resource.
-
-## Known issues
-
-### Event-based processes - event counts/suggestions
-
-As part of the update from Optimize 3.0 to 3.1, the event counts and the next suggested events used as part of the event-based process feature are recalculated. Until the recalculation is complete, the event counts might be incorrect and the suggestions inaccurate.
-
-Once the recalculation is complete, the event counts will return to being correct and you will see more accurate suggested next events.
-
-### Decision report filter incompatibilities - update and runtime errors possible
-
-Due to a restriction in the database schema for decision reports, the usage of filters is limited in Optimize 3.1.0 as well as 3.2.0 and will only be fully working again in Optimize 3.3.0.
-This results in the behavior that once a certain filter type has been used (e.g. a fixed evaluation date filter), another filter type (e.g. a relative evaluation date filter) can no longer be used. This issue can occur at runtime as well as during the update.
-
-Usually, you will see a log similar to this one when you hit this issue:
-
-```
-{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"object mapping for [data.filter.data.start] tried to parse field [start] as object, but found a concrete value"}],"type":"mapper_parsing_exception","reason":"object mapping for [data.filter.data.start] tried to parse field [start] as object, but found a concrete value"},"status":400}
-```
-
-_We thus recommend removing all filters used on decision reports before updating to Optimize 3.1.0._
-
-## Limitations
-
-### User permissions
-
-With Optimize 3.1, user- and group-related permissions are checked by Optimize to determine whether the current user is authorized to access other users/groups within Optimize, for example when adding new roles to a collection.
-
-Due to this, it is now required to explicitly grant users the relevant authorizations, otherwise they will not be able to see other users and groups in Optimize. More information on authorizations can be found [here](./../../configuration/authorization-management.md#user-and-group-related-authorizations).
-
-### User operations log import
-
-With Optimize 3.1, the user operations log is imported to detect changes to running instances' suspension status. The user operations log informs Optimize when instance suspension requests have been received by the engine, and Optimize then reimports the relevant instances to ensure their suspension state is set correctly in Optimize.
-
-However, if instances are suspended using the engine API's `executionDate` parameter, with which suspension operations can be triggered with a delay, Optimize currently is not able to detect this delay, and will re-import the running process instances at the time the suspension operation is read from the user operations log, not at the time the suspension takes place. This can lead to inaccuracies in the suspension state of process instances in Optimize.
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.1-to-3.2.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.1-to-3.2.md
deleted file mode 100644
index 9a379640e2d..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.1-to-3.2.md
+++ /dev/null
@@ -1,62 +0,0 @@
----
-id: 3.1-to-3.2
-title: "Update notes (3.1 to 3.2)"
----
-
-Camunda 7 only
-
-:::note Heads Up!
-To update Optimize to version 3.2.0, perform the following steps first: [Migration & Update Instructions](./instructions.md).
-:::
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in the supported environments
-- Any unexpected behavior of Optimize (e.g., due to a new feature)
-
-## Known issues
-
-### Decision report filter incompatibilities - update and runtime errors possible
-
-Due to a restriction in the database schema for decision reports, the usage of filters is limited in Optimize 3.2.0 and will only be fully working again in Optimize 3.3.0.
-
-This results in the behavior that once a certain filter type has been used (e.g. a fixed evaluation date filter), another filter type (e.g. a relative evaluation date filter) can no longer be used. This issue can occur at runtime as well as during the update.
-
-Usually, you will see a log similar to this one when you hit this issue:
-
-```
-{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"object mapping for [data.filter.data.start] tried to parse field [start] as object, but found a concrete value"}],"type":"mapper_parsing_exception","reason":"object mapping for [data.filter.data.start] tried to parse field [start] as object, but found a concrete value"},"status":400}
-```
-
-_We thus recommend removing all filters used on decision reports before updating to Optimize 3.2.0._
-
-## Changes in the supported environments
-
-With this Optimize version, there are also changes in the supported versions of Elasticsearch.
-
-### Elasticsearch
-
-Optimize now supports Elasticsearch versions 7.7 and 7.8.
-
-See the [Supported Environments]($docs$/reference/supported-environments/) sections for the full range of supported versions.
-
-### Camunda 7
-
-Optimize now requires at least Camunda 7 `7.12.11`, and `7.11.x` is not supported anymore.
-See the [Supported Environments]($docs$/reference/supported-environments) sections for the full range of supported versions.
-
-## Unexpected behavior
-
-### Canceled flow node filter
-
-With this version, Optimize now allows you to filter for process instances where a given set of flow nodes have been canceled, as well as for flow nodes or user tasks that have been canceled.
-
-However, any canceled flow nodes and user tasks already imported by Optimize before this release will not appear as canceled in Optimize and will continue to be treated the same as any other completed flow node or user task. To use these options for previously imported data, you will need to [force a reimport](../../../reimport) from the engine.
-
-## Limitations
-
-### No running flow node instances visible if blocked by an incident
-
-Optimize 3.2.0 introduces the visibility of [incidents](components/userguide/process-analysis/metadata-filters.md#incident-filter), but in contrast to Camunda Cockpit, Optimize currently does not show flow node instances in flow node view reports for those flow node instances that are blocked by an incident.
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.10-to-3.11.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.10-to-3.11.md
deleted file mode 100644
index c455b994fba..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.10-to-3.11.md
+++ /dev/null
@@ -1,128 +0,0 @@
----
-id: 3.10-to-3.11
-title: "Update notes (3.10 to 3.11)"
----
-
-:::note Heads up!
-To update Optimize to version 3.11, perform the steps in the [migration and update instructions](./instructions.md).
-:::
-
-The update to 3.11 can be performed from any 3.10.x release.
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in supported environments
-- Changes in behavior (for example, due to a new feature)
-- Changes in translation resources
-
-## Known issues
-
-### Migration of Camunda 7 data for 3.11.0, 3.11.1, and 3.11.2
-
-Under some circumstances, the migration of Camunda 7 data for versions 3.11.0, 3.11.1, and 3.11.2 can cause issues with the tenant selection of report definitions and collection scopes. This only occurs if data is present in Elasticsearch with the value `zeebe` in the datasource field, which can happen if the `onboarding` dataset is used in Optimize, for example. To avoid this issue, we recommend Camunda 7 users skip 3.11.0, 3.11.1, and 3.11.2, and instead migrate straight to Optimize version 3.11.3.
-
-## Changes in supported environments
-
-### Elasticsearch
-
-With 3.11, Optimize now supports Elasticsearch `8.8`. Elasticsearch `8.5`, `8.6`, and `8.7` are no longer supported.
-Additionally, note that there are temporary changes in Optimize's Elasticsearch support, as detailed below:
-
-| Optimize version | Elasticsearch version |
-| --------------------------------------- | -------------------------------- |
-| Optimize 3.10.0 - Optimize 3.10.3 | 7.16.2+, 7.17.0+, 8.5.0+, 8.6.0+ |
-| Optimize 3.10.4 | 7.16.2+, 7.17.0+, 8.7.0+, 8.8.0+ |
-| Optimize 3.10.5 - Optimize 8.3.x/3.11.x | 7.16.2+, 7.17.0+, 8.5.0+, 8.6.0+ |
-| Optimize 3.11.x | 8.8.0+ |
-
-See the [supported environments]($docs$/reference/supported-environments) section for the full range of supported versions.
-
-If you need to update your Elasticsearch cluster, refer to the general [Elasticsearch update guide](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html). Usually, the only thing you need to do is perform a [rolling update](https://www.elastic.co/guide/en/elasticsearch/reference/current/rolling-upgrades.html).
-
-### Java
-
-With this release, the minimum version of Java that Optimize supports is now Java 17. See the [Supported Environments]($docs$/reference/supported-environments) sections for more information on supported versions.
-
-### Plugins
-
-Optimize now runs with Spring Boot 3. As a result, some plugin interfaces have been updated accordingly. More specifically, the [Engine Rest Filter Plugin](./../../plugins/engine-rest-filter-plugin.md) and the [Single-Sign-On Plugin](./../../plugins/single-sign-on.md) now import jakarta dependencies. If you use these plugins and are updating from version 3.10.3 or earlier, you will need to adjust your implementation accordingly.
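-
-For illustration, the adjustment usually amounts to swapping `javax` imports for their `jakarta` counterparts in your plugin sources; the exact classes depend on your implementation, and the one below is only an example:
-
-```java
-// Before (Optimize 3.10.3 and earlier, Spring Boot 2 based):
-import javax.ws.rs.client.ClientRequestContext;
-
-// After (Optimize 3.11, Spring Boot 3 based):
-import jakarta.ws.rs.client.ClientRequestContext;
-```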
-
-### Logging
-
-With the change to Spring Boot 3, Optimize's logging configuration format has also been updated. If you are updating from version 3.10.3 or earlier, please review the updated `environment-logback.xml` to make sure your configuration is valid.
-
-## Changes in behavior
-
-### Collection Role Cleanup
-
-Prior to Optimize 3.11, Optimize performed collection role cleanup after syncing identities with the engine. From
-Optimize 3.11 onwards, this is disabled by default. It can be re-enabled by setting the
-`import.identitySync.collectionRoleCleanupEnabled` property value to `true`.
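-
-A minimal sketch of the corresponding entry in `environment-config.yaml`, with the nesting derived from the property path above:
-
-```yaml
-import:
-  identitySync:
-    collectionRoleCleanupEnabled: true
-```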
-
-### API behavior
-
-Before the 3.11 release, the Optimize API would accept requests when the URI contained a trailing slash (`/`). This is no longer the case, and requests containing a trailing slash will no longer be matched to the corresponding API path.
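-
-For illustration (the endpoint shown is a placeholder):
-
-```
-GET /api/public/report    # matched, as before
-GET /api/public/report/   # no longer matched
-```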
-
-### Raw Data Report API
-
-:::caution
-These changes require you to adjust any integrations using the data mentioned below.
-:::
-
-The data structure of raw data reports has changed. For the data export API, the properties named `numberOfIncidents`, `numberOfOpenIncidents`, and `numberOfUserTasks` are renamed to `incidents`, `openIncidents`, and `userTasks` respectively, and grouped together in a single property named `counts`.
-
-Before:
-
-```
-{
- ...
- results: {
- ...
- measures: [
- {
- processDefinitionKey: 'someKey',
- numberOfIncidents: 1,
- numberOfOpenIncidents: 0,
- numberOfUserTasks: 1,
- ...
- },
- ...
- ]
- }
-}
-```
-
-After:
-
-```
-{
- ...
- results: {
- ...
- measures: [
- {
- processDefinitionKey: 'someKey',
- counts: {
- incidents: 1,
- openIncidents: 0,
- userTasks: 1
- },
- ...
- },
- ...
- ]
- }
-}
-```
-
-For CSV export the properties are renamed to `count:incidents`, `count:openIncidents`, and `count:userTasks`.
-
-### Localization file
-
-The following terms have been added to or removed from the localization file `en.json` since the last release:
-
-[en.json.diff](../translation-diffs/differences_localization_310_311.diff)
-
-- Lines with a `+` in the beginning mark the addition/update of a term; lines with a `-` mark the removal of a term.
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.11-to-3.12.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.11-to-3.12.md
deleted file mode 100644
index 02a7daf4248..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.11-to-3.12.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-id: 3.11-to-3.12
-title: "Update notes (3.11 to 3.12)"
----
-
-:::note Heads up!
-To update Optimize to version 3.12, perform the steps in the [migration and update instructions](./instructions.md).
-:::
-
-The update to 3.12 can be performed from any 3.11 release.
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in supported environments
-- Changes in behavior (for example, due to a new feature)
-- Changes in translation resources
-
-## Changes in supported environments
-
-### Elasticsearch
-
-With 3.12, Optimize now supports Elasticsearch `8.9`. Elasticsearch `8.8` is no longer supported.
-
-If you need to update your Elasticsearch cluster, refer to the general [Elasticsearch update guide](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html). Usually, the only thing you need to do is perform a [rolling update](https://www.elastic.co/guide/en/elasticsearch/reference/current/rolling-upgrades.html).
-
-### Localization file
-
-The following terms have been added to or removed from the localization file `en.json` since the last release:
-
-[en.json.diff](../translation-diffs/differences_localization_311_312.diff)
-
-- Lines with a `+` in the beginning mark the addition/update of a term; lines with a `-` mark the removal of a term.
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.12-to-3.13.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.12-to-3.13.md
deleted file mode 100644
index 0e3e8d1808f..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.12-to-3.13.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-id: 3.12-to-3.13
-title: "Update notes (3.12 to 3.13)"
----
-
-:::note Heads up!
-To update Optimize to version 3.13, perform the steps in the [migration and update instructions](./instructions.md).
-:::
-
-The update to 3.13 can be performed from any 3.12 release.
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in supported environments
-- Changes in behavior (for example, due to a new feature)
-- Changes in translation resources
-
-## Changes in supported environments
-
-### Camunda 7
-
-Optimize now supports up to `7.21.0+`.
-See the [supported environments]($docs$/reference/supported-environments/#camunda-platform-7--optimize-version-matrix) section for the full range of supported versions.
-
-## Changes in translation files
-
-### Localization file
-
-The following terms have been added to or removed from the localization file `en.json` since the last release:
-
-[en.json.diff](../translation-diffs/differences_localization_312_313.diff)
-
-- Lines with a `+` in the beginning mark the addition/update of a term; lines with a `-` mark the removal of a term.
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.13-to-3.14.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.13-to-3.14.md
deleted file mode 100644
index 024e4c56a24..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.13-to-3.14.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-id: 3.13-to-3.14
-title: "Update notes (3.13 to 3.14)"
----
-
-:::note Heads up!
-To update Optimize to version 3.14, perform the steps in the [migration and update instructions](./instructions.md).
-:::
-
-The update to 3.14 can be performed from any 3.13 release.
-
-For users of Optimize 3.7.3 with OpenSearch, there is a direct update path from 3.7.3 to 3.14. The required steps are described in the [migration and update instructions](./instructions.md).
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in supported environments
-- Changes in behavior (for example, due to a new feature)
-- Changes in translation resources
-
-## Limitations
-
-Not all Optimize features are supported when using OpenSearch as a database. For a full list of the features that are currently supported, please refer to the [Camunda 7 OpenSearch features](https://github.com/camunda/issues/issues/705).
-
-## Versioning
-
-As of Optimize 3.14, instances of Optimize running with Camunda 7 exclusively use the `3.x.x` versioning scheme. Instances of Optimize running with Camunda 8 exclusively use the `8.x.x` versioning scheme. This means you will only be able to update to Optimize 3.14 if you currently use Optimize 3.13/8.5 with Camunda 7. Optimize instances of versions 3.13/8.5 running with Camunda 8 cannot be upgraded to Optimize 3.14.
-
-To ensure that Optimize 8 upgrades are not applied to Optimize instances using Camunda 7, the 3.14 upgrade runs a check against the connected database before executing, and exits the upgrade if any Camunda 8 data is present in your setup. Specifically, it validates that there is no data present in the `position-based-import-index`, which is exclusively used for Camunda 8 data imports.
-
-Contact [Camunda support](https://camunda.com/services/support/) if you encounter issues upgrading to 3.14 in your Camunda Platform 7 environment.
-
-## Changes in behavior
-
-### Telemetry
-
-Optimize no longer gathers telemetry data; this functionality has been removed from the UI and Elasticsearch, and the associated configuration key (`telemetry.telemetryEndpoint`) was removed.
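-
-If your `environment-config.yaml` still contains the removed key, delete the entry; it looked roughly like this (the value is a placeholder):
-
-```yaml
-telemetry:
-  telemetryEndpoint: https://example.com/telemetry
-```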
-
-## Changes in supported environments
-
-### Camunda 7
-
-Optimize now requires at least Camunda 7 `7.20.0` and supports up to `7.22.0+`. Camunda 7 `7.19.x` is no longer supported.
-
-### Java
-
-Optimize now supports Java 21+.
-
-### Database
-
-Optimize now supports Elasticsearch 8.13.0+ or Amazon OpenSearch 2.9.0+.
-
-See the [supported environments]($docs$/reference/supported-environments/#component-requirements) documentation for the full range of supported versions.
-
-## Changes in translation files
-
-### Localization file
-
-The following terms have been added to or removed from the localization file `en.json` since the last release:
-
-[en.json.diff](../translation-diffs/differences_localization_313_314.diff)
-
-- Lines with a `+` in the beginning mark a term addition/update. Lines with a `-` mark a term removal.
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.2-to-3.3.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.2-to-3.3.md
deleted file mode 100644
index ef57ec15e5c..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.2-to-3.3.md
+++ /dev/null
@@ -1,62 +0,0 @@
----
-id: 3.2-to-3.3
-title: "Update notes (3.2 to 3.3)"
----
-
-Camunda 7 only
-
-:::note Heads Up!
-To update Optimize to version 3.3.0, perform the following steps first: [Migration & Update Instructions](./instructions.md).
-:::
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in the supported environments
-- Any unexpected behavior of Optimize (e.g., due to a new feature)
-
-## Known issues
-
-### Error during migration of dashboards when updating from Optimize 3.2.0 to 3.3.0
-
-During the update from Optimize 3.2.0 to 3.3.0, you may encounter the following error:
-
-```
-Starting step 6/7: UpdateIndexStep on index: dashboard
-Progress of task (id:FwvhN1jsRUe1JQD49-C3Qg:12009) on index optimize-dashboard_v4: 0% (total: 1, updated: 0, created: 0, deleted: 0)
-An Elasticsearch task that is part of the update failed: Error{type='script_exception', reason='runtime error', phase='null'}
-```
-
-This can happen if your environment previously ran an Optimize version older than 3.1.0 and at least one dashboard created with that version has not been manually edited or updated since.
-
-To recover from this situation, you can run the following update script on all Optimize dashboards on your Elasticsearch cluster:
-
-```
-curl --location --request POST 'localhost:9200/optimize-dashboard_v3/_update_by_query' \
---header 'Content-Type: application/json' \
---data-raw '{
- "script": {
- "source": "if (ctx._source.availableFilters == null) { ctx._source.availableFilters = [] }",
- "lang": "painless"
- }
-}'
-```
-
-Then, resume the update to Optimize 3.3.0 by simply rerunning it; Optimize updates are [resumable](https://camunda.com/blog/2021/01/camunda-optimize-3-3-0-released/#Resumable-Updates) as of Optimize 3.3.0.
-
-## Breaking changes
-
-### Renamed environment folder to config
-
-The `environment` folder, which holds all configuration files, has been renamed to `config`.
-
-### Elasticsearch
-
-Optimize no longer supports Elasticsearch versions 7.0, 7.1, or 7.2.
-See the [Supported Environments]($docs$/reference/supported-environments) sections for the full range of supported versions.
-
-### Docker image environment variables
-
-Previously, it was possible to use the `JAVA_OPTS` environment variable on the official Optimize Docker image to configure the JVM that runs Optimize. With Optimize 3.3.0, this variable was renamed to `OPTIMIZE_JAVA_OPTS`.
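-
-For example, a sketch of passing JVM options via the renamed variable (the image tag is a placeholder):
-
-```bash
-docker run -e OPTIMIZE_JAVA_OPTS="-Xms1g -Xmx1g" registry.camunda.cloud/optimize-ee/optimize:latest
-```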
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.3-to-3.4.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.3-to-3.4.md
deleted file mode 100644
index d042f1c64cb..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.3-to-3.4.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-id: 3.3-to-3.4
-title: "Update notes (3.3 to 3.4)"
----
-
-Camunda 7 only
-
-:::note Heads Up!
-To update Optimize to version 3.4.0, perform the following steps first: [Migration & Update Instructions](./instructions.md).
-
-:::
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in the supported environments
-- Any unexpected behavior of Optimize (e.g., due to a new feature)
-
-## Known issues
-
-When updating Optimize, certain features might not work out of the box for the old data. This is because old versions of Optimize
-do not fetch data that is necessary for the new features to work. For this update, the following features do not work on the old data:
-
-- [Process instance parts](components/userguide/process-analysis/report-analysis/process-instance-parts.md)
-- [Canceled instances only filter](components/userguide/process-analysis/instance-state-filters.md#canceled-instances-only-filter)
-
-To enable these features for your old data, follow the steps in the [engine data reimport guide](./../../reimport.md).
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.4-to-3.5.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.4-to-3.5.md
deleted file mode 100644
index ae907f2fd6f..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.4-to-3.5.md
+++ /dev/null
@@ -1,101 +0,0 @@
----
-id: 3.4-to-3.5
-title: "Update notes (3.4 to 3.5)"
----
-
-Camunda 7 only
-
-:::note Heads Up!
-To update Optimize to version 3.5.0, perform the following steps first: [Migration & Update Instructions](./instructions.md).
-:::
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in the supported environments
-- Any unexpected behavior of Optimize (e.g., due to a new feature)
-
-## Limitations
-
-### Migration warning regarding incomplete UserTasks
-
-The migration from Optimize 3.4 to 3.5 includes some improvements to the Optimize process instance data structure. Previously, process instance data in Optimize held two distinct lists: one for all FlowNode data and one for UserTask data. To avoid redundancy, these lists are merged into one during this migration.
-
-In order to merge the UserTask data contained in the two lists correctly, specific ID fields are used to correlate UserTasks. However, due to the nature of the Optimize import, UserTask data can temporarily exist within Optimize without some of these fields. Normally, these fields are updated by the next scheduled UserTask import, but if Optimize was shut down before this next UserTask import can run, the fields remain `null` and cannot be used during migration.
-
-Usually, this should only affect a small percentage of UserTasks, and for those, the data lost during migration relates only to the cancellation state or assignee/candidate group information. In practical terms, if you observe a warning regarding "x incomplete UserTasks that will be skipped during migration" in your update logs, this means that after the migration, x UserTasks in your system may be lacking assignee or candidate group information, or may be marked as completed when in fact they were canceled.
-
-Note that any other UserTask data, old and new, will be complete.
-
-If this inaccuracy in past data is not acceptable to you, you can remedy the data loss by performing a reimport after migration. You can either run a complete reimport using [the reimport script](../../../reimport), or use the statements below to reset only those imports responsible for the data that was skipped during migration.
-
-Ensure Optimize is shut down before executing these import resets.
-
-Reset the `identityLinkLog` import to reimport assignee and candidate group data:
-
-```
-curl --location --request DELETE 'http://{esHost}:{esPort}/{indexPrefix}-timestamp-based-import-index_v4/_doc/identityLinkLogImportIndex-{engineAlias}'
-```
-
-Reset the `completedActivity` import to reimport the correct cancellation state data:
-
-```
-curl --location --request DELETE 'http://{esHost}:{esPort}/{indexPrefix}-timestamp-based-import-index_v4/_doc/activityImportIndex-{engineAlias}'
-```
-
-For example, assuming Elasticsearch is at `localhost:9200`, the engine alias is `camunda-bpm`, and the index prefix is `optimize`, the request to reset the `identityLinkLog` import translates to:
-
-```
-curl --location --request DELETE 'http://localhost:9200/optimize-timestamp-based-import-index_v4/_doc/identityLinkLogImportIndex-camunda-bpm'
-```
-
-If you have more than one engine configured, both requests need to be executed once per engine alias.
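-
-A sketch for resetting both imports across several engines; the aliases below are placeholders for your configured engine aliases:
-
-```bash
-for ENGINE_ALIAS in camunda-bpm camunda-bpm-2; do
-  curl --location --request DELETE "http://localhost:9200/optimize-timestamp-based-import-index_v4/_doc/identityLinkLogImportIndex-${ENGINE_ALIAS}"
-  curl --location --request DELETE "http://localhost:9200/optimize-timestamp-based-import-index_v4/_doc/activityImportIndex-${ENGINE_ALIAS}"
-done
-```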
-
-## Known issues
-
-### Report edit mode fails for reports with flow node filters
-
-After updating to Optimize 3.5.0, you may encounter an issue where you cannot enter edit mode on
-reports that use flow node selection filters.
-
-In such a case, when entering edit mode, you are confronted with the following error in the Web UI:
-
-```
- Cannot read property 'key' of undefined
-```
-
-This error can be resolved by running the following Elasticsearch update query on your Optimize report index:
-
-```
-curl --location --request POST 'http://{esHost}:{esPort}/{indexPrefix}-single-process-report/_update_by_query' \
---header 'Content-Type: application/json' \
---data-raw '{
- "script" : {
- "source": "if(ctx._source.data.filter.stream().anyMatch(filter -> \"executedFlowNodes\".equals(filter.type)) && ctx._source.data.definitions.length == 1){for (filter in ctx._source.data.filter){filter.appliedTo = [ctx._source.data.definitions[0].identifier];}}",
- "lang": "painless"
- }
-}'
-```
-
-Applying this update query can be done anytime after the update to Optimize 3.5.0 was performed, even while Optimize 3.5.0 is already running.
-
-### Running 3.5 update on Optimize version 3.5 data results in NullPointerException
-
-The Optimize 3.5 update will not succeed if it is run on data which has already been updated to 3.5. This is because the 3.5 update relies on the 3.4 schema to be present in order to perform certain operations, which will fail with a `NullPointerException` if attempted on the 3.5 schema. This will cause the update to force quit. In this case, however, no further action is required as your data has already been updated to 3.5.
-
-## Unexpected behavior
-
-### Flow node selection in report configuration moved to flow node filter
-
-The flow node selection previously found in the report configuration menu has now been migrated to the flow node filter dropdown as a ["Flow Node Selection" Filter](components/userguide/process-analysis/flow-node-filters.md#flow-node-selection). Existing flow node selection configurations in old reports will be migrated to an equivalent Filter with the Optimize 3.5.0 migration. Note that this filter now also filters out instances which do not contain any flow nodes that match the filter.
-
-## Changes in requirements
-
-### Java
-
-With this release, support for Java 8 has been removed, meaning that Java 11 is now the only LTS version of Java that Optimize supports. See the [Supported Environments]($docs$/reference/supported-environments) sections for more information on supported versions.
-
-### Elasticsearch
-
-With this release, Optimize no longer supports Elasticsearch versions 7.5.1, 7.6.0, or 7.7.0. See the [Supported Environments]($docs$/reference/supported-environments) sections for the full range of supported versions.
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.5-to-3.6.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.5-to-3.6.md
deleted file mode 100644
index a9313bc55a3..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.5-to-3.6.md
+++ /dev/null
@@ -1,36 +0,0 @@
----
-id: 3.5-to-3.6
-title: "Update notes (3.5 to 3.6)"
----
-
-Camunda 7 only
-
-:::note Heads Up!
-To update Optimize to version 3.6.0, perform the following steps first: [Migration & Update Instructions](./instructions.md).
-:::
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in the supported environments
-- Any unexpected behavior of Optimize (e.g., due to a new feature)
-
-## Known issues
-
-### Default tenants
-
-If you have [default tenants configured](./../../configuration/system-configuration-platform-7.md) for any connected engine in Optimize,
-user task and flow node reports, as well as branch analysis, may stop showing data after updating to 3.6.0.
-
-This is a known issue that was fixed in the 3.6.3 patch release. You can update from 3.6.0 to 3.6.3; migration from either of these versions to
-3.7.0 will be possible.
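-
-For reference, a default tenant configuration in `environment-config.yaml` looks roughly like this sketch, where the engine alias and tenant values are placeholders:
-
-```yaml
-engines:
-  camunda-bpm:
-    defaultTenant:
-      id: myTenantId
-      name: My Tenant
-```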
-
-## Changes in supported environments
-
-### Camunda 7
-
-Optimize now requires at least Camunda 7 `7.14.0` and supports up to `7.16.0+`. Camunda 7 `7.13.x` is not supported anymore.
-
-See the [Supported Environments]($docs$/reference/supported-environments) sections for the full range of supported versions.
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.6-to-3.7.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.6-to-3.7.md
deleted file mode 100644
index 2f4c858f177..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.6-to-3.7.md
+++ /dev/null
@@ -1,43 +0,0 @@
----
-id: 3.6-to-3.7
-title: "Update notes (3.6 to 3.7.x)"
----
-
-Camunda 7 only
-
-:::note Heads up!
-To update Optimize to version 3.7.x, perform the following steps: [Migration & Update Instructions](./instructions.md).
-:::
-
-The update to 3.7.x can be performed from any 3.6.x release.
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in the supported environments
-- Any unexpected behavior of Optimize (e.g., due to a new feature)
-
-## Known issues
-
-The Optimize 3.7.0 release contains a number of bugs related to dashboard templates, alerts, and the Report Builder.
-
-For details on the issues, refer to the [Optimize 3.7.1 Release Notes](https://jira.camunda.com/secure/ReleaseNote.jspa?projectId=10730&version=17434).
-
-The Optimize 3.7.0 - 3.7.1 releases contain a bug in which Optimize erroneously attempts to import decision instance object variables. This can lead to the decision variable import getting stuck.
-
-For details on the issues, refer to the [Optimize 3.7.2 Release Notes](https://jira.camunda.com/secure/ReleaseNote.jspa?projectId=10730&version=17441).
-
-The Optimize 3.7.0 - 3.7.2 releases contain a bug in which object variables that contain a property with an empty string value cause an exception upon import which can block the import of further variables.
-
-For details on the issue, refer to the [Optimize 3.7.3 Release Notes](https://jira.camunda.com/secure/ReleaseNote.jspa?projectId=10730&version=17452).
-
-We thus recommend updating to 3.7.3 if you are already using 3.7.0, 3.7.1, or 3.7.2, or directly updating to 3.7.3 if you are still running a 3.6.x release.
-
-## New behavior
-
-### Added support for object and list variables
-
-With Optimize 3.7, we've added support for object and list process variables. Variables with type `Object` are now automatically imported and flattened into dedicated "sub variables" for each object property. If you have previously used a variable import plugin to achieve the same, you may disable this plugin after migrating to Optimize 3.7.
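-
-For illustration, an `Object` variable with a hypothetical payload like:
-
-```
-customer: {"name": "Jane", "address": {"city": "Berlin"}}
-```
-
-would be flattened into dedicated sub variables such as `customer.name` and `customer.address.city`.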
-
-Find more information about importing object variables [here](./../../configuration/object-variables.md).
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.7-to-3.8.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.7-to-3.8.md
deleted file mode 100644
index c9ba567ff6d..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.7-to-3.8.md
+++ /dev/null
@@ -1,70 +0,0 @@
----
-id: 3.7-to-3.8
-title: "Update notes (3.7.x to 3.8.x)"
----
-
-:::note Heads up!
-To update Optimize to version 3.8.x, perform the following steps: [Migration & Update Instructions](./instructions.md).
-:::
-
-The update to 3.8.x can be performed from any 3.7.x release.
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in the supported environments
-- Any unexpected behavior of Optimize (for example, due to a new feature)
-- Changes in translation resources
-
-## Known issues
-
-No known issues at the moment.
-
-## Changes in supported environments
-
-### Elasticsearch
-
-While OpenSearch was never officially supported by Optimize, the Elasticsearch client version used up until Optimize 3.7 was also compatible with OpenSearch.
-With this release, the client has been updated to a version that is no longer compatible with OpenSearch, meaning Optimize will also no longer work with OpenSearch.
-
-### Camunda 7
-
-Optimize now requires at least Camunda 7 `7.15.0` and supports up to `7.17.0+`. Camunda 7 `7.14.x` is not supported anymore.
-See the [supported environments]($docs$/reference/supported-environments/#camunda-platform-7--optimize-version-matrix) sections for the full range of supported versions.
-
-## New behavior
-
-Due to a general overhaul in the public API, the authentication to all API requests must now be performed via a `Bearer Token` in the request header. In previous versions, you had two possible ways to authenticate your API requests: by providing the secret as the query parameter `accessToken`, or by providing it in the request header as a `Bearer Token`. If you were using the latter method, no change is necessary and your requests will keep working as usual. If you were using the query parameter method, you will need to change your requests. For more information, see [authentication](../../../../apis-tools/optimize-api/optimize-api-authentication.md).
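-
-For illustration, a sketch of the change, where endpoint, host, and token are placeholders:
-
-```
-# No longer supported: secret as query parameter
-curl 'http://localhost:8090/api/public/report?accessToken=mySecret'
-
-# Required from 3.8 onwards: secret as Bearer Token in the request header
-curl -H 'Authorization: Bearer mySecret' 'http://localhost:8090/api/public/report'
-```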
-
-## Changes in translation files
-
-In case you manage your own translations into different languages, you can find a list below with all the changes that need to be translated for this release.
-
-### Localization file
-
-The following terms have been added to or removed from the localization file (`en.json`) since the last release:
-
-[en.json.diff](../translation-diffs/differences_localization_370_380.diff)
-
-- Lines with a `+` in the beginning mark the addition/update of a term; lines with a `-` mark the removal of a term.
-
-### Text from "What's new" dialogue
-
-For the purposes of translation, find the text for the `What's new` dialog below:
-
-```
-## Set and Track Time-Based Goals
-
-Set data-driven service level agreements (SLAs) on how long all your processes should take so you can quickly identify which processes are underperforming.
-
-## KPI Reports
-
-Create reports and alerts tracking percentages like fully automated instances or incident rate (%), plus SLA statistics on durations like P99 or P95 duration in addition to minimum, median, and maximum.
-
-## Improved UX
-
-Rename variables in plain language, filter out noisy outlier analysis heatmaps, and apply rolling date filters to your dashboards to focus on the most important data.
-
-For more details, review the [blog post](https://camunda.com/blog/2022/04/camunda-optimize-3-8-0-released/).
-```
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.8-to-3.9-preview.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.8-to-3.9-preview.md
deleted file mode 100644
index 5103270519d..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.8-to-3.9-preview.md
+++ /dev/null
@@ -1,58 +0,0 @@
----
-id: 3.8-to-3.9-preview-1
-title: "Update notes (3.8.x to 3.9.x-preview-1)"
----
-
-:::note Heads up!
-To update Optimize to version 3.9.x-preview-1, perform the following steps: [Migration & Update Instructions](./instructions.md).
-:::
-
-The update to 3.9.x-preview-1 can be performed from any 3.8.x release.
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in the supported environments
-- Any unexpected behavior of Optimize (for example, due to a new feature)
-- Changes in translation resources
-
-## Known issues
-
-No known issues at the moment.
-
-## Changes in supported environments
-
-## New behavior
-
-## Changes in translation files
-
-In case you manage your own translations into different languages, you can find a list below with all the changes that need to be translated for this release.
-
-### Localization file
-
-The following terms have been added to or removed from the localization file (`en.json`) since the last release:
-
-[en.json.diff](../translation-diffs/differences_localization_380_390_preview_1.diff)
-
-- Lines with a `+` in the beginning mark the addition/update of a term; lines with a `-` mark the removal of a term.
-
-### Text from "What's new" dialogue
-
-For the purposes of translation, find the text for the `What's new` dialog below:
-
-```
-## Process Overview
-
-See holistic statistics across your entire portfolio of processes with some suggested focus areas for improvement.
-
-## Process Onboarding
-
-Create a dedicated KPI collection and dashboard with one click, then modify your targets and share it with stakeholders.
-
-## KPI Overview
-
-See how all your process KPIs perform in one screen, then identify which processes need the most improvement.
-
-For more details, review the [blog post](https://camunda.com/blog/2022/07/camunda-optimize-3-9-0-preview-released/).
-```
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.9-preview-to-3.9.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.9-preview-to-3.9.md
deleted file mode 100644
index d3e1750be9e..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.9-preview-to-3.9.md
+++ /dev/null
@@ -1,51 +0,0 @@
----
-id: 3.9-preview-1-to-3.9
-title: "Update notes (3.9-preview-x to 3.9.x)"
----
-
-:::note Heads up!
-To update Optimize to version 3.9.x, perform the steps in the [migration and update instructions](./instructions.md).
-:::
-
-The update to 3.9.x can be performed from any 3.8.x or any 3.9.0-preview release.
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in the supported environments
-- Any unexpected behavior of Optimize (for example, due to a new feature)
-- Changes in translation resources
-
-## Known issues
-
-If there are processes in Optimize that currently do not have a process owner assigned and a new process is deployed
-via Web Modeler, an owner may be assigned to one of the previously ownerless processes. This is not critical, as it does not change any permissions, but it is important to understand who receives email notifications for processes. If an owner is set incorrectly, you can change it manually on the processes page.
-This issue is resolved with the 3.9.1 version.
-
-## Changes in supported environments
-
-### Camunda 7
-
-Optimize now requires at least Camunda 7 `7.16.0` and supports up to `7.18.0+`. Camunda 7 `7.15.x` is not supported anymore.
-See the [supported environments]($docs$/reference/supported-environments/#camunda-platform-7--optimize-version-matrix) section for the full range of supported versions.
-
-### Elasticsearch
-
-Optimize now requires at least Elasticsearch `7.13.0`.
-See the [supported environments]($docs$/reference/supported-environments) section for the full range of supported versions.
-
-If you need to update your Elasticsearch cluster, refer to the general [Elasticsearch update guide](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html). Usually, the only thing you need to do is perform a [rolling update](https://www.elastic.co/guide/en/elasticsearch/reference/current/rolling-upgrades.html).
-
-## Changes in translation files
-
-In case you manage your own translations into different languages, you can find a diff below with all the changes that need to be translated for this release.
-
-### Localization file
-
-The following terms have been added to or removed from the localization file `en.json` since the last release:
-
-[en.json.diff](../translation-diffs/differences_localization_390_preview_1_390.diff)
-
-- Lines with a `+` in the beginning mark the addition/update of a term; lines with a `-` mark the removal of a term.
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.9-to-3.10.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.9-to-3.10.md
deleted file mode 100644
index 5b203450a62..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/3.9-to-3.10.md
+++ /dev/null
@@ -1,74 +0,0 @@
----
-id: 3.9-to-3.10
-title: "Update notes (3.9.x to 3.10)"
----
-
-:::note Heads up!
-To update Optimize to version 3.10, perform the steps in the [migration and update instructions](./instructions.md).
-:::
-
-The update to 3.10 can be performed from any 3.9.x release.
-
-Here you will find information about:
-
-- Limitations
-- Known issues
-- Changes in supported environments
-- Changes in behavior (for example, due to a new feature)
-- Changes in translation resources
-
-## Changes in supported environments
-
-### Elasticsearch
-
-Optimize now supports Elasticsearch `8.5` and `8.6`, but it requires at least Elasticsearch `7.16.2`.
-Additionally, when updating to Optimize 3.10.x, please note there are temporary changes in Optimize's Elasticsearch support as detailed below:
-
-| Optimize version | Elasticsearch version |
-| --------------------------------- | -------------------------------- |
-| Optimize 3.10.0 - Optimize 3.10.3 | 7.16.2+, 7.17.0+, 8.5.0+, 8.6.0+ |
-| Optimize 3.10.4 | 7.16.2+, 7.17.0+, 8.7.0+, 8.8.0+ |
-| Optimize 3.10.5 - Optimize 3.10.x | 7.16.2+, 7.17.0+, 8.5.0+, 8.6.0+ |
-
-See the [supported environments]($docs$/reference/supported-environments) section for the full range of supported versions.
-
-If you need to update your Elasticsearch cluster, refer to the general [Elasticsearch update guide](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html). Usually, the only thing you need to do is perform a [rolling update](https://www.elastic.co/guide/en/elasticsearch/reference/current/rolling-upgrades.html).
-
-### Java
-
-From Optimize 3.10.4, the minimum version of Java that Optimize supports is now Java 17. See the [Supported Environments]($docs$/reference/supported-environments) sections for more information on supported versions.
-
-### Plugins
-
-From 3.10.4, Optimize runs with Spring Boot 3. As a result, some plugin interfaces have been updated accordingly. More specifically, the [Engine Rest Filter Plugin](./../../plugins/engine-rest-filter-plugin.md) and the [Single-Sign-On Plugin](./../../plugins/single-sign-on.md) now import jakarta dependencies. If you use these plugins, you will need to adjust your implementation accordingly.
-
-### Logging
-
-In 3.10.4, Optimize's logging configuration format has also been updated. Please review the updated `environment-logback.xml` to make sure your configuration is valid.
-
-## Changes in behavior
-
-### API behavior
-
-Before the 3.10.4 release, the Optimize API would accept requests when the URI contained a trailing slash (`/`). This is no longer the case, and requests containing a trailing slash will no longer be matched to the corresponding API path.
-
-### Configuration changes
-
-In the 3.10 version of Optimize, it is no longer possible to apply custom configuration to the UI header. The following
-configuration options have therefore been removed:
-
-- `ui.header.textColor`
-- `ui.header.pathToLogoIcon`
-- `ui.header.backgroundColor`
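-
-If your configuration still contains these keys, remove them; in `environment-config.yaml` the block looked roughly like this (the values are placeholders):
-
-```yaml
-ui:
-  header:
-    textColor: dark
-    pathToLogoIcon: logo/camunda_icon.svg
-    backgroundColor: '#FFFFFF'
-```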
-
-## Changes in translation files
-
-In case you manage your own translations into different languages, you can find a diff below with all the changes that need to be translated for this release.
-
-### Localization file
-
-The following terms have been added to or removed from the localization file `en.json` since the last release:
-
-[en.json.diff](../translation-diffs/differences_localization_390_3100.diff)
-
-- Lines with a `+` in the beginning mark the addition/update of a term; lines with a `-` mark the removal of a term.
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/img/private_report_access_1_create_collection.png b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/img/private_report_access_1_create_collection.png
deleted file mode 100644
index 212e86b5511..00000000000
Binary files a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/img/private_report_access_1_create_collection.png and /dev/null differ
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/img/private_report_access_2_create_view_permission_mary.png b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/img/private_report_access_2_create_view_permission_mary.png
deleted file mode 100644
index 15001043127..00000000000
Binary files a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/img/private_report_access_2_create_view_permission_mary.png and /dev/null differ
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/img/private_report_access_3_1_copy_report.png b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/img/private_report_access_3_1_copy_report.png
deleted file mode 100644
index f6a77e63a38..00000000000
Binary files a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/img/private_report_access_3_1_copy_report.png and /dev/null differ
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/img/private_report_access_3_2_copy_report.png b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/img/private_report_access_3_2_copy_report.png
deleted file mode 100644
index ff9bdb06f5b..00000000000
Binary files a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/img/private_report_access_3_2_copy_report.png and /dev/null differ
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/img/private_report_access_4_mary_sees_collection.png b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/img/private_report_access_4_mary_sees_collection.png
deleted file mode 100644
index 8ef4dfc1020..00000000000
Binary files a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/img/private_report_access_4_mary_sees_collection.png and /dev/null differ
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/instructions.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-7/instructions.md
deleted file mode 100644
index ebf210b86f1..00000000000
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-7/instructions.md
+++ /dev/null
@@ -1,150 +0,0 @@
----
-id: instructions
-title: "Instructions"
-description: "Find out how to update to a new version of Optimize without losing your reports and dashboards."
----
-
-Optimize releases two new minor versions a year. These documents guide you through the process of migrating your Optimize from one minor version to the next.
-
-If you want to update Optimize by several versions, you cannot do so in a single step; you need to perform the updates in sequential order. For instance, if you want to update from 2.5 to 3.0, you first need to update from 2.5 to 2.6, then from 2.6 to 2.7, and finally from 2.7 to 3.0. The following table shows the recommended update paths to the latest version:
-
-| Update from | Recommended update path to 3.14 |
-| ---------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| 3.14 | You are on the latest version. |
-| 3.0 - 3.13.x | Rolling update to 3.14 |
-| 3.7.3 OpenSearch | There is a direct update path to 3.14 (Fast-Track), see instructions in [section 5](#5-direct-update-from-optimize-373-to-314-for-opensearch-installations) |
-| 3.7.x OpenSearch | 1. Rolling update to 3.7.3 <br/> 2. Direct update to 3.14 |
-| 2.0 - 2.7 | 1. Rolling update to 2.7 <br/> 2. Rolling update from 2.7 to 3.0 |
-| 1.0 - 1.5 | No update possible. Use the latest version directly. |
-
-## Migration instructions
-
-You can migrate from one version of Optimize to the next one without losing data. To migrate to the latest version, please perform the following steps:
-
-:::note Elasticsearch and OpenSearch databases
-All the steps below are applicable to Elasticsearch and OpenSearch installations. To avoid duplication, we will only be referring to `Database` in the following instructions and will explicitly mention when a step is applicable only to Elasticsearch or OpenSearch.
-:::
-
-### 1. Preparation
-
-- Make sure that the database has enough memory. To do that, shut down the database and go to the `config` folder of your distribution. There you should find a file called `jvm.options`. Change the values of the two properties `Xms` and `Xmx` to at least `1g` so that the database has enough memory configured. This configuration looks as follows:
-
-```bash
--Xms1g
--Xmx1g
-```
-
-- Restart the database and make sure that the instance is up and running throughout the entire migration process.
-- You will need to shut down Optimize before starting the migration, resulting in downtime during the entire migration process.
-- Back up your database instance ([ElasticSearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html) / [OpenSearch](https://opensearch.org/docs/latest/tuning-your-cluster/availability-and-recovery/snapshots/snapshot-restore/)) in case something goes wrong during the migration process. This is recommended, but optional.
-- Make sure that you have enough storage available to perform the migration. The migration process may temporarily require up to twice the storage currently used by your database data. (Highly recommended)
-- Back up your `environment-config.yaml` and `environment-logback.xml` located in the `config` folder of the root directory of your current Optimize. (Optional)
-- If you are using Optimize plugins, it might be required to adjust those plugins to the new version. To do this, go to the project where you developed your plugins, increase the project version in Maven to the new Optimize version, and build the plugin again (check out the [plugin guide](../../plugins/plugin-system.md) for the details). Afterwards, add the plugin jar to the `plugin` folder of your new Optimize distribution. (Optional)
-- Start the new Optimize version, as described in the [installation guide](../../install-and-start.md).
-- It is very likely that you configured the logging of Optimize to your needs and therefore you adjusted the `environment-logback.xml` in the `config` folder of the root directory of your **old** Optimize. You can now use the backed up logging configuration and put it in the `config` folder of the **new** Optimize to keep your logging adjustments. (Optional)
-
-### 2. Rolling update to the new database version
-
-You only need to execute this step if you want to update the Elasticsearch (ES) or OpenSearch (OS) version during the update. In case the ES/OS version stays the same, you can skip this step.
-
-The database update is usually performed in a rolling fashion. Read all about how to do the update in the general [Elasticsearch Update Guide](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html) / [OpenSearch Update Guide](https://opensearch.org/docs/latest/install-and-configure/upgrade-opensearch/index/) and consult the [rolling upgrade ES](https://www.elastic.co/guide/en/elasticsearch/reference/current/rolling-upgrades.html) / [rolling upgrade OS](https://opensearch.org/docs/2.17/install-and-configure/upgrade-opensearch/rolling-upgrade/) guide on how to conduct the rolling update.
-
-### 3. Perform the migration
-
-- Go to the [enterprise download page](https://docs.camunda.org/enterprise/download/#camunda-optimize) and download the new version of Optimize you want to update to. For instance, if your current version is Optimize 2.2, you should download version 2.3. Extract the downloaded archive in your preferred directory. The archive contains the Optimize application itself and the executable to update Optimize from your old version to the new version.
-- In the `config` folder of your **current** Optimize version, you have defined all configuration in the `environment-config.yaml` file, e.g. for Optimize to be able to connect to the engine and the database. Copy the old configuration file and place it in the `config` folder of your **new** Optimize distribution. Bear in mind that configuration settings might have changed between versions; the new Optimize may not recognize your adjusted settings, or may complain about outdated settings and refuse to start up. It is best to check the update notes subsections for deprecations.
-
-#### 3.1 Manual update script execution
-
-This approach requires you to manually execute the update script. You can perform this from any machine that has access to your Elasticsearch/OpenSearch cluster.
-
-- Open up a terminal, change to the root directory of your **new** Optimize version, and run the following command: `./upgrade/upgrade.sh` on Linux or `./upgrade/upgrade.bat` on Windows. For OpenSearch installations, please make sure to set the environment variable `CAMUNDA_OPTIMIZE_DATABASE=opensearch` before executing the update script, as sketched after this list.
-- During the execution, the executable will output a warning to ask you to back up your database data. Type `yes` to confirm that you have backed up the data.
-- Feel free to [file a support case](https://camunda.com/services/enterprise-support-guide/) if any errors occur during the migration process.
-- To get more verbose information about the update, you can adjust the logging level as it is described in the [configuration documentation](./../../configuration/logging.md).
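-
-For OpenSearch installations, a manual run might look like this sketch:
-
-```bash
-export CAMUNDA_OPTIMIZE_DATABASE=opensearch
-./upgrade/upgrade.sh
-```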
-
-#### 3.2 Automatic update execution (Optimize >3.2.0)
-
-With the Optimize 3.2.0 release the update can also be executed as part of the Optimize startup. In order to make use of this functionality, the command flag `--upgrade` has to be passed to the Optimize startup script:
-
-```bash
-For UNIX:
-./optimize-startup.sh --upgrade
-
-For Windows:
-./optimize-startup.bat --upgrade
-```
-
-This will run the update prior to starting up Optimize and only then start Optimize.
-
-In Docker environments this can be achieved by overwriting the default command of the docker container (being `./optimize.sh`), e.g. like in the following [docker-compose](https://docs.docker.com/compose/) snippet:
-
-```
-version: '2.4'
-
-services:
- optimize:
- image: registry.camunda.cloud/optimize-ee/optimize:latest
- command: ["./optimize.sh", "--upgrade"]
-```
-
-However, as this may prolong the container boot time significantly, which may conflict with [container status probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) in managed environments like [Kubernetes](https://kubernetes.io/), we recommend using the [init container](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) feature there to run the update:
-
-```
- labels:
- app: optimize
-spec:
- initContainers:
- - name: migration
- image: registry.camunda.cloud/optimize-ee/optimize:latest
- command: ['./upgrade/upgrade.sh', '--skip-warning']
- containers:
- - name: optimize
- image: registry.camunda.cloud/optimize-ee/optimize:latest
-```
-
-### 4. Resume a canceled update
-
-From Optimize 3.3.0 onwards, updates are resumable. If the update process was interrupted, either manually or due to an error, you don't have to restore the database backup and start over; you can simply rerun the update. On resume, previously completed update steps are detected and logged as skipped. In the following log example, **Step 1** was previously completed and is thus skipped:
-
-```
-./upgrade/upgrade.sh
-...
-INFO UpgradeProcedure - Skipping Step 1/2: UpdateIndexStep on index: process-instance as it was found to be previously completed already at: 2020-11-30T16:16:12.358Z.
-INFO UpgradeProcedure - Starting step 2/2: UpdateIndexStep on index: decision-instance
-...
-```
-
-### 5. Direct Update from Optimize 3.7.3 to 3.14 for OpenSearch installations
-
-Optimize 3.14 supports a direct update path from Optimize 3.7.3 to 3.14 for OpenSearch installations (Fast-Track update). In case you are using a previous 3.7.x version, you need to update to 3.7.3 first by following the normal update procedure. To perform the Fast-Track update, follow the steps below:
-
-1. Perform steps 1 and 2 as described above. Please note that the backup step is NOT optional; make sure you perform a full backup of your OpenSearch database before proceeding.
-2. Set the environment variables:
- 1. `CAMUNDA_OPTIMIZE_DATABASE=opensearch`
- 2. `CAMUNDA_OPTIMIZE_OPENSEARCH_HTTP_PORT=` (e.g. 9200)
-3. Since the Fast-Track update is expected to require a significant amount of time, only the manual update script execution is supported. Therefore execute the update script as described in [section 3.1](#31-manual-update-script-execution) with the following additional parameter: `-fastTrack`. Example:
-
- ```bash
- For UNIX:
- ./upgrade/upgrade.sh -fastTrack
-
- For Windows:
- ./upgrade/upgrade.bat -fastTrack
- ```
-
-4. You will be prompted with a confirmation that you wish to perform a direct update from 3.7.3 to 3.14. Type `yes` to confirm you have backed up your data and that you wish to proceed with the Fast-Track update.
-
-### 6. Typical errors
-
-- Using an update script that does not match your version:
-
-```bash
-Schema version saved in Metadata does not match required [2.X.0]
-```
-
-Let's assume you have Optimize 2.1, want to update to 2.3, and use the jar to update from 2.2 to 2.3. This error occurs because that jar expects the database to have schema version 2.2, while yours is still at 2.1. In other words, you downloaded the wrong Optimize artifact, which contains the wrong update jar version.
-
-## Force reimport of engine data in Optimize (Elasticsearch installations only)
-
-Features that were added with the new Optimize version may not work for data that was imported with the old version of Optimize. If you want to use new features on the old data, you can force a reimport of the engine data to Optimize. See [the reimport guide](./../../reimport.md) on how to perform such a reimport.
diff --git a/optimize/self-managed/optimize-deployment/migration-update/camunda-8/3.9-to-3.10.md b/optimize/self-managed/optimize-deployment/migration-update/camunda-8/3.9-to-3.10.md
index 862e36c8211..98c1b78c5fd 100644
--- a/optimize/self-managed/optimize-deployment/migration-update/camunda-8/3.9-to-3.10.md
+++ b/optimize/self-managed/optimize-deployment/migration-update/camunda-8/3.9-to-3.10.md
@@ -49,7 +49,7 @@ From Optimize 3.10.4, the minimum version of Java that Optimize supports is now
### Plugins
-From 3.10.4, Optimize runs with Spring Boot 3. As a result, some plugin interfaces have been updated accordingly. More specifically, the [Engine Rest Filter Plugin](./../../plugins/engine-rest-filter-plugin.md) and the [Single-Sign-On Plugin](./../../plugins/single-sign-on.md) now import jakarta dependencies. If you use these plugins, you will need to adjust your implementation accordingly.
+From 3.10.4, Optimize runs with Spring Boot 3. As a result, some plugin interfaces have been updated accordingly. More specifically, the [Engine Rest Filter Plugin](#) and the [Single-Sign-On Plugin](#) now import jakarta dependencies. If you use these plugins, you will need to adjust your implementation accordingly.
### Logging
diff --git a/optimize/self-managed/optimize-deployment/plugins/businesskey-import-plugin.md b/optimize/self-managed/optimize-deployment/plugins/businesskey-import-plugin.md
deleted file mode 100644
index 7824bf222be..00000000000
--- a/optimize/self-managed/optimize-deployment/plugins/businesskey-import-plugin.md
+++ /dev/null
@@ -1,52 +0,0 @@
----
-id: businesskey-import-plugin
-title: "Business key import customization"
-description: "Adapt the process instance import so you can customize the associated business keys."
----
-
-Camunda 7 only
-
-Before implementing the plugin, make sure that you have [set up your environment](./plugin-system.md#set-up-your-environment).
-
-This feature enables you to customize business keys during the process instance import, e.g. if your business keys contain sensitive information that requires anonymization.
-
-The Optimize plugin system contains the following interface:
-
-```java
-public interface BusinessKeyImportAdapter {
-
-  String adaptBusinessKeys(String businessKey);
-}
-```
-
-Implement this interface to adjust the business keys of the process instances to be imported. The method receives the business key of a process instance that would be imported if no further action is performed; the returned string is the customized business key of the process instance that will be imported.
-
-The following shows an example of a customization of business keys during the process instance import in the package `optimize.plugin` where every business key is set to 'foo'.
-
-```java
-package org.mycompany.optimize.plugin;
-
-import org.camunda.optimize.plugin.importing.businesskey.BusinessKeyImportAdapter;
-
-public class MyCustomBusinessKeyImportAdapter implements BusinessKeyImportAdapter {
-
-  @Override
-  public String adaptBusinessKeys(String businessKey) {
-    return "foo";
-  }
-}
-```
-
-Now, when `MyCustomBusinessKeyImportAdapter`, packaged as a `jar` file, is added to Optimize's `plugin` folder, we just have to add the following property to the `environment-config.yaml` file:
-
-```yaml
-plugin:
- businessKeyImport:
- # Look in the given base package list for businesskey import adaption plugins.
- # If empty, the import is not influenced.
- basePackages: ["org.mycompany.optimize.plugin"]
-```
-
-For more information on how this plugin works, have a look at the [Optimize Examples Repository](https://github.com/camunda/camunda-optimize-examples#getting-started-with-business-key-import-plugins).
diff --git a/optimize/self-managed/optimize-deployment/plugins/decision-import-plugin.md b/optimize/self-managed/optimize-deployment/plugins/decision-import-plugin.md
deleted file mode 100644
index 6b26a796bf8..00000000000
--- a/optimize/self-managed/optimize-deployment/plugins/decision-import-plugin.md
+++ /dev/null
@@ -1,84 +0,0 @@
----
-id: decision-import-plugin
-title: "Decision inputs and outputs import customization"
-description: "Enrich or filter the Decision inputs and outputs so you can customize which and how these are imported to Optimize."
----
-
-Camunda 7 only
-
-Before implementing the plugin, make sure that you have [set up your environment](./plugin-system.md#set-up-your-environment).
-
-This feature enables you to enrich, modify, or filter the decision input and output instances, e.g., if instances in Camunda contain IDs of instances in another database and you would like to resolve those references to the actual values.
-
-The plugin system contains the following interfaces:
-
-```java
-public interface DecisionInputImportAdapter {
-
-  List<PluginDecisionInputDto> adaptInputs(List<PluginDecisionInputDto> inputs);
-}
-```
-
-```java
-public interface DecisionOutputImportAdapter {
-
-  List<PluginDecisionOutputDto> adaptOutputs(List<PluginDecisionOutputDto> outputs);
-}
-```
-
-Implement these to adjust the input and output instances to be imported. The methods take a list of instances that would be imported if no further action is performed as parameter. The returned list is the customized list with the enriched/filtered instances that will be imported. To create new instances, you can use the `PluginDecisionInputDto` and `PluginDecisionOutputDto` classes as data transfer object (DTO), which are also contained in the plugin system.
-
-:::note
-All class members need to be set; otherwise, the instance is ignored, as missing values may lead to problems during data analysis.
-
-The data from the engine is imported in batches. This means the `adaptInputs`/`adaptOutputs` method is called once per batch rather than once for all data. For instance, if you have 100,000 decision instances in total and the batch size is 10,000, the plugin function will be called 10 times.
-:::
-
-Next, package your plugin into a `jar` file and then add the `jar` file to the `plugin` folder of your Optimize directory. Finally, add the name of the base package of your custom `DecisionOutputImportAdapter/DecisionInputImportAdapter` to the `environment-config.yaml` file:
-
-```yaml
-plugin:
-  decisionInputImport:
-    # Look in the given base package list for decision input import adaption plugins.
-    # If empty, the import is not influenced.
-    basePackages: ["org.mycompany.optimize.plugin"]
-  decisionOutputImport:
-    # Look in the given base package list for decision output import adaption plugins.
-    # If empty, the import is not influenced.
-    basePackages: ["org.mycompany.optimize.plugin"]
-```
-
-The following shows an example of a customization of the decision input import in the package `org.mycompany.optimize.plugin`, where every string input is assigned the value 'foo':
-
-```java
-package org.mycompany.optimize.plugin;
-
-import org.camunda.optimize.plugin.importing.variable.DecisionInputImportAdapter;
-import org.camunda.optimize.plugin.importing.variable.PluginDecisionInputDto;
-
-import java.util.List;
-
-public class SetAllStringInputsToFoo implements DecisionInputImportAdapter {
-
-  @Override
-  public List<PluginDecisionInputDto> adaptInputs(List<PluginDecisionInputDto> inputs) {
-    for (PluginDecisionInputDto input : inputs) {
-      if (input.getType().equalsIgnoreCase("string")) {
-        input.setValue("foo");
-      }
-    }
-    return inputs;
-  }
-}
-```
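-
-The output side works analogously. The following is a minimal sketch of a `DecisionOutputImportAdapter` that filters out outputs without a value; it assumes `PluginDecisionOutputDto` exposes a `getValue()` accessor:
-
-```java
-package org.mycompany.optimize.plugin;
-
-import org.camunda.optimize.plugin.importing.variable.DecisionOutputImportAdapter;
-import org.camunda.optimize.plugin.importing.variable.PluginDecisionOutputDto;
-
-import java.util.List;
-import java.util.stream.Collectors;
-
-public class DropEmptyOutputs implements DecisionOutputImportAdapter {
-
-  @Override
-  public List<PluginDecisionOutputDto> adaptOutputs(List<PluginDecisionOutputDto> outputs) {
-    // Outputs removed from the list here are simply not imported.
-    return outputs.stream()
-        .filter(output -> output.getValue() != null)
-        .collect(Collectors.toList());
-  }
-}
-```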
-
-Now, when `SetAllStringInputsToFoo`, packaged as a `jar` file, is added to the `plugin` folder, we just have to add the following property to the `environment-config.yaml` file to make the plugin work:
-
-```yaml
-plugin:
-  decisionInputImport:
-    # Look in the given base package list for decision input import adaption plugins.
-    # If empty, the import is not influenced.
-    basePackages: ["org.mycompany.optimize.plugin"]
-```
-
-For more information and example implementations, have a look at the [Optimize Examples Repository](https://github.com/camunda/camunda-optimize-examples#getting-started-with-decision-import-plugins).
diff --git a/optimize/self-managed/optimize-deployment/plugins/elasticsearch-header.md b/optimize/self-managed/optimize-deployment/plugins/elasticsearch-header.md
deleted file mode 100644
index 4c19fb1047c..00000000000
--- a/optimize/self-managed/optimize-deployment/plugins/elasticsearch-header.md
+++ /dev/null
@@ -1,61 +0,0 @@
----
-id: elasticsearch-header
-title: "Elasticsearch header"
-description: "Register your own hook into the Optimize Elasticsearch client to add custom headers to requests."
----
-
-Camunda 7 only
-
-Before implementing the plugin, make sure that you have [set up your environment](./plugin-system.md#set-up-your-environment).
-
-This feature allows you to register your own hook into the Optimize Elasticsearch client so that custom headers are added to all requests made to Elasticsearch. The plugin is invoked before every request, so different headers and values can be added per request. This plugin is also loaded during updates and reimports.
-
-For that, the Optimize plugin system provides the following interface:
-
-```java
-public interface ElasticsearchCustomHeaderSupplier {
-
- CustomHeader getElasticsearchCustomHeader();
-}
-```
-
-Implement this interface and return the custom header you would like to add to Elasticsearch requests. The `CustomHeader` class has a single constructor taking two arguments, as follows:
-
-```java
-public CustomHeader(String headerName, String headerValue)
-```
-
-The following example returns an `Authorization` header that will be added to every Elasticsearch request; the two private helpers are placeholders for your own token handling:
-
-```java
-package com.example.optimize.elasticsearch.headers;
-
-import org.camunda.optimize.plugin.elasticsearch.CustomHeader;
-import org.camunda.optimize.plugin.elasticsearch.ElasticsearchCustomHeaderSupplier;
-
-public class AddAuthorizationHeaderPlugin implements ElasticsearchCustomHeaderSupplier {
-
-  private String currentToken;
-
-  @Override
-  public CustomHeader getElasticsearchCustomHeader() {
-    if (currentToken == null || currentTokenExpiresWithinFifteenMinutes()) {
-      currentToken = fetchNewToken();
-    }
-    return new CustomHeader("Authorization", currentToken);
-  }
-
-  private boolean currentTokenExpiresWithinFifteenMinutes() {
-    // Placeholder: check the expiry of the currently cached token here.
-    return false;
-  }
-
-  private String fetchNewToken() {
-    // Placeholder: obtain a fresh token, e.g. from your identity provider.
-    return "Bearer <token>";
-  }
-}
-```
-
-Similar to the other plugins' setup, you have to package your plugin in a `jar`, add it to Optimize's `plugin` folder, and make Optimize find it by adding the following configuration to `environment-config.yaml`:
-
-```yaml
-plugin:
-  elasticsearchCustomHeader:
-    # Look in the given base package list for Elasticsearch custom header fetching plugins.
-    # If empty, ES requests are not influenced.
-    basePackages: ["com.example.optimize.elasticsearch.headers"]
-```
-
-For more information and example implementations, have a look at the [Optimize Examples Repository](https://github.com/camunda/camunda-optimize-examples#getting-started-with-elasticsearch-header-plugins).
diff --git a/optimize/self-managed/optimize-deployment/plugins/engine-rest-filter-plugin.md b/optimize/self-managed/optimize-deployment/plugins/engine-rest-filter-plugin.md
deleted file mode 100644
index 3ab8fc6c08f..00000000000
--- a/optimize/self-managed/optimize-deployment/plugins/engine-rest-filter-plugin.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-id: engine-rest-filter-plugin
-title: "Engine REST filter"
-description: "Register your own REST filter that is called for every REST call to the engine."
----
-
-Camunda 7 only
-
-Before implementing the plugin, make sure that you have [set up your environment](./plugin-system.md#set-up-your-environment).
-
-This feature allows you to register your own filter that is called for every REST call to one of the configured process engines.
-For that, the Optimize plugin system provides the following interface:
-
-```java
-public interface EngineRestFilter {
-
- void filter(ClientRequestContext requestContext, String engineAlias, String engineName) throws IOException;
-}
-```
-
-Implement this interface to adjust the JAX-RS client request, which is represented by `requestContext`, sent to the process engine's REST API.
-If the modification depends on the process engine, you can analyze the value of `engineAlias` and/or `engineName` to decide what adjustment is needed.
-
-The following example shows a filter that simply adds a custom header to every REST call:
-
-```java
-package com.example.optimize.enginerestplugin;
-
-import java.io.IOException;
-
-import jakarta.ws.rs.client.ClientRequestContext;
-
-import org.camunda.optimize.plugin.engine.rest.EngineRestFilter;
-
-public class AddCustomTokenFilter implements EngineRestFilter {
-
- @Override
- public void filter(ClientRequestContext requestContext, String engineAlias, String engineName) throws IOException {
- requestContext.getHeaders().add("Custom-Token", "SomeCustomToken");
- }
-
-}
-```
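-
-If the adjustment should only apply to some engines, branch on the `engineAlias` or `engineName` parameter. The following minimal sketch adds a header only for one engine; the alias `payroll-engine` and the `X-Tenant` header are assumptions for illustration:
-
-```java
-package com.example.optimize.enginerestplugin;
-
-import java.io.IOException;
-
-import jakarta.ws.rs.client.ClientRequestContext;
-
-import org.camunda.optimize.plugin.engine.rest.EngineRestFilter;
-
-public class PerEngineHeaderFilter implements EngineRestFilter {
-
-  @Override
-  public void filter(ClientRequestContext requestContext, String engineAlias, String engineName) throws IOException {
-    // Only requests to the engine configured under the alias "payroll-engine" are modified.
-    if ("payroll-engine".equals(engineAlias)) {
-      requestContext.getHeaders().add("X-Tenant", "payroll");
-    }
-  }
-}
-```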
-
-Similar to other plugins, you have to package your plugin in a `jar`, add it to the `plugin` folder, and enable Optimize to find it by adding the following configuration to `environment-config.yaml`:
-
-```yaml
-plugin:
-  engineRestFilter:
-    # Look in the given base package list for engine rest filter plugins.
-    # If empty, the REST calls are not influenced.
-    basePackages: ["com.example.optimize.enginerestplugin"]
-```
diff --git a/optimize/self-managed/optimize-deployment/plugins/plugin-system.md b/optimize/self-managed/optimize-deployment/plugins/plugin-system.md
deleted file mode 100644
index 4273e7c61e0..00000000000
--- a/optimize/self-managed/optimize-deployment/plugins/plugin-system.md
+++ /dev/null
@@ -1,124 +0,0 @@
----
-id: plugin-system
-title: "Optimize plugin system"
-description: "Explains the principle of plugins in Optimize and how they can be added."
----
-
-Camunda 7 only
-
-The plugin system allows you to adapt the behavior of Optimize, e.g., to decide which kind of data should be analyzed and to tackle technical issues.
-
-Have a look at the [Optimize Examples Repository](https://github.com/camunda/camunda-optimize-examples) to see some use cases for the plugin system and how plugins can be implemented and used.
-
-## Set up your environment
-
-First, add the Optimize plugin to your project via maven:
-
-```xml
-<dependency>
-  <groupId>org.camunda.optimize</groupId>
-  <artifactId>plugin</artifactId>
-  <version>{{< currentVersionAlias >}}</version>
-</dependency>
-```
-
-:::note
-It is important to use the same plugin environment version as the Optimize version you plan to use.
-Optimize rejects plugins that are built with different Optimize versions to avoid compatibility problems.
-This also means that to update to newer Optimize versions it is necessary to build the plugin again with the new version.
-:::
-
-To tell Maven where to find the plugin environment, add the following repository to your project:
-
-```xml
-<repositories>
-  <repository>
-    <id>camunda-bpm-nexus</id>
-    <name>camunda-bpm-nexus</name>
-    <url>https://artifacts.camunda.com/artifactory/camunda-optimize/</url>
-  </repository>
-</repositories>
-```
-
-:::note
-To make this work, you need to add your nexus credentials and the server to your `settings.xml`.
-:::
-
-Creating an uber `jar` is required so Optimize can load third-party dependencies and validate the Optimize version the plugin was built against.
-You can add the following to your project:
-
-```xml
-<build>
-  <defaultGoal>install</defaultGoal>
-  <plugins>
-    <plugin>
-      <groupId>org.apache.maven.plugins</groupId>
-      <artifactId>maven-assembly-plugin</artifactId>
-      <version>3.1.0</version>
-      <executions>
-        <execution>
-          <phase>package</phase>
-          <goals>
-            <goal>single</goal>
-          </goals>
-          <configuration>
-            <finalName>${project.artifactId}</finalName>
-            <descriptorRefs>
-              <descriptorRef>jar-with-dependencies</descriptorRef>
-            </descriptorRefs>
-          </configuration>
-        </execution>
-      </executions>
-    </plugin>
-  </plugins>
-</build>
-```
-
-:::note
-By default, Optimize loads plugin classes isolated from the classes used in Optimize.
-This allows you to use library versions for the plugin that differ from those used in Optimize.
-:::
-
-If you want to use the provided Optimize dependencies instead, it is possible to exclude them from
-the uber `jar` by setting the scope of those dependencies to `provided`. Then, Optimize does not load them from the plugin.
-This might have side effects if the version used in the plugin differs from the one provided by Optimize.
-To get an overview of what is already provided by Optimize, have a look at
-the [third-party libraries]($docs$/reference/dependencies).
-
-## Debug your plugin
-
-To start Optimize in debug mode, execute the Optimize start script with a debug parameter.
-
-On Unix systems, this could look like the following:
-
-- For the demo distribution:
-
-```
-./optimize-demo.sh --debug
-```
-
-- For the production distribution:
-
-```
-./optimize-startup.sh --debug
-```
-
-On a Windows system, this could look like the following:
-
-- For the demo distribution:
-
-```
-.\optimize-demo.bat --debug
-```
-
-- For the production distribution:
-
-```
-.\optimize-startup.bat --debug
-```
-
-By default, this opens a debug port on 9999. Once Optimize is running in debug mode, open the project where you implemented the plugin in your favorite IDE and connect to that port.
-
-To change the default debug port, have a look into `optimize-startup.sh` on Linux/Mac or `optimize-startup.bat` on Windows systems. There, you should find a variable called `DEBUG_PORT` which allows you to customize the port.
diff --git a/optimize/self-managed/optimize-deployment/plugins/single-sign-on.md b/optimize/self-managed/optimize-deployment/plugins/single-sign-on.md
deleted file mode 100644
index 4d2439a6918..00000000000
--- a/optimize/self-managed/optimize-deployment/plugins/single-sign-on.md
+++ /dev/null
@@ -1,59 +0,0 @@
----
-id: single-sign-on
-title: "Single sign on"
-description: "Register your own hook into the Optimize authentication system such that you can integrate Optimize with your single sign on system."
----
-
-Camunda 7 only
-
-Before implementing the plugin, make sure that you have [set up your environment](./plugin-system.md#set-up-your-environment).
-
-This feature allows you to register your own hook into the Optimize authentication system so that you can integrate Optimize with your single sign-on system. This allows you to skip logging in via the Optimize interface.
-
-For that, the Optimize plugin system provides the following interface:
-
-```java
-public interface AuthenticationExtractor {
-
- AuthenticationResult extractAuthenticatedUser(HttpServletRequest servletRequest);
-}
-```
-
-Implement this interface to extract your custom authentication header from the servlet request, which is represented by `servletRequest`.
-With the given request you are able to extract your information both from the request header and from the request cookies.
-
-The following example extracts a header named `user` and, if the header exists, authenticates the username from the header:
-
-```java
-package com.example.optimize.security.authentication;
-
-import org.camunda.optimize.plugin.security.authentication.AuthenticationExtractor;
-import org.camunda.optimize.plugin.security.authentication.AuthenticationResult;
-
-import jakarta.servlet.http.HttpServletRequest;
-
-public class AutomaticallySignInUserFromHeaderPlugin implements AuthenticationExtractor {
-
- @Override
- public AuthenticationResult extractAuthenticatedUser(HttpServletRequest servletRequest) {
- String userToAuthenticate = servletRequest.getHeader("user");
- AuthenticationResult result = new AuthenticationResult();
- result.setAuthenticatedUser(userToAuthenticate);
- result.setAuthenticated(userToAuthenticate != null);
- return result;
- }
-}
-```
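-
-Because the full `HttpServletRequest` is available, cookies work just as well as headers. The following minimal sketch authenticates the user from a cookie; the cookie name `optimize-user` is an assumption and should match whatever your SSO system sets:
-
-```java
-package com.example.optimize.security.authentication;
-
-import org.camunda.optimize.plugin.security.authentication.AuthenticationExtractor;
-import org.camunda.optimize.plugin.security.authentication.AuthenticationResult;
-
-import jakarta.servlet.http.Cookie;
-import jakarta.servlet.http.HttpServletRequest;
-
-public class SignInUserFromCookiePlugin implements AuthenticationExtractor {
-
-  @Override
-  public AuthenticationResult extractAuthenticatedUser(HttpServletRequest servletRequest) {
-    AuthenticationResult result = new AuthenticationResult();
-    result.setAuthenticated(false);
-    if (servletRequest.getCookies() != null) {
-      for (Cookie cookie : servletRequest.getCookies()) {
-        // "optimize-user" is a hypothetical cookie name set by your SSO system.
-        if ("optimize-user".equals(cookie.getName())) {
-          result.setAuthenticatedUser(cookie.getValue());
-          result.setAuthenticated(true);
-          break;
-        }
-      }
-    }
-    return result;
-  }
-}
-```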
-
-Similar to the other plugins' setup, you have to package your plugin in a `jar`, add it to Optimize's `plugin` folder, and make Optimize find it by adding the following configuration to `environment-config.yaml`:
-
-```yaml
-plugin:
-  authenticationExtractor:
-    # Looks in the given base package list for authentication extractor plugins.
-    # If empty, the standard Optimize authentication mechanism is used.
-    basePackages: ["com.example.optimize.security.authentication"]
-```
-
-For more information and example implementations, have a look at the [Optimize Examples Repository](https://github.com/camunda/camunda-optimize-examples#getting-started-with-sso-plugins).
diff --git a/optimize/self-managed/optimize-deployment/plugins/variable-import-plugin.md b/optimize/self-managed/optimize-deployment/plugins/variable-import-plugin.md
deleted file mode 100644
index e049c30d77c..00000000000
--- a/optimize/self-managed/optimize-deployment/plugins/variable-import-plugin.md
+++ /dev/null
@@ -1,65 +0,0 @@
----
-id: variable-import-plugin
-title: "Variable import customization"
-description: "Enrich or filter the variable import so you can customize which and how variables are imported to Optimize."
----
-
-Camunda 7 only
-
-Before implementing the plugin, make sure that you have [set up your environment](./plugin-system.md#set-up-your-environment).
-
-This feature enables you to enrich or filter the variable import, e.g., if variables in Camunda contain IDs of variables in another database and you would like to resolve those references to the actual values.
-
-The Optimize plugin system contains the following interface:
-
-```java
-public interface VariableImportAdapter {
-
-  List<PluginVariableDto> adaptVariables(List<PluginVariableDto> variables);
-}
-```
-
-Implement this interface to adjust the variables to be imported. The method receives the list of variables that would be imported if no further action is performed; the returned list is the customized list with the enriched/filtered variables that will be imported. To create new variable instances, you can use the `PluginVariableDto` class as a data transfer object (DTO), which is also contained in the plugin system.
-
-:::note
-All DTO class members need to be set; otherwise, the variable is ignored, as missing values may lead to problems during data analysis.
-
-The data from the engine is imported in batches. This means the `adaptVariables` method is called once per batch rather than once for all data. For instance, if you have 100,000 variables in total and the batch size is 10,000, the plugin function will be called 10 times.
-:::
-
-The following shows an example of a customization of the variable import in the package `org.mycompany.optimize.plugin`, where every string variable is assigned the value 'foo':
-
-```java
-package org.mycompany.optimize.plugin;
-
-import org.camunda.optimize.plugin.importing.variable.PluginVariableDto;
-import org.camunda.optimize.plugin.importing.variable.VariableImportAdapter;
-
-import java.util.List;
-
-public class MyCustomVariableImportAdapter implements VariableImportAdapter {
-
-  @Override
-  public List<PluginVariableDto> adaptVariables(List<PluginVariableDto> variables) {
-    for (PluginVariableDto pluginVariableDto : variables) {
-      if (pluginVariableDto.getType().equalsIgnoreCase("string")) {
-        pluginVariableDto.setValue("foo");
-      }
-    }
-    return variables;
-  }
-}
-```
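-
-Enriching the import with additional variables follows the same pattern: create a fully populated `PluginVariableDto` and add it to the returned list. The following is a minimal sketch; the variable names and the `lookUpSegment` helper are hypothetical, and the setters shown are assumed to match the members of `PluginVariableDto`. Copying the context members from an existing variable keeps the new instance complete, so it is not ignored:
-
-```java
-package org.mycompany.optimize.plugin;
-
-import org.camunda.optimize.plugin.importing.variable.PluginVariableDto;
-import org.camunda.optimize.plugin.importing.variable.VariableImportAdapter;
-
-import java.util.ArrayList;
-import java.util.List;
-import java.util.UUID;
-
-public class AddDerivedVariableAdapter implements VariableImportAdapter {
-
-  @Override
-  public List<PluginVariableDto> adaptVariables(List<PluginVariableDto> variables) {
-    List<PluginVariableDto> result = new ArrayList<>(variables);
-    for (PluginVariableDto original : variables) {
-      if ("customerId".equals(original.getName())) {
-        PluginVariableDto derived = new PluginVariableDto();
-        derived.setId(UUID.randomUUID().toString());
-        derived.setName("customerSegment");
-        derived.setType("String");
-        derived.setValue(lookUpSegment(original.getValue()));
-        // Copy the remaining members from the source variable so the
-        // new instance is complete and not ignored during import.
-        derived.setTimestamp(original.getTimestamp());
-        derived.setVersion(original.getVersion());
-        derived.setEngineAlias(original.getEngineAlias());
-        derived.setProcessDefinitionKey(original.getProcessDefinitionKey());
-        derived.setProcessDefinitionId(original.getProcessDefinitionId());
-        derived.setProcessInstanceId(original.getProcessInstanceId());
-        result.add(derived);
-      }
-    }
-    return result;
-  }
-
-  private String lookUpSegment(String customerId) {
-    // Placeholder: resolve the segment from your own data source.
-    return "gold";
-  }
-}
-```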
-
-Now, when `MyCustomVariableImportAdapter`, packaged as a `jar` file, is added to Optimize's `plugin` folder, we just have to add the following property to the `environment-config.yaml` file to make the plugin work:
-
-```yaml
-plugin:
-  variableImport:
-    # Look in the given base package list for variable import adaption plugins.
-    # If empty, the import is not influenced.
-    basePackages: ["org.mycompany.optimize.plugin"]
-```
-
-For more information and example implementations, have a look at the [Optimize Examples Repository](https://github.com/camunda/camunda-optimize-examples#getting-started-with-variable-import-plugins).
diff --git a/optimize/self-managed/optimize-deployment/reimport.md b/optimize/self-managed/optimize-deployment/reimport.md
deleted file mode 100644
index 5b1007df00e..00000000000
--- a/optimize/self-managed/optimize-deployment/reimport.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-id: reimport
-title: "Camunda engine data reimport"
-description: "Find out how to reimport Camunda engine data without losing your reports and dashboards."
----
-
-Camunda 7 only
-
-There are cases where you might want to remove all Camunda 7 engine data that Optimize has imported from connected Camunda engines, but don't want to lose Optimize entities such as collections, reports, or dashboards you created.
-
-:::note Warning!
-Triggering a reimport causes the current data imported from the engine to be deleted and a new import cycle to be started. That also means that data which has already been removed from the engine (e.g. using the [history cleanup feature](https://docs.camunda.org/manual/latest/user-guide/process-engine/history/#history-cleanup)) is irreversibly lost.
-
-When triggering a reimport, all existing event-based processes get unpublished and reset to the `mapped` state. This is due to the fact that event-based processes may include Camunda engine data, yet the reimport does not take into account which sources event-based processes are actually based on and as such clears the data for all of them.
-
-You then have to manually publish event-based processes after you have restarted Optimize.
-:::
-
-:::note
-Engine data reimport is only available when using Optimize with Elasticsearch as a database.
-:::
-
-To reimport engine data, perform the following steps:
-
-1. Stop Optimize, but keep Elasticsearch running (hint: to start only Elasticsearch without Optimize, you can use the `elasticsearch-startup.sh` or `elasticsearch-startup.bat` script).
-2. From the Optimize installation root, run `./reimport/reimport.sh` on Linux or `reimport/reimport.bat` on Windows and wait for it to finish.
-
- - In Docker environments, you can override the command the container executes on start to call the reimport script, e.g. in [docker-compose](https://docs.docker.com/compose/) this could look like the following:
-
- ```
- version: '2.4'
-
- services:
-   optimize:
-     image: registry.camunda.cloud/optimize-ee/optimize:latest
-     command: ["./reimport/reimport.sh"]
- ```
-
-3. Start Optimize again. Optimize will now import all the engine data from scratch.
-4. If you made use of event-based processes, you will have to manually publish them again.
diff --git a/optimize/self-managed/optimize-deployment/version-policy.md b/optimize/self-managed/optimize-deployment/version-policy.md
index 584c8e9325e..c4fed17fcad 100644
--- a/optimize/self-managed/optimize-deployment/version-policy.md
+++ b/optimize/self-managed/optimize-deployment/version-policy.md
@@ -4,6 +4,8 @@ title: "Version policy"
description: "Learn about the versioning policy for Camunda Optimize."
---
+<!-- Does this need to be separate from the [C8 release policy](https://docs.camunda.io/docs/reference/release-policy/)? -->
+
Camunda Optimize versions are denoted as X.Y.Z as well as by an optional [pre-release](https://semver.org/spec/v2.0.0.html#spec-item-9) version being either '-alpha-[0-9]' or '-preview-[0-9]'. X is the [major version](https://semver.org/spec/v2.0.0.html#spec-item-4), Y is the [minor version](https://semver.org/spec/v2.0.0.html#spec-item-7), Z is the [patch version](https://semver.org/spec/v2.0.0.html#spec-item-6) as defined by the [Semantic Versioning 2.0.0](https://semver.org/spec/v2.0.0.html) specification.
## Release cadence
diff --git a/optimize_sidebars.js b/optimize_sidebars.js
index fc2214ed09e..ce96e9a90c3 100644
--- a/optimize_sidebars.js
+++ b/optimize_sidebars.js
@@ -376,10 +376,6 @@ module.exports = {
"Defining templates",
"components/modeler/desktop-modeler/element-templates/defining-templates/"
),
- docsLink(
- "Defining templates in Camunda 7",
- "components/modeler/desktop-modeler/element-templates/c7-defining-templates/"
- ),
docsLink(
"Additional resources",
"components/modeler/desktop-modeler/element-templates/additional-resources/"
@@ -1234,7 +1230,6 @@ module.exports = {
],
},
"components/userguide/creating-reports",
- "components/userguide/combined-process-reports",
"components/userguide/process-KPIs",
{
@@ -1275,18 +1270,9 @@ module.exports = {
],
},
- {
- "Decision analysis": [
- "components/userguide/decision-analysis/decision-analysis-overview",
- "components/userguide/decision-analysis/decision-report",
- "components/userguide/decision-analysis/decision-filter",
- ],
- },
-
{
"Additional features": [
"components/userguide/additional-features/alerts",
- "components/userguide/additional-features/event-based-processes",
"components/userguide/additional-features/export-import",
"components/userguide/additional-features/footer",
"components/userguide/additional-features/variable-labeling",
@@ -1418,51 +1404,6 @@ module.exports = {
),
],
},
-
- {
- "Camunda 7 specific": [
- docsLink(
- "Deciding about your Camunda 7 stack",
- "components/best-practices/architecture/deciding-about-your-stack-c7/"
- ),
- docsLink(
- "Sizing your Camunda 7 environment",
- "components/best-practices/architecture/sizing-your-environment-c7/"
- ),
- docsLink(
- "Invoking services from a Camunda 7 process",
- "components/best-practices/development/invoking-services-from-the-process-c7/"
- ),
- docsLink(
- "Understanding Camunda 7 transaction handling",
- "components/best-practices/development/understanding-transaction-handling-c7/"
- ),
- docsLink(
- "Testing process definitions in Camunda 7",
- "components/best-practices/development/testing-process-definitions-c7/"
- ),
- docsLink(
- "Routing events to processes in Camunda 7",
- "components/best-practices/development/routing-events-to-processes-c7/"
- ),
- docsLink(
- "Operating Camunda 7",
- "components/best-practices/operations/operating-camunda-c7/"
- ),
- docsLink(
- "Performance tuning Camunda 7",
- "components/best-practices/operations/performance-tuning-camunda-c7/"
- ),
- docsLink(
- "Securing Camunda 7",
- "components/best-practices/operations/securing-camunda-c7/"
- ),
- docsLink(
- "Extending human task management in Camunda 7",
- "components/best-practices/architecture/extending-human-task-management-c7/"
- ),
- ],
- },
],
},
],
@@ -1832,10 +1773,6 @@ module.exports = {
{
DecisionDefinition: [
- docsLink(
- "Search decision definitions",
- "apis-tools/operate-api/specifications/search-7/"
- ),
docsLink(
"Get decision definition by key",
"apis-tools/operate-api/specifications/by-key-6/"
@@ -1969,7 +1906,6 @@ module.exports = {
"apis-tools/optimize-api/report/get-data-export",
],
},
- "apis-tools/optimize-api/event-ingestion",
"apis-tools/optimize-api/external-variable-ingestion",
"apis-tools/optimize-api/health-readiness",
"apis-tools/optimize-api/import-entities",
@@ -2971,40 +2907,18 @@ module.exports = {
"System configuration": [
"self-managed/optimize-deployment/configuration/system-configuration",
"self-managed/optimize-deployment/configuration/system-configuration-platform-8",
- "self-managed/optimize-deployment/configuration/system-configuration-platform-7",
- "self-managed/optimize-deployment/configuration/event-based-process-configuration",
],
},
"self-managed/optimize-deployment/configuration/logging",
- "self-managed/optimize-deployment/configuration/optimize-license",
"self-managed/optimize-deployment/configuration/security-instructions",
"self-managed/optimize-deployment/configuration/shared-elasticsearch-cluster",
"self-managed/optimize-deployment/configuration/history-cleanup",
"self-managed/optimize-deployment/configuration/localization",
"self-managed/optimize-deployment/configuration/object-variables",
- "self-managed/optimize-deployment/configuration/clustering",
- "self-managed/optimize-deployment/configuration/webhooks",
- "self-managed/optimize-deployment/configuration/authorization-management",
- "self-managed/optimize-deployment/configuration/user-management",
"self-managed/optimize-deployment/configuration/multi-tenancy",
- "self-managed/optimize-deployment/configuration/multiple-engines",
- "self-managed/optimize-deployment/configuration/setup-event-based-processes",
"self-managed/optimize-deployment/configuration/common-problems",
],
},
-
- {
- Plugins: [
- "self-managed/optimize-deployment/plugins/plugin-system",
- "self-managed/optimize-deployment/plugins/businesskey-import-plugin",
- "self-managed/optimize-deployment/plugins/decision-import-plugin",
- "self-managed/optimize-deployment/plugins/elasticsearch-header",
- "self-managed/optimize-deployment/plugins/engine-rest-filter-plugin",
- "self-managed/optimize-deployment/plugins/single-sign-on",
- "self-managed/optimize-deployment/plugins/variable-import-plugin",
- ],
- },
- "self-managed/optimize-deployment/reimport",
{
"Migration & update": [
{
@@ -3019,31 +2933,6 @@ module.exports = {
"self-managed/optimize-deployment/migration-update/camunda-8/3.8-to-3.9-preview-1",
"self-managed/optimize-deployment/migration-update/camunda-8/3.7-to-3.8",
],
- "Camunda 7": [
- "self-managed/optimize-deployment/migration-update/camunda-7/instructions",
- "self-managed/optimize-deployment/migration-update/camunda-7/3.13-to-3.14",
- "self-managed/optimize-deployment/migration-update/camunda-7/3.12-to-3.13",
- "self-managed/optimize-deployment/migration-update/camunda-7/3.11-to-3.12",
- "self-managed/optimize-deployment/migration-update/camunda-7/3.10-to-3.11",
- "self-managed/optimize-deployment/migration-update/camunda-7/3.9-to-3.10",
- "self-managed/optimize-deployment/migration-update/camunda-7/3.9-preview-1-to-3.9",
- "self-managed/optimize-deployment/migration-update/camunda-7/3.8-to-3.9-preview-1",
- "self-managed/optimize-deployment/migration-update/camunda-7/3.7-to-3.8",
- "self-managed/optimize-deployment/migration-update/camunda-7/3.6-to-3.7",
- "self-managed/optimize-deployment/migration-update/camunda-7/3.5-to-3.6",
- "self-managed/optimize-deployment/migration-update/camunda-7/3.4-to-3.5",
- "self-managed/optimize-deployment/migration-update/camunda-7/3.3-to-3.4",
- "self-managed/optimize-deployment/migration-update/camunda-7/3.2-to-3.3",
- "self-managed/optimize-deployment/migration-update/camunda-7/3.1-to-3.2",
- "self-managed/optimize-deployment/migration-update/camunda-7/3.0-to-3.1",
- "self-managed/optimize-deployment/migration-update/camunda-7/2.7-to-3.0",
- "self-managed/optimize-deployment/migration-update/camunda-7/2.6-to-2.7",
- "self-managed/optimize-deployment/migration-update/camunda-7/2.5-to-2.6",
- "self-managed/optimize-deployment/migration-update/camunda-7/2.4-to-2.5",
- "self-managed/optimize-deployment/migration-update/camunda-7/2.3-to-2.4",
- "self-managed/optimize-deployment/migration-update/camunda-7/2.2-to-2.3",
- "self-managed/optimize-deployment/migration-update/camunda-7/2.1-to-2.2",
- ],
},
],
},
@@ -3051,7 +2940,6 @@ module.exports = {
{
"Advanced features": [
"self-managed/optimize-deployment/advanced-features/engine-data-deletion",
- "self-managed/optimize-deployment/advanced-features/import-guide",
],
},
],
diff --git a/optimize_versioned_docs/version-3.11.0/apis-tools/optimize-api/event-ingestion.md b/optimize_versioned_docs/version-3.11.0/apis-tools/optimize-api/event-ingestion.md
index 952e44ec10b..3585ec5770d 100644
--- a/optimize_versioned_docs/version-3.11.0/apis-tools/optimize-api/event-ingestion.md
+++ b/optimize_versioned_docs/version-3.11.0/apis-tools/optimize-api/event-ingestion.md
@@ -6,7 +6,7 @@ description: "The REST API to ingest external events into Optimize."
Camunda 7 only
-The Event Ingestion REST API ingests business process related event data from any third-party system to Camunda Optimize. These events can then be correlated into an [event-based process](components/userguide/additional-features/event-based-processes.md) in Optimize to get business insights into business processes that are not yet fully modeled nor automated using Camunda 7.
+The Event Ingestion REST API ingests business process related event data from any third-party system to Camunda Optimize. These events can then be correlated into an [event-based process](#) in Optimize to get business insights into business processes that are not yet fully modeled nor automated using Camunda 7.
## Functionality
@@ -45,18 +45,18 @@ The following request headers have to be provided with every ingest request:
[JSON Batch Format](https://github.com/cloudevents/spec/blob/v1.0/json-format.md#4-json-batch-format) compliant JSON Array of CloudEvent JSON Objects:
-| Name | Type | Constraints | Description |
-| -------------------------------------------------------------------------------- | ------------------------------------------------------------------------------ | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| [specversion](https://github.com/cloudevents/spec/blob/v1.0/spec.md#specversion) | String | REQUIRED | The version of the CloudEvents specification, which the event uses, must be `1.0`. See [CloudEvents - Version 1.0 - specversion](https://github.com/cloudevents/spec/blob/v1.0/spec.md#specversion). |
-| [id](https://github.com/cloudevents/spec/blob/v1.0/spec.md#id) | String | REQUIRED | Uniquely identifies an event, see [CloudEvents - Version 1.0 - id](https://github.com/cloudevents/spec/blob/v1.0/spec.md#id). |
-| [source](https://github.com/cloudevents/spec/blob/v1.0/spec.md#source-1) | String | REQUIRED | Identifies the context in which an event happened, see [CloudEvents - Version 1.0 - source](https://github.com/cloudevents/spec/blob/v1.0/spec.md#source-1). A use-case could be if you have conflicting types across different sources. For example, a `type:OrderProcessed` originating from both `order-service` and `shipping-service`. In this case, the `source` field provides means to clearly separate between the origins of a particular event. Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. |
-| [type](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type) | String | REQUIRED | This attribute contains a value describing the type of event related to the originating occurrence, see [CloudEvents - Version 1.0 - type](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type). Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. The value `camunda` cannot be used for this field. |
-| [time](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type) | [Timestamp](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type-system) | OPTIONAL | Timestamp of when the occurrence happened, see [CloudEvents - Version 1.0 - time](https://github.com/cloudevents/spec/blob/v1.0/spec.md#time). String encoding: [RFC 3339](https://tools.ietf.org/html/rfc3339). If not present, a default value of the time the event was received will be created. |
-| [data](https://github.com/cloudevents/spec/blob/v1.0/spec.md#event-data) | Object | OPTIONAL | Event payload data that is part of the event, see [CloudEvents - Version 1.0 - Event Data](https://github.com/cloudevents/spec/blob/v1.0/spec.md#event-data). This CloudEvents Consumer API only accepts data encoded as `application/json`, the optional attribute [CloudEvents - Version 1.0 - datacontenttype](https://github.com/cloudevents/spec/blob/v1.0/spec.md#datacontenttype) is thus not required to be provided by the producer. Furthermore, there are no schema restrictions on the `data` attribute and thus the attribute [CloudEvents - Version 1.0 - dataschema](https://github.com/cloudevents/spec/blob/v1.0/spec.md#datacontenttype) is also not required to be provided. Producer may provide any valid JSON object, but only simple properties of that object will get converted to variables of a process instances of an [event-based process](self-managed/optimize-deployment/configuration/setup-event-based-processes.md) instance later on. |
-| group | String | OPTIONAL | This is an OPTIONAL [CloudEvents Extension Context Attribute](https://github.com/cloudevents/spec/blob/v1.0/spec.md#extension-context-attributes) that is specific to this API. A group identifier that may allow to easier identify a group of related events for a user at the stage of mapping events to a process model. An example could be a domain of events that are most likely related to each other; for example, `billing`. When this field is provided, it will be used to allow adding events that belong to a group to the [mapping table](components/userguide/additional-features/event-based-processes.md#external-events). Optimize handles groups case-sensitively. Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. |
-| traceid | String | REQUIRED | This is a REQUIRED [CloudEvents Extension Context Attribute](https://github.com/cloudevents/spec/blob/v1.0/spec.md#extension-context-attributes) that is specific to this API. A traceid is a correlation key that relates multiple events to a single business transaction or process instance in BPMN terms. Events with the same traceid will get correlated into one process instance of an Event Based Process. |
+| Name | Type | Constraints | Description |
+| -------------------------------------------------------------------------------- | ------------------------------------------------------------------------------ | ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| [specversion](https://github.com/cloudevents/spec/blob/v1.0/spec.md#specversion) | String | REQUIRED | The version of the CloudEvents specification, which the event uses, must be `1.0`. See [CloudEvents - Version 1.0 - specversion](https://github.com/cloudevents/spec/blob/v1.0/spec.md#specversion). |
+| [id](https://github.com/cloudevents/spec/blob/v1.0/spec.md#id) | String | REQUIRED | Uniquely identifies an event, see [CloudEvents - Version 1.0 - id](https://github.com/cloudevents/spec/blob/v1.0/spec.md#id). |
+| [source](https://github.com/cloudevents/spec/blob/v1.0/spec.md#source-1) | String | REQUIRED | Identifies the context in which an event happened, see [CloudEvents - Version 1.0 - source](https://github.com/cloudevents/spec/blob/v1.0/spec.md#source-1). A use-case could be if you have conflicting types across different sources. For example, a `type:OrderProcessed` originating from both `order-service` and `shipping-service`. In this case, the `source` field provides means to clearly separate between the origins of a particular event. Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. |
+| [type](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type) | String | REQUIRED | This attribute contains a value describing the type of event related to the originating occurrence, see [CloudEvents - Version 1.0 - type](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type). Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. The value `camunda` cannot be used for this field. |
+| [time](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type) | [Timestamp](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type-system) | OPTIONAL | Timestamp of when the occurrence happened, see [CloudEvents - Version 1.0 - time](https://github.com/cloudevents/spec/blob/v1.0/spec.md#time). String encoding: [RFC 3339](https://tools.ietf.org/html/rfc3339). If not present, a default value of the time the event was received will be created. |
+| [data](https://github.com/cloudevents/spec/blob/v1.0/spec.md#event-data) | Object | OPTIONAL | Event payload data that is part of the event, see [CloudEvents - Version 1.0 - Event Data](https://github.com/cloudevents/spec/blob/v1.0/spec.md#event-data). This CloudEvents Consumer API only accepts data encoded as `application/json`, the optional attribute [CloudEvents - Version 1.0 - datacontenttype](https://github.com/cloudevents/spec/blob/v1.0/spec.md#datacontenttype) is thus not required to be provided by the producer. Furthermore, there are no schema restrictions on the `data` attribute and thus the attribute [CloudEvents - Version 1.0 - dataschema](https://github.com/cloudevents/spec/blob/v1.0/spec.md#datacontenttype) is also not required to be provided. Producer may provide any valid JSON object, but only simple properties of that object will get converted to variables of a process instances of an [event-based process](#) instance later on. |
+| group | String | OPTIONAL | This is an OPTIONAL [CloudEvents Extension Context Attribute](https://github.com/cloudevents/spec/blob/v1.0/spec.md#extension-context-attributes) that is specific to this API. A group identifier that may allow to easier identify a group of related events for a user at the stage of mapping events to a process model. An example could be a domain of events that are most likely related to each other; for example, `billing`. When this field is provided, it will be used to allow adding events that belong to a group to the [mapping table](#external-events). Optimize handles groups case-sensitively. Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. |
+| traceid | String | REQUIRED | This is a REQUIRED [CloudEvents Extension Context Attribute](https://github.com/cloudevents/spec/blob/v1.0/spec.md#extension-context-attributes) that is specific to this API. A traceid is a correlation key that relates multiple events to a single business transaction or process instance in BPMN terms. Events with the same traceid will get correlated into one process instance of an Event Based Process. |
-The following is an example of a valid propertie's `data` value. Each of those properties would be available as a variable in any [event-based process](self-managed/optimize-deployment/configuration/setup-event-based-processes.md) where an event containing this as `data` was mapped:
+The following is an example of a valid `data` value. Each of those properties would be available as a variable in any [event-based process](#) where an event containing this as `data` was mapped:
```
{
@@ -85,14 +85,14 @@ This method returns no content.
Possible HTTP response status codes:
-| Code | Description |
-| ---- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| 204 | Request successful |
-| 400 | Returned if some of the properties in the request body are invalid or missing. |
-| 401 | Secret incorrect or missing in HTTP Header `Authorization`. See [Authorization](#authorization) on how to authenticate. |
-| 403 | The Event Based Process feature is not enabled. |
-| 429 | The maximum number of requests that can be serviced at any time has been reached. The response will include a `Retry-After` HTTP header specifying the recommended number of seconds before the request should be retried. See [Configuration](self-managed/optimize-deployment/configuration/event-based-processes.md#event-ingestion-rest-api-configuration) for information on how to configure this limit. |
-| 500 | Some error occurred while processing the ingested event, best check the Optimize log. |
+| Code | Description |
+| ---- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 204 | Request successful |
+| 400 | Returned if some of the properties in the request body are invalid or missing. |
+| 401 | Secret incorrect or missing in HTTP Header `Authorization`. See [Authorization](#authorization) on how to authenticate. |
+| 403 | The Event Based Process feature is not enabled. |
+| 429  | The maximum number of requests that can be serviced at any time has been reached. The response will include a `Retry-After` HTTP header specifying the recommended number of seconds before the request should be retried. See [Configuration](#event-ingestion-rest-api-configuration) for information on how to configure this limit. |
+| 500  | An error occurred while processing the ingested event; check the Optimize log for details. |
## Example
diff --git a/optimize_versioned_docs/version-3.11.0/components/userguide/additional-features/event-based-processes.md b/optimize_versioned_docs/version-3.11.0/components/userguide/additional-features/event-based-processes.md
index f8198a5fd61..3b680c17934 100644
--- a/optimize_versioned_docs/version-3.11.0/components/userguide/additional-features/event-based-processes.md
+++ b/optimize_versioned_docs/version-3.11.0/components/userguide/additional-features/event-based-processes.md
@@ -15,12 +15,12 @@ Once the event-based process feature is correctly configured, you will see a new
:::note
When Camunda activity events are used in event-based processes, Camunda admin authorizations are not inherited for the event-based process. The authorization to use an event-based process is solely managed via the access management of event-based processes when [publishing an event-based process](#publishing-an-event-based-process) or at any time via the [edit access option](#event-based-process-list---edit-access) in the event-based process list.
-Visit our [technical guide](/self-managed/optimize-deployment/configuration/setup-event-based-processes.md) on authorization management and event-based processes for the reasoning behind this behavior.
+Visit our [technical guide](/#) on authorization management and event-based processes for the reasoning behind this behavior.
:::
## Set up
-You need to set up the event-based processes feature to make use of this feature. See the [technical guide](/self-managed/optimize-deployment/configuration/setup-event-based-processes.md) for more information.
+You need to set up the event-based processes feature to make use of this feature. See the [technical guide](/#) for more information.
## Event-based process list
@@ -104,7 +104,7 @@ Defining the `group` property when ingesting the events will allow selecting eve
These are events generated from an existing Camunda BPMN process. Only processes for which Optimize has imported at least one event will be visible for selection. This means the process has to have at least one instance and Optimize has to have been configured to import data from that process.
-See the [technical guide](/self-managed/optimize-deployment/configuration/setup-event-based-processes.md#use-camunda-activity-event-sources-for-event-based-processes) for more information on how this is configured.
+See the [technical guide](/#use-camunda-activity-event-sources-for-event-based-processes) for more information on how this is configured.
To add such events, provide the following details:
diff --git a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/authorization-management.md b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/authorization-management.md
index 7058a6ba28c..00d8786097f 100644
--- a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/authorization-management.md
+++ b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/authorization-management.md
@@ -6,7 +6,7 @@ description: "Define which data users are authorized to see."
Camunda 7 only
-User authorization management differs depending on whether the entities to manage the authorizations for are originating from adjacent systems like imported data from connected Camunda-BPM engines such as process instances, or whether the entities are fully managed by Camunda Optimize, such as [event-based processes and instances](components/userguide/additional-features/event-based-processes.md) or [collections](components/userguide/collections-dashboards-reports.md). For entities originating from adjacent systems authorizations are managed in the Camunda 7 via Camunda Admin, for the latter the authorizations are managed in Camunda Optimize.
+User authorization management differs depending on whether the entities to manage the authorizations for are originating from adjacent systems like imported data from connected Camunda-BPM engines such as process instances, or whether the entities are fully managed by Camunda Optimize, such as [event-based processes and instances](#) or [collections](components/userguide/collections-dashboards-reports.md). For entities originating from adjacent systems authorizations are managed in the Camunda 7 via Camunda Admin, for the latter the authorizations are managed in Camunda Optimize.
## Camunda 7 data authorizations
@@ -50,4 +50,4 @@ There are entities that only exist in Camunda Optimize and authorizations to the
Camunda 7 only
-Although [event-based processes](components/userguide/additional-features/event-based-processes.md) may include data originating from adjacent systems like the Camunda Engine when using [Camunda Activity Event Sources](components/userguide/additional-features/event-based-processes.md#event-sources), they do not enforce any authorizations from Camunda Admin. The reason for that is that multiple sources can get combined in a single [event-based process](components/userguide/additional-features/event-based-processes.md) that may contain conflicting authorizations. It is thus required to authorize users or groups to [event-based processes](components/userguide/additional-features/event-based-processes.md) either directly when [publishing](components/userguide/additional-features/event-based-processes.md#publishing-an-event-based-process) them or later on via the [event-based process - Edit Access](components/userguide/additional-features/event-based-processes.md#event-based-process-list---edit-access) option.
+Although [event-based processes](#) may include data originating from adjacent systems like the Camunda Engine when using [Camunda Activity Event Sources](#event-sources), they do not enforce any authorizations from Camunda Admin. The reason for that is that multiple sources can get combined in a single [event-based process](#) that may contain conflicting authorizations. It is thus required to authorize users or groups to [event-based processes](#) either directly when [publishing](#publishing-an-event-based-process) them or later on via the [event-based process - Edit Access](#event-based-process-list---edit-access) option.
diff --git a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/clustering.md b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/clustering.md
index de22429c75c..81ea7065ff3 100644
--- a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/clustering.md
+++ b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/clustering.md
@@ -65,9 +65,9 @@ The importing instance has the [history cleanup enabled](./system-configuration.
In the context of event-based process import and clustering, there are two additional configuration properties to consider carefully.
-One is specific to each configured Camunda engine [`engines.${engineAlias}.eventImportEnabled`](./system-configuration-platform-7.md) and controls whether data from this engine is imported as event source data as well for [event-based processes](components/userguide/additional-features/event-based-processes.md). You need to enable this on the same cluster node for which the [`engines.${engineAlias}.importEnabled`](./system-configuration-platform-7.md) configuration flag is set to `true`.
+One is specific to each configured Camunda engine [`engines.${engineAlias}.eventImportEnabled`](./system-configuration-platform-7.md) and controls whether data from this engine is imported as event source data as well for [event-based processes](#). You need to enable this on the same cluster node for which the [`engines.${engineAlias}.importEnabled`](./system-configuration-platform-7.md) configuration flag is set to `true`.
-[`eventBasedProcess.eventImport.enabled`](./setup-event-based-processes.md) controls whether the particular cluster node processes events to create event based process instances. This allows you to run a dedicated node that performs this operation, while other nodes might just feed in Camunda activity events.
+[`eventBasedProcess.eventImport.enabled`](#) controls whether the particular cluster node processes events to create event based process instances. This allows you to run a dedicated node that performs this operation, while other nodes might just feed in Camunda activity events.
### 2. Distributed user sessions - configure shared secret token
diff --git a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/common-problems.md b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/common-problems.md
index 8fb78a97860..ff425d78843 100644
--- a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/common-problems.md
+++ b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/common-problems.md
@@ -8,7 +8,7 @@ This section aims to provide initial help to troubleshoot common issues. This gu
## Optimize is missing some or all definitions
-It is possible that the user you are logged in as does not have the relevant authorizations to view all definitions in Optimize. Refer to the [authorization management section](./authorization-management.md#process-or-decision-definition-related-authorizations) to confirm the user has all required authorizations.
+It is possible that the user you are logged in as does not have the relevant authorizations to view all definitions in Optimize. Refer to the [authorization management section](#process-or-decision-definition-related-authorizations) to confirm the user has all required authorizations.
Another common cause for this type of problem are issues with Optimize's data import, for example due to underlying problems with the engine data. In this case, the Optimize logs should contain more information on what is causing Optimize to not import the definition data correctly. If you are unsure on how to interpret what you find in the logs, create a support ticket.
diff --git a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/event-based-processes.md b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/event-based-processes.md
index 884e804160f..30d927a052e 100644
--- a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/event-based-processes.md
+++ b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/event-based-processes.md
@@ -21,7 +21,7 @@ Configuration of the Optimize event based process feature.
Camunda 7 only
-Configuration of the Optimize [Event Ingestion REST API](../../../apis-tools/optimize-api/event-ingestion.md) for [event-based processes](components/userguide/additional-features/event-based-processes.md).
+Configuration of the Optimize [Event Ingestion REST API](#) for [event-based processes](#).
| YAML Path | Default Value | Description |
| ----------------------------------------------------- | ------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
diff --git a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/history-cleanup.md b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/history-cleanup.md
index e945fc9e325..3363997fc80 100644
--- a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/history-cleanup.md
+++ b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/history-cleanup.md
@@ -86,7 +86,7 @@ historyCleanup:
### Ingested event cleanup
-The age of ingested event data is determined by the [`time`](../../../apis-tools/optimize-api/event-ingestion.md#request-body) field provided for each event at the time of ingestion.
+The age of ingested event data is determined by the [`time`](#request-body) field provided for each event at the time of ingestion.
To enable the cleanup of event data, the `historyCleanup.ingestedEventCleanup.enabled` property needs to be set to `true`.
@@ -98,7 +98,7 @@ historyCleanup:
```
:::note
-The ingested event cleanup does not cascade down to potentially existing [event-based processes](components/userguide/additional-features/event-based-processes.md) that may contain data originating from ingested events. To make sure data of ingested events is also removed from event-based processes, you need to enable the [Process Data Cleanup](#process-data-cleanup) as well.
+The ingested event cleanup does not cascade down to potentially existing [event-based processes](#) that may contain data originating from ingested events. To make sure data of ingested events is also removed from event-based processes, you need to enable the [Process Data Cleanup](#process-data-cleanup) as well.
:::
## Example
diff --git a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/multiple-engines.md b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/multiple-engines.md
index f1ba7216737..0ecae650f26 100644
--- a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/multiple-engines.md
+++ b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/multiple-engines.md
@@ -80,7 +80,7 @@ In general, tests have shown that Optimize puts a very low strain on the engine
## Authentication and authorization in the multiple engine setup
When you configure multiple engines in Optimize, each process engine can host different users with a different set of authorizations. If a user is logging in, Optimize will try to authenticate and authorize the user on each configured engine. In case you are not familiar with how
-the authorization/authentication works for a single engine scenario, visit the [User Access Management](./user-management.md) and [Authorization Management](./authorization-management.md) documentation first.
+the authorization/authentication works for a single engine scenario, visit the [User Access Management](./user-management.md) and [Authorization Management](#) documentation first.
To determine if a user is allowed to log in and which resources they are allowed to access within the multiple engine scenario, Optimize uses the following algorithm:
diff --git a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/security-instructions.md b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/security-instructions.md
index a1b64350563..4d319230e49 100644
--- a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/security-instructions.md
+++ b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/security-instructions.md
@@ -42,7 +42,7 @@ Authentication controls who can access Optimize. Read all about how to restrict
Camunda 7 only
-Authorization controls what data a user can access and change in Optimize once authenticated. Authentication is a prerequisite to authorization. Read all about how to restrict the data access in the [authorization management guide](./authorization-management.md).
+Authorization controls what data a user can access and change in Optimize once authenticated. Authentication is a prerequisite to authorization. Read all about how to restrict the data access in the [authorization management guide](#).
## Secure Elasticsearch
diff --git a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/setup-event-based-processes.md b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/setup-event-based-processes.md
index fd4d98ac99f..d9f2f2243db 100644
--- a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/setup-event-based-processes.md
+++ b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/setup-event-based-processes.md
@@ -33,12 +33,12 @@ A full configuration example authorizing the user `demo` and all members of the
## Use Camunda activity event sources for event based processes
:::note Authorization to event-based processes
-When Camunda activity events are used in event-based processes, Camunda Admin Authorizations are not inherited for the event-based process. The authorization to use an event-based process is solely managed via the access management of event-based processes when [publishing an event-based process](components/userguide/additional-features/event-based-processes.md#publishing-an-event-based-process) or at any time via the [Edit Access Option](components/userguide/additional-features/event-based-processes.md#event-based-process-list---edit-access) in the event-based process List.
+When Camunda activity events are used in event-based processes, Camunda Admin Authorizations are not inherited for the event-based process. The authorization to use an event-based process is solely managed via the access management of event-based processes when [publishing an event-based process](#publishing-an-event-based-process) or at any time via the [Edit Access Option](#event-based-process-list---edit-access) in the event-based process list.
-Visit [Authorization Management - event-based process](./authorization-management.md#event-based-processes) for the reasoning behind this behavior.
+Visit [Authorization Management - event-based process](#event-based-processes) for the reasoning behind this behavior.
:::
-To publish event-based processes that include [Camunda Event Sources](components/userguide/additional-features/event-based-processes.md#camunda-events), it is required to set [`engines.${engineAlias}.eventImportEnabled`](./system-configuration-platform-7.md) to `true` for the connected engine the Camunda process originates from.
+To publish event-based processes that include [Camunda Event Sources](#camunda-events), it is required to set [`engines.${engineAlias}.eventImportEnabled`](./system-configuration-platform-7.md) to `true` for the connected engine the Camunda process originates from.
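As a minimal sketch, assuming an engine registered under the illustrative alias `camunda-bpm` in `environment-config.yaml` (the alias and REST endpoint are placeholders, not prescribed values):

```yaml
engines:
  camunda-bpm:
    name: default
    rest: "http://localhost:8080/engine-rest"
    # Import engine data on this node...
    importEnabled: true
    # ...and also convert its activity data into events usable
    # as Camunda event sources for event-based processes:
    eventImportEnabled: true
```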
:::note Heads Up!
You need to [reimport data](./../migration-update/instructions.md#force-reimport-of-engine-data-in-optimize) from this engine to have all historic Camunda events available for event-based processes. Otherwise, only new events will be included.
diff --git a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/system-configuration.md b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/system-configuration.md
index 5631a0cfe74..02aa63d2af4 100644
--- a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/system-configuration.md
+++ b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/system-configuration.md
@@ -88,8 +88,8 @@ These values control mechanisms of Optimize related security, e.g. security head
| |
| security.auth.token.lifeMin | 60 | Optimize uses token-based authentication to keep track of which users are logged in. Define the lifetime of the token in minutes. |
| security.auth.token.secret | null | Optional secret used to sign authentication tokens, it's recommended to use at least a 64-character secret. If set to `null` a random secret will be generated with each startup of Optimize. |
-| security.auth.superUserIds | [ ] | List of user IDs that are granted full permission to all collections, reports, and dashboards. Note: For reports, these users are still required to be granted access to the corresponding process/decision definitions in Camunda 7 Admin. See [Authorization Management](./authorization-management.md). |
-| security.auth.superGroupIds | [ ] | List of group IDs that are granted full permission to all collections, reports, and dashboards. All members of the groups specified will have superuser permissions in Optimize. Note: For reports, these groups are still required to be granted access to the corresponding process/decision definitions in Camunda 7 Admin. See [Authorization Management](./authorization-management.md). |
+| security.auth.superUserIds | [ ] | List of user IDs that are granted full permission to all collections, reports, and dashboards. Note: For reports, these users are still required to be granted access to the corresponding process/decision definitions in Camunda 7 Admin. See [Authorization Management](#). |
+| security.auth.superGroupIds | [ ] | List of group IDs that are granted full permission to all collections, reports, and dashboards. All members of the groups specified will have superuser permissions in Optimize. Note: For reports, these groups are still required to be granted access to the corresponding process/decision definitions in Camunda 7 Admin. See [Authorization Management](#). |
| security.responseHeaders.HSTS.max-age | 63072000 | HTTP Strict Transport Security (HSTS) is a web security policy mechanism which helps to protect websites against protocol downgrade attacks and cookie hijacking. This field defines the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS. If you set the number to a negative value no HSTS header is sent. |
| security.responseHeaders.HSTS.includeSubDomains | true | HTTP Strict Transport Security (HSTS) is a web security policy mechanism which helps to protect websites against protocol downgrade attacks and cookie hijacking. If this optional parameter is specified, this rule applies to all the site’s subdomains as well. |
| security.responseHeaders.X-XSS-Protection | 1; mode=block | This header enables the cross-site scripting (XSS) filter in your browser. Can have one of the following options: - `0`: Filter disabled. - `1`: Filter enabled. If a cross-site scripting attack is detected, in order to stop the attack, the browser will sanitize the page. - `1; mode=block`: Filter enabled. Rather than sanitize the page, when a XSS attack is detected, the browser will prevent rendering of the page. - `1; report=http://[YOURDOMAIN]/your_report_URI`: Filter enabled. The browser will sanitize the page and report the violation. This is a Chromium function utilizing CSP violation reports to send details to a URI of your choice. |
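To illustrate the `security.auth.superUserIds` and `security.auth.superGroupIds` entries from the table above, a hedged YAML sketch (the IDs are placeholders):

```yaml
security:
  auth:
    # Grant these individual users full access to all collections,
    # reports, and dashboards:
    superUserIds: ["kermit", "gonzo"]
    # Grant the same to every member of this group:
    superGroupIds: ["optimize-admins"]
```

Note that, as stated in the table, report access additionally requires the corresponding definition authorizations in Camunda 7 Admin.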
diff --git a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/user-management.md b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/user-management.md
index 001faa9d4cc..7dbdedb0485 100644
--- a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/user-management.md
+++ b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/configuration/user-management.md
@@ -10,7 +10,7 @@ description: "Define which users have access to Optimize."
Providing Optimize access to a user just enables them to log in to Optimize. To be able
to create reports, the user also needs to have permission to access the engine data. To see
-how this can be done, refer to the [Authorization Management](./authorization-management.md) section.
+how this can be done, refer to the [Authorization Management](#) section.
:::
You can use the credentials from the Camunda 7 users to access Optimize. However, for the users to gain access to Optimize, they need to be authorized. This is not done in Optimize itself, but needs to be configured in the Camunda 7 and can be achieved on different levels with different options. If you do not know how authorization in Camunda works, visit the [authorization service documentation](https://docs.camunda.org/manual/latest/user-guide/process-engine/authorization-service/).
diff --git a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/install-and-start.md b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/install-and-start.md
index 21a90569e3c..2d7fc9fc4e4 100644
--- a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/install-and-start.md
+++ b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/install-and-start.md
@@ -121,7 +121,7 @@ The most important environment variables you may have to configure are related t
A complete sample can be found within [Connect to remote Camunda 7 and Elasticsearch](#connect-to-remote-camunda-platform-7-and-elasticsearch).
-Furthermore, there are also environment variables specific to the [event-based process](components/userguide/additional-features/event-based-processes.md) feature you may make use of:
+Furthermore, there are environment variables specific to the [event-based process](#) feature that you can use (see the sketch after this list):
- `OPTIMIZE_CAMUNDA_BPM_EVENT_IMPORT_ENABLED`: Determines whether this instance of Optimize should convert historical data to event data usable for event-based processes (default: `false`)
- `OPTIMIZE_EVENT_BASED_PROCESSES_USER_IDS`: An array of user ids that are authorized to administer event-based processes (default: `[]`)
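A minimal sketch of how these variables could be set in a Docker Compose file (service name, image tag, and user IDs are illustrative; the array value is passed as a JSON-formatted string):

```yaml
services:
  optimize:
    image: camunda/optimize:3.11.0
    environment:
      # Convert historical engine data into events for event-based processes:
      - OPTIMIZE_CAMUNDA_BPM_EVENT_IMPORT_ENABLED=true
      # Allow the user "demo" to administer event-based processes:
      - OPTIMIZE_EVENT_BASED_PROCESSES_USER_IDS=["demo"]
```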
diff --git a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/migration-update/2.7-to-3.0.md b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/migration-update/2.7-to-3.0.md
index aa3df00aabe..58477b9b4e1 100644
--- a/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/migration-update/2.7-to-3.0.md
+++ b/optimize_versioned_docs/version-3.11.0/self-managed/optimize-deployment/migration-update/2.7-to-3.0.md
@@ -44,7 +44,7 @@ The update should now successfully complete.
### Cannot disable import from particular engine
-In 3.0.0, it is not possible to deactivate the import of a particular Optimize instance from a particular engine (via `engines.${engineAlias}.importEnabled`). In case your environment is using that feature for e.g. a [clustering setup](./../configuration/clustering.md), we recommend you to stay on Optimize 2.7.0 until the release of Optimize 3.1.0 (Scheduled for 14/07/2020) and then update straight to Optimize 3.1.0.
+In 3.0.0, it is not possible to deactivate the import of a particular Optimize instance from a particular engine (via `engines.${engineAlias}.importEnabled`). If your environment uses that feature, e.g. for a [clustering setup](#), we recommend staying on Optimize 2.7.0 until the release of Optimize 3.1.0 (scheduled for 14/07/2020) and then updating straight to Optimize 3.1.0.
## Limitations
diff --git a/optimize_versioned_docs/version-3.12.0/apis-tools/optimize-api/event-ingestion.md b/optimize_versioned_docs/version-3.12.0/apis-tools/optimize-api/event-ingestion.md
index 22ab6d51022..0302f63704d 100644
--- a/optimize_versioned_docs/version-3.12.0/apis-tools/optimize-api/event-ingestion.md
+++ b/optimize_versioned_docs/version-3.12.0/apis-tools/optimize-api/event-ingestion.md
@@ -6,7 +6,7 @@ description: "The REST API to ingest external events into Optimize."
Camunda 7 only
-The Event Ingestion REST API ingests business process related event data from any third-party system to Camunda Optimize. These events can then be correlated into an [event-based process](components/userguide/additional-features/event-based-processes.md) in Optimize to get business insights into business processes that are not yet fully modeled nor automated using Camunda 7.
+The Event Ingestion REST API ingests business process-related event data from any third-party system into Camunda Optimize. These events can then be correlated into an [event-based process](#) in Optimize to get business insights into business processes that are not yet fully modeled or automated using Camunda 7.
## Functionality
@@ -45,18 +45,18 @@ The following request headers have to be provided with every ingest request:
[JSON Batch Format](https://github.com/cloudevents/spec/blob/v1.0/json-format.md#4-json-batch-format) compliant JSON Array of CloudEvent JSON Objects:
-| Name | Type | Constraints | Description |
-| -------------------------------------------------------------------------------- | ------------------------------------------------------------------------------ | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| [specversion](https://github.com/cloudevents/spec/blob/v1.0/spec.md#specversion) | String | REQUIRED | The version of the CloudEvents specification, which the event uses, must be `1.0`. See [CloudEvents - Version 1.0 - specversion](https://github.com/cloudevents/spec/blob/v1.0/spec.md#specversion). |
-| [id](https://github.com/cloudevents/spec/blob/v1.0/spec.md#id) | String | REQUIRED | Uniquely identifies an event, see [CloudEvents - Version 1.0 - id](https://github.com/cloudevents/spec/blob/v1.0/spec.md#id). |
-| [source](https://github.com/cloudevents/spec/blob/v1.0/spec.md#source-1) | String | REQUIRED | Identifies the context in which an event happened, see [CloudEvents - Version 1.0 - source](https://github.com/cloudevents/spec/blob/v1.0/spec.md#source-1). A use-case could be if you have conflicting types across different sources. For example, a `type:OrderProcessed` originating from both `order-service` and `shipping-service`. In this case, the `source` field provides means to clearly separate between the origins of a particular event. Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. |
-| [type](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type) | String | REQUIRED | This attribute contains a value describing the type of event related to the originating occurrence, see [CloudEvents - Version 1.0 - type](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type). Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. The value `camunda` cannot be used for this field. |
-| [time](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type) | [Timestamp](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type-system) | OPTIONAL | Timestamp of when the occurrence happened, see [CloudEvents - Version 1.0 - time](https://github.com/cloudevents/spec/blob/v1.0/spec.md#time). String encoding: [RFC 3339](https://tools.ietf.org/html/rfc3339). If not present, a default value of the time the event was received will be created. |
-| [data](https://github.com/cloudevents/spec/blob/v1.0/spec.md#event-data) | Object | OPTIONAL | Event payload data that is part of the event, see [CloudEvents - Version 1.0 - Event Data](https://github.com/cloudevents/spec/blob/v1.0/spec.md#event-data). This CloudEvents Consumer API only accepts data encoded as `application/json`, the optional attribute [CloudEvents - Version 1.0 - datacontenttype](https://github.com/cloudevents/spec/blob/v1.0/spec.md#datacontenttype) is thus not required to be provided by the producer. Furthermore, there are no schema restrictions on the `data` attribute and thus the attribute [CloudEvents - Version 1.0 - dataschema](https://github.com/cloudevents/spec/blob/v1.0/spec.md#datacontenttype) is also not required to be provided. Producer may provide any valid JSON object, but only simple properties of that object will get converted to variables of a process instances of an [event-based process](self-managed/optimize-deployment/configuration/setup-event-based-processes.md) instance later on. |
-| group | String | OPTIONAL | This is an OPTIONAL [CloudEvents Extension Context Attribute](https://github.com/cloudevents/spec/blob/v1.0/spec.md#extension-context-attributes) that is specific to this API. A group identifier that may allow to easier identify a group of related events for a user at the stage of mapping events to a process model. An example could be a domain of events that are most likely related to each other; for example, `billing`. When this field is provided, it will be used to allow adding events that belong to a group to the [mapping table](components/userguide/additional-features/event-based-processes.md#external-events). Optimize handles groups case-sensitively. Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. |
-| traceid | String | REQUIRED | This is a REQUIRED [CloudEvents Extension Context Attribute](https://github.com/cloudevents/spec/blob/v1.0/spec.md#extension-context-attributes) that is specific to this API. A traceid is a correlation key that relates multiple events to a single business transaction or process instance in BPMN terms. Events with the same traceid will get correlated into one process instance of an Event Based Process. |
+| Name | Type | Constraints | Description |
+| -------------------------------------------------------------------------------- | ------------------------------------------------------------------------------ | ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| [specversion](https://github.com/cloudevents/spec/blob/v1.0/spec.md#specversion) | String | REQUIRED | The version of the CloudEvents specification, which the event uses, must be `1.0`. See [CloudEvents - Version 1.0 - specversion](https://github.com/cloudevents/spec/blob/v1.0/spec.md#specversion). |
+| [id](https://github.com/cloudevents/spec/blob/v1.0/spec.md#id) | String | REQUIRED | Uniquely identifies an event, see [CloudEvents - Version 1.0 - id](https://github.com/cloudevents/spec/blob/v1.0/spec.md#id). |
+| [source](https://github.com/cloudevents/spec/blob/v1.0/spec.md#source-1) | String | REQUIRED | Identifies the context in which an event happened, see [CloudEvents - Version 1.0 - source](https://github.com/cloudevents/spec/blob/v1.0/spec.md#source-1). A use-case could be if you have conflicting types across different sources. For example, a `type:OrderProcessed` originating from both `order-service` and `shipping-service`. In this case, the `source` field provides means to clearly separate between the origins of a particular event. Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. |
+| [type](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type) | String | REQUIRED | This attribute contains a value describing the type of event related to the originating occurrence, see [CloudEvents - Version 1.0 - type](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type). Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. The value `camunda` cannot be used for this field. |
+| [time](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type) | [Timestamp](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type-system) | OPTIONAL | Timestamp of when the occurrence happened, see [CloudEvents - Version 1.0 - time](https://github.com/cloudevents/spec/blob/v1.0/spec.md#time). String encoding: [RFC 3339](https://tools.ietf.org/html/rfc3339). If not present, a default value of the time the event was received will be created. |
+| [data](https://github.com/cloudevents/spec/blob/v1.0/spec.md#event-data) | Object | OPTIONAL | Event payload data that is part of the event, see [CloudEvents - Version 1.0 - Event Data](https://github.com/cloudevents/spec/blob/v1.0/spec.md#event-data). This CloudEvents Consumer API only accepts data encoded as `application/json`; the optional attribute [CloudEvents - Version 1.0 - datacontenttype](https://github.com/cloudevents/spec/blob/v1.0/spec.md#datacontenttype) is thus not required to be provided by the producer. Furthermore, there are no schema restrictions on the `data` attribute and thus the attribute [CloudEvents - Version 1.0 - dataschema](https://github.com/cloudevents/spec/blob/v1.0/spec.md#datacontenttype) is also not required to be provided. Producers may provide any valid JSON object, but only simple properties of that object will get converted to variables of process instances of an [event-based process](#) later on. |
+| group | String | OPTIONAL | This is an OPTIONAL [CloudEvents Extension Context Attribute](https://github.com/cloudevents/spec/blob/v1.0/spec.md#extension-context-attributes) that is specific to this API. A group identifier that can make it easier to identify a group of related events for a user at the stage of mapping events to a process model. An example could be a domain of events that are most likely related to each other; for example, `billing`. When this field is provided, it will be used to allow adding events that belong to a group to the [mapping table](#external-events). Optimize handles groups case-sensitively. Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. |
+| traceid | String | REQUIRED | This is a REQUIRED [CloudEvents Extension Context Attribute](https://github.com/cloudevents/spec/blob/v1.0/spec.md#extension-context-attributes) that is specific to this API. A traceid is a correlation key that relates multiple events to a single business transaction or process instance in BPMN terms. Events with the same traceid will get correlated into one process instance of an event-based process. |
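Putting the attributes above together, a hypothetical two-event batch could look as follows (every ID, source, type, and payload value is invented for illustration). Because both events share the same `traceid`, they would be correlated into one process instance:

```json
[
  {
    "specversion": "1.0",
    "id": "9b0e4f1a-0001-4f2b-a7c4-000000000001",
    "source": "order-service",
    "type": "OrderCreated",
    "time": "2020-01-01T10:00:00.000Z",
    "traceid": "order-4711",
    "group": "billing",
    "data": {
      "orderId": "4711",
      "amount": 42.5
    }
  },
  {
    "specversion": "1.0",
    "id": "9b0e4f1a-0002-4f2b-a7c4-000000000002",
    "source": "shipping-service",
    "type": "OrderShipped",
    "time": "2020-01-01T12:30:00.000Z",
    "traceid": "order-4711"
  }
]
```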
-The following is an example of a valid propertie's `data` value. Each of those properties would be available as a variable in any [event-based process](self-managed/optimize-deployment/configuration/setup-event-based-processes.md) where an event containing this as `data` was mapped:
+The following is an example of a valid `data` value. Each of its properties would be available as a variable in any [event-based process](#) where an event containing this `data` was mapped:
```
{
@@ -85,14 +85,14 @@ This method returns no content.
Possible HTTP response status codes:
-| Code | Description |
-| ---- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| 204 | Request successful |
-| 400 | Returned if some of the properties in the request body are invalid or missing. |
-| 401 | Secret incorrect or missing in HTTP Header `Authorization`. See [Authorization](#authorization) on how to authenticate. |
-| 403 | The Event Based Process feature is not enabled. |
-| 429 | The maximum number of requests that can be serviced at any time has been reached. The response will include a `Retry-After` HTTP header specifying the recommended number of seconds before the request should be retried. See [Configuration](self-managed/optimize-deployment/configuration/event-based-processes.md#event-ingestion-rest-api-configuration) for information on how to configure this limit. |
-| 500 | Some error occurred while processing the ingested event, best check the Optimize log. |
+| Code | Description |
+| ---- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 204 | Request successful |
+| 400 | Returned if some of the properties in the request body are invalid or missing. |
+| 401 | Secret incorrect or missing in HTTP Header `Authorization`. See [Authorization](#authorization) on how to authenticate. |
+| 403 | The Event Based Process feature is not enabled. |
+| 429 | The maximum number of requests that can be serviced at any time has been reached. The response will include a `Retry-After` HTTP header specifying the recommended number of seconds before the request should be retried. See [Configuration](#event-ingestion-rest-api-configuration) for information on how to configure this limit. |
+| 500 | An error occurred while processing the ingested event; check the Optimize log for details. |
## Example
diff --git a/optimize_versioned_docs/version-3.12.0/components/userguide/additional-features/event-based-processes.md b/optimize_versioned_docs/version-3.12.0/components/userguide/additional-features/event-based-processes.md
index f8198a5fd61..3b680c17934 100644
--- a/optimize_versioned_docs/version-3.12.0/components/userguide/additional-features/event-based-processes.md
+++ b/optimize_versioned_docs/version-3.12.0/components/userguide/additional-features/event-based-processes.md
@@ -15,12 +15,12 @@ Once the event-based process feature is correctly configured, you will see a new
:::note
When Camunda activity events are used in event-based processes, Camunda admin authorizations are not inherited for the event-based process. The authorization to use an event-based process is solely managed via the access management of event-based processes when [publishing an event-based process](#publishing-an-event-based-process) or at any time via the [edit access option](#event-based-process-list---edit-access) in the event-based process list.
-Visit our [technical guide](/self-managed/optimize-deployment/configuration/setup-event-based-processes.md) on authorization management and event-based processes for the reasoning behind this behavior.
+Visit our [technical guide](#) on authorization management and event-based processes for the reasoning behind this behavior.
:::
## Set up
-You need to set up the event-based processes feature to make use of this feature. See the [technical guide](/self-managed/optimize-deployment/configuration/setup-event-based-processes.md) for more information.
+You need to set up the event-based process feature before you can make use of it. See the [technical guide](#) for more information.
## Event-based process list
@@ -104,7 +104,7 @@ Defining the `group` property when ingesting the events will allow selecting eve
These are events generated from an existing Camunda BPMN process. Only processes for which Optimize has imported at least one event will be visible for selection. This means the process has to have at least one instance and Optimize has to have been configured to import data from that process.
-See the [technical guide](/self-managed/optimize-deployment/configuration/setup-event-based-processes.md#use-camunda-activity-event-sources-for-event-based-processes) for more information on how this is configured.
+See the [technical guide](#use-camunda-activity-event-sources-for-event-based-processes) for more information on how this is configured.
To add such events, provide the following details:
diff --git a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/authorization-management.md b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/authorization-management.md
index d26c080356e..b3cc86dc2e9 100644
--- a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/authorization-management.md
+++ b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/authorization-management.md
@@ -6,7 +6,7 @@ description: "Define which data users are authorized to see."
Camunda 7 only
-User authorization management differs depending on whether the entities to manage the authorizations for are originating from adjacent systems like imported data from connected Camunda-BPM engines such as process instances, or whether the entities are fully managed by Camunda Optimize, such as [event-based processes and instances](components/userguide/additional-features/event-based-processes.md) or [collections](components/userguide/collections-dashboards-reports.md). For entities originating from adjacent systems authorizations are managed in the Camunda 7 via Camunda Admin, for the latter the authorizations are managed in Camunda Optimize.
+User authorization management differs depending on whether the entities to manage authorizations for originate from adjacent systems, like process instance data imported from connected Camunda-BPM engines, or whether the entities are fully managed by Camunda Optimize, such as [event-based processes and instances](#) or [collections](components/userguide/collections-dashboards-reports.md). For entities originating from adjacent systems, authorizations are managed in Camunda 7 via Camunda Admin; for the latter, authorizations are managed in Camunda Optimize.
## Camunda 7 data authorizations
@@ -50,4 +50,4 @@ There are entities that only exist in Camunda Optimize and authorizations to the
Camunda 7 only
-Although [event-based processes](components/userguide/additional-features/event-based-processes.md) may include data originating from adjacent systems like the Camunda Engine when using [Camunda Activity Event Sources](components/userguide/additional-features/event-based-processes.md#event-sources), they do not enforce any authorizations from Camunda Admin. The reason for that is that multiple sources can get combined in a single [event-based process](components/userguide/additional-features/event-based-processes.md) that may contain conflicting authorizations. It is thus required to authorize users or groups to [event-based processes](components/userguide/additional-features/event-based-processes.md) either directly when [publishing](components/userguide/additional-features/event-based-processes.md#publishing-an-event-based-process) them or later on via the [event-based process - Edit Access](components/userguide/additional-features/event-based-processes.md#event-based-process-list---edit-access) option.
+Although [event-based processes](#) may include data originating from adjacent systems like the Camunda Engine when using [Camunda Activity Event Sources](#event-sources), they do not enforce any authorizations from Camunda Admin. The reason is that multiple sources can be combined in a single [event-based process](#) that may contain conflicting authorizations. It is thus required to authorize users or groups to [event-based processes](#) either directly when [publishing](#publishing-an-event-based-process) them or later on via the [event-based process - Edit Access](#event-based-process-list---edit-access) option.
diff --git a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/clustering.md b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/clustering.md
index 752e83f08fc..63c01b06b24 100644
--- a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/clustering.md
+++ b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/clustering.md
@@ -65,9 +65,9 @@ The importing instance has the [history cleanup enabled](./system-configuration.
In the context of event-based process import and clustering, there are two additional configuration properties to consider carefully.
-One is specific to each configured Camunda engine [`engines.${engineAlias}.eventImportEnabled`](./system-configuration-platform-7.md) and controls whether data from this engine is imported as event source data as well for [event-based processes](components/userguide/additional-features/event-based-processes.md). You need to enable this on the same cluster node for which the [`engines.${engineAlias}.importEnabled`](./system-configuration-platform-7.md) configuration flag is set to `true`.
+One is specific to each configured Camunda engine, [`engines.${engineAlias}.eventImportEnabled`](./system-configuration-platform-7.md), and controls whether data from this engine is also imported as event source data for [event-based processes](#). You need to enable this on the same cluster node for which the [`engines.${engineAlias}.importEnabled`](./system-configuration-platform-7.md) configuration flag is set to `true`.
-[`eventBasedProcess.eventImport.enabled`](./setup-event-based-processes.md) controls whether the particular cluster node processes events to create event based process instances. This allows you to run a dedicated node that performs this operation, while other nodes might just feed in Camunda activity events.
+[`eventBasedProcess.eventImport.enabled`](#) controls whether the particular cluster node processes events to create event-based process instances. This allows you to run a dedicated node that performs this operation, while other nodes might just feed in Camunda activity events.
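A hedged sketch of how these flags could be split across two cluster nodes (the engine alias is a placeholder; the two YAML documents below stand for the two nodes' `environment-config.yaml` files):

```yaml
# Node 1: imports engine data and processes events into
# event-based process instances.
engines:
  engine-1:
    importEnabled: true
    eventImportEnabled: true
eventBasedProcess:
  eventImport:
    enabled: true
---
# Node 2: serves user requests only; neither imports engine data
# nor processes events.
engines:
  engine-1:
    importEnabled: false
    eventImportEnabled: false
eventBasedProcess:
  eventImport:
    enabled: false
```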
### 2. Distributed user sessions - configure shared secret token
diff --git a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/common-problems.md b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/common-problems.md
index 8fb78a97860..ff425d78843 100644
--- a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/common-problems.md
+++ b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/common-problems.md
@@ -8,7 +8,7 @@ This section aims to provide initial help to troubleshoot common issues. This gu
## Optimize is missing some or all definitions
-It is possible that the user you are logged in as does not have the relevant authorizations to view all definitions in Optimize. Refer to the [authorization management section](./authorization-management.md#process-or-decision-definition-related-authorizations) to confirm the user has all required authorizations.
+It is possible that the user you are logged in as does not have the relevant authorizations to view all definitions in Optimize. Refer to the [authorization management section](#process-or-decision-definition-related-authorizations) to confirm the user has all required authorizations.
Another common cause for this type of problem are issues with Optimize's data import, for example due to underlying problems with the engine data. In this case, the Optimize logs should contain more information on what is causing Optimize to not import the definition data correctly. If you are unsure on how to interpret what you find in the logs, create a support ticket.
diff --git a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/event-based-processes.md b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/event-based-processes.md
index efc9f48f2f4..c8a0b781a7e 100644
--- a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/event-based-processes.md
+++ b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/event-based-processes.md
@@ -21,7 +21,7 @@ Configuration of the Optimize event-based process feature.
Camunda 7 only
-Configuration of the Optimize [event ingestion REST API](../../../apis-tools/optimize-api/event-ingestion.md) for [event-based processes](components/userguide/additional-features/event-based-processes.md).
+Configuration of the Optimize [event ingestion REST API](#) for [event-based processes](#).
| YAML path | Default value | Description |
| ----------------------------------------------------- | ------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
diff --git a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/history-cleanup.md b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/history-cleanup.md
index 773b1824d82..62df0229dad 100644
--- a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/history-cleanup.md
+++ b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/history-cleanup.md
@@ -101,7 +101,7 @@ historyCleanup:
-The age of ingested event data is determined by the [`time`](../../../apis-tools/optimize-api/event-ingestion.md#request-body) field provided for each event at the time of ingestion.
+The age of ingested event data is determined by the [`time`](#request-body) field provided for each event at the time of ingestion.
To enable the cleanup of event data, the `historyCleanup.ingestedEventCleanup.enabled` property needs to be set to `true`.
@@ -113,7 +113,7 @@ historyCleanup:
```
:::note
-The ingested event cleanup does not cascade down to potentially existing [event-based processes](components/userguide/additional-features/event-based-processes.md) that may contain data originating from ingested events. To make sure data of ingested events is also removed from event-based processes, you need to enable the [Process Data Cleanup](#process-data-cleanup) as well.
+The ingested event cleanup does not cascade down to potentially existing [event-based processes](#) that may contain data originating from ingested events. To make sure data of ingested events is also removed from event-based processes, you need to enable the [Process Data Cleanup](#process-data-cleanup) as well.
:::
diff --git a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/multiple-engines.md b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/multiple-engines.md
index f1ba7216737..0ecae650f26 100644
--- a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/multiple-engines.md
+++ b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/multiple-engines.md
@@ -80,7 +80,7 @@ In general, tests have shown that Optimize puts a very low strain on the engine
## Authentication and authorization in the multiple engine setup
When you configure multiple engines in Optimize, each process engine can host different users with a different set of authorizations. If a user is logging in, Optimize will try to authenticate and authorize the user on each configured engine. In case you are not familiar with how
-the authorization/authentication works for a single engine scenario, visit the [User Access Management](./user-management.md) and [Authorization Management](./authorization-management.md) documentation first.
+the authorization/authentication works for a single engine scenario, visit the [User Access Management](./user-management.md) and [Authorization Management](#) documentation first.
To determine if a user is allowed to log in and which resources they are allowed to access within the multiple engine scenario, Optimize uses the following algorithm:
diff --git a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/security-instructions.md b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/security-instructions.md
index e8884e7055d..447c924effe 100644
--- a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/security-instructions.md
+++ b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/security-instructions.md
@@ -55,7 +55,7 @@ Authentication controls who can access Optimize. Read all about how to restrict
Camunda 7 only
-Authorization controls what data a user can access and change in Optimize once authenticated. Authentication is a prerequisite to authorization. Read all about how to restrict the data access in the [authorization management guide](./authorization-management.md).
+Authorization controls what data a user can access and change in Optimize once authenticated. Authentication is a prerequisite to authorization. Read all about how to restrict the data access in the [authorization management guide](#).
diff --git a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/setup-event-based-processes.md b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/setup-event-based-processes.md
index fd4d98ac99f..d9f2f2243db 100644
--- a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/setup-event-based-processes.md
+++ b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/setup-event-based-processes.md
@@ -33,12 +33,12 @@ A full configuration example authorizing the user `demo` and all members of the
## Use Camunda activity event sources for event based processes
:::note Authorization to event-based processes
-When Camunda activity events are used in event-based processes, Camunda Admin Authorizations are not inherited for the event-based process. The authorization to use an event-based process is solely managed via the access management of event-based processes when [publishing an event-based process](components/userguide/additional-features/event-based-processes.md#publishing-an-event-based-process) or at any time via the [Edit Access Option](components/userguide/additional-features/event-based-processes.md#event-based-process-list---edit-access) in the event-based process List.
+When Camunda activity events are used in event-based processes, Camunda Admin Authorizations are not inherited for the event-based process. The authorization to use an event-based process is solely managed via the access management of event-based processes when [publishing an event-based process](#publishing-an-event-based-process) or at any time via the [Edit Access Option](#event-based-process-list---edit-access) in the event-based process list.
-Visit [Authorization Management - event-based process](./authorization-management.md#event-based-processes) for the reasoning behind this behavior.
+Visit [Authorization Management - event-based process](#event-based-processes) for the reasoning behind this behavior.
:::
-To publish event-based processes that include [Camunda Event Sources](components/userguide/additional-features/event-based-processes.md#camunda-events), it is required to set [`engines.${engineAlias}.eventImportEnabled`](./system-configuration-platform-7.md) to `true` for the connected engine the Camunda process originates from.
+To publish event-based processes that include [Camunda Event Sources](#camunda-events), it is required to set [`engines.${engineAlias}.eventImportEnabled`](./system-configuration-platform-7.md) to `true` for the connected engine the Camunda process originates from.
:::note Heads Up!
You need to [reimport data](./../migration-update/instructions.md#force-reimport-of-engine-data-in-optimize) from this engine to have all historic Camunda events available for event-based processes. Otherwise, only new events will be included.
diff --git a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/system-configuration.md b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/system-configuration.md
index 8430a31cf7b..facefd3b798 100644
--- a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/system-configuration.md
+++ b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/system-configuration.md
@@ -88,8 +88,8 @@ These values control mechanisms of Optimize related security, e.g. security head
| |
| security.auth.token.lifeMin | 60 | Optimize uses token-based authentication to keep track of which users are logged in. Define the lifetime of the token in minutes. |
| security.auth.token.secret | null | Optional secret used to sign authentication tokens, it's recommended to use at least a 64-character secret. If set to `null` a random secret will be generated with each startup of Optimize. |
-| security.auth.superUserIds | [ ] | List of user IDs that are granted full permission to all collections, reports, and dashboards. Note: For reports, these users are still required to be granted access to the corresponding process/decision definitions in Camunda 7 Admin. See [Authorization Management](./authorization-management.md). |
-| security.auth.superGroupIds | [ ] | List of group IDs that are granted full permission to all collections, reports, and dashboards. All members of the groups specified will have superuser permissions in Optimize. Note: For reports, these groups are still required to be granted access to the corresponding process/decision definitions in Camunda 7 Admin. See [Authorization Management](./authorization-management.md). |
+| security.auth.superUserIds | [ ] | List of user IDs that are granted full permission to all collections, reports, and dashboards. Note: For reports, these users are still required to be granted access to the corresponding process/decision definitions in Camunda 7 Admin. See [Authorization Management](#). |
+| security.auth.superGroupIds | [ ] | List of group IDs that are granted full permission to all collections, reports, and dashboards. All members of the groups specified will have superuser permissions in Optimize. Note: For reports, these groups are still required to be granted access to the corresponding process/decision definitions in Camunda 7 Admin. See [Authorization Management](#). |
| security.responseHeaders.HSTS.max-age | 63072000 | HTTP Strict Transport Security (HSTS) is a web security policy mechanism which helps to protect websites against protocol downgrade attacks and cookie hijacking. This field defines the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS. If you set the number to a negative value no HSTS header is sent. |
| security.responseHeaders.HSTS.includeSubDomains | true | HTTP Strict Transport Security (HSTS) is a web security policy mechanism which helps to protect websites against protocol downgrade attacks and cookie hijacking. If this optional parameter is specified, this rule applies to all the site’s subdomains as well. |
| security.responseHeaders.X-XSS-Protection | 1; mode=block | This header enables the cross-site scripting (XSS) filter in your browser. Can have one of the following options: - `0`: Filter disabled. - `1`: Filter enabled. If a cross-site scripting attack is detected, in order to stop the attack, the browser will sanitize the page. - `1; mode=block`: Filter enabled. Rather than sanitize the page, when a XSS attack is detected, the browser will prevent rendering of the page. - `1; report=http://[YOURDOMAIN]/your_report_URI`: Filter enabled. The browser will sanitize the page and report the violation. This is a Chromium function utilizing CSP violation reports to send details to a URI of your choice. |
diff --git a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/user-management.md b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/user-management.md
index 001faa9d4cc..7dbdedb0485 100644
--- a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/user-management.md
+++ b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/configuration/user-management.md
@@ -10,7 +10,7 @@ description: "Define which users have access to Optimize."
Providing Optimize access to a user just enables them to log in to Optimize. To be able
to create reports, the user also needs to have permission to access the engine data. To see
-how this can be done, refer to the [Authorization Management](./authorization-management.md) section.
+how this can be done, refer to the [Authorization Management](#) section.
:::
You can use the credentials from the Camunda 7 users to access Optimize. However, for the users to gain access to Optimize, they need to be authorized. This is not done in Optimize itself, but needs to be configured in the Camunda 7 and can be achieved on different levels with different options. If you do not know how authorization in Camunda works, visit the [authorization service documentation](https://docs.camunda.org/manual/latest/user-guide/process-engine/authorization-service/).
diff --git a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/install-and-start.md b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/install-and-start.md
index 9c1653d9992..95d7eb1818d 100644
--- a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/install-and-start.md
+++ b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/install-and-start.md
@@ -125,7 +125,7 @@ The most important environment variables you may have to configure are related t
A complete sample can be found within [Connect to remote Camunda 7 and Elasticsearch](#connect-to-remote-camunda-platform-7-and-elasticsearch).
-Furthermore, there are also environment variables specific to the [event-based process](components/userguide/additional-features/event-based-processes.md) feature you may make use of:
+Furthermore, there are environment variables specific to the [event-based process](#) feature that you can use:
- `OPTIMIZE_CAMUNDA_BPM_EVENT_IMPORT_ENABLED`: Determines whether this instance of Optimize should convert historical data to event data usable for event-based processes (default: `false`)
- `OPTIMIZE_EVENT_BASED_PROCESSES_USER_IDS`: An array of user ids that are authorized to administer event-based processes (default: `[]`)
diff --git a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/migration-update/2.7-to-3.0.md b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/migration-update/2.7-to-3.0.md
index aa3df00aabe..58477b9b4e1 100644
--- a/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/migration-update/2.7-to-3.0.md
+++ b/optimize_versioned_docs/version-3.12.0/self-managed/optimize-deployment/migration-update/2.7-to-3.0.md
@@ -44,7 +44,7 @@ The update should now successfully complete.
### Cannot disable import from particular engine
-In 3.0.0, it is not possible to deactivate the import of a particular Optimize instance from a particular engine (via `engines.${engineAlias}.importEnabled`). In case your environment is using that feature for e.g. a [clustering setup](./../configuration/clustering.md), we recommend you to stay on Optimize 2.7.0 until the release of Optimize 3.1.0 (Scheduled for 14/07/2020) and then update straight to Optimize 3.1.0.
+In 3.0.0, it is not possible to deactivate the import of a particular Optimize instance from a particular engine (via `engines.${engineAlias}.importEnabled`). If your environment uses that feature, e.g. for a [clustering setup](#), we recommend staying on Optimize 2.7.0 until the release of Optimize 3.1.0 (scheduled for 14/07/2020) and then updating straight to Optimize 3.1.0.
## Limitations
diff --git a/optimize_versioned_docs/version-3.13.0/apis-tools/optimize-api/event-ingestion.md b/optimize_versioned_docs/version-3.13.0/apis-tools/optimize-api/event-ingestion.md
index 22ab6d51022..0302f63704d 100644
--- a/optimize_versioned_docs/version-3.13.0/apis-tools/optimize-api/event-ingestion.md
+++ b/optimize_versioned_docs/version-3.13.0/apis-tools/optimize-api/event-ingestion.md
@@ -6,7 +6,7 @@ description: "The REST API to ingest external events into Optimize."
Camunda 7 only
-The Event Ingestion REST API ingests business process related event data from any third-party system to Camunda Optimize. These events can then be correlated into an [event-based process](components/userguide/additional-features/event-based-processes.md) in Optimize to get business insights into business processes that are not yet fully modeled nor automated using Camunda 7.
+The Event Ingestion REST API ingests business process-related event data from any third-party system into Camunda Optimize. These events can then be correlated into an [event-based process](#) in Optimize to get business insights into business processes that are not yet fully modeled or automated using Camunda 7.
## Functionality
@@ -45,18 +45,18 @@ The following request headers have to be provided with every ingest request:
[JSON Batch Format](https://github.com/cloudevents/spec/blob/v1.0/json-format.md#4-json-batch-format) compliant JSON Array of CloudEvent JSON Objects:
-| Name | Type | Constraints | Description |
-| -------------------------------------------------------------------------------- | ------------------------------------------------------------------------------ | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| [specversion](https://github.com/cloudevents/spec/blob/v1.0/spec.md#specversion) | String | REQUIRED | The version of the CloudEvents specification, which the event uses, must be `1.0`. See [CloudEvents - Version 1.0 - specversion](https://github.com/cloudevents/spec/blob/v1.0/spec.md#specversion). |
-| [id](https://github.com/cloudevents/spec/blob/v1.0/spec.md#id) | String | REQUIRED | Uniquely identifies an event, see [CloudEvents - Version 1.0 - id](https://github.com/cloudevents/spec/blob/v1.0/spec.md#id). |
-| [source](https://github.com/cloudevents/spec/blob/v1.0/spec.md#source-1) | String | REQUIRED | Identifies the context in which an event happened, see [CloudEvents - Version 1.0 - source](https://github.com/cloudevents/spec/blob/v1.0/spec.md#source-1). A use-case could be if you have conflicting types across different sources. For example, a `type:OrderProcessed` originating from both `order-service` and `shipping-service`. In this case, the `source` field provides means to clearly separate between the origins of a particular event. Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. |
-| [type](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type) | String | REQUIRED | This attribute contains a value describing the type of event related to the originating occurrence, see [CloudEvents - Version 1.0 - type](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type). Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. The value `camunda` cannot be used for this field. |
-| [time](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type) | [Timestamp](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type-system) | OPTIONAL | Timestamp of when the occurrence happened, see [CloudEvents - Version 1.0 - time](https://github.com/cloudevents/spec/blob/v1.0/spec.md#time). String encoding: [RFC 3339](https://tools.ietf.org/html/rfc3339). If not present, a default value of the time the event was received will be created. |
-| [data](https://github.com/cloudevents/spec/blob/v1.0/spec.md#event-data) | Object | OPTIONAL | Event payload data that is part of the event, see [CloudEvents - Version 1.0 - Event Data](https://github.com/cloudevents/spec/blob/v1.0/spec.md#event-data). This CloudEvents Consumer API only accepts data encoded as `application/json`, the optional attribute [CloudEvents - Version 1.0 - datacontenttype](https://github.com/cloudevents/spec/blob/v1.0/spec.md#datacontenttype) is thus not required to be provided by the producer. Furthermore, there are no schema restrictions on the `data` attribute and thus the attribute [CloudEvents - Version 1.0 - dataschema](https://github.com/cloudevents/spec/blob/v1.0/spec.md#datacontenttype) is also not required to be provided. Producer may provide any valid JSON object, but only simple properties of that object will get converted to variables of a process instances of an [event-based process](self-managed/optimize-deployment/configuration/setup-event-based-processes.md) instance later on. |
-| group | String | OPTIONAL | This is an OPTIONAL [CloudEvents Extension Context Attribute](https://github.com/cloudevents/spec/blob/v1.0/spec.md#extension-context-attributes) that is specific to this API. A group identifier that may allow to easier identify a group of related events for a user at the stage of mapping events to a process model. An example could be a domain of events that are most likely related to each other; for example, `billing`. When this field is provided, it will be used to allow adding events that belong to a group to the [mapping table](components/userguide/additional-features/event-based-processes.md#external-events). Optimize handles groups case-sensitively. Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. |
-| traceid | String | REQUIRED | This is a REQUIRED [CloudEvents Extension Context Attribute](https://github.com/cloudevents/spec/blob/v1.0/spec.md#extension-context-attributes) that is specific to this API. A traceid is a correlation key that relates multiple events to a single business transaction or process instance in BPMN terms. Events with the same traceid will get correlated into one process instance of an Event Based Process. |
+| Name | Type | Constraints | Description |
+| -------------------------------------------------------------------------------- | ------------------------------------------------------------------------------ | ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| [specversion](https://github.com/cloudevents/spec/blob/v1.0/spec.md#specversion) | String | REQUIRED | The version of the CloudEvents specification, which the event uses, must be `1.0`. See [CloudEvents - Version 1.0 - specversion](https://github.com/cloudevents/spec/blob/v1.0/spec.md#specversion). |
+| [id](https://github.com/cloudevents/spec/blob/v1.0/spec.md#id) | String | REQUIRED | Uniquely identifies an event, see [CloudEvents - Version 1.0 - id](https://github.com/cloudevents/spec/blob/v1.0/spec.md#id). |
+| [source](https://github.com/cloudevents/spec/blob/v1.0/spec.md#source-1) | String | REQUIRED | Identifies the context in which an event happened, see [CloudEvents - Version 1.0 - source](https://github.com/cloudevents/spec/blob/v1.0/spec.md#source-1). A use case could be if you have conflicting types across different sources. For example, a `type:OrderProcessed` originating from both `order-service` and `shipping-service`. In this case, the `source` field provides a means to clearly distinguish between the origins of a particular event. Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. |
+| [type](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type) | String | REQUIRED | This attribute contains a value describing the type of event related to the originating occurrence, see [CloudEvents - Version 1.0 - type](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type). Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. The value `camunda` cannot be used for this field. |
+| [time](https://github.com/cloudevents/spec/blob/v1.0/spec.md#time) | [Timestamp](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type-system) | OPTIONAL | Timestamp of when the occurrence happened, see [CloudEvents - Version 1.0 - time](https://github.com/cloudevents/spec/blob/v1.0/spec.md#time). String encoding: [RFC 3339](https://tools.ietf.org/html/rfc3339). If not present, the time the event was received is used as the default. |
+| [data](https://github.com/cloudevents/spec/blob/v1.0/spec.md#event-data) | Object | OPTIONAL | Event payload data that is part of the event, see [CloudEvents - Version 1.0 - Event Data](https://github.com/cloudevents/spec/blob/v1.0/spec.md#event-data). This CloudEvents Consumer API only accepts data encoded as `application/json`; the optional attribute [CloudEvents - Version 1.0 - datacontenttype](https://github.com/cloudevents/spec/blob/v1.0/spec.md#datacontenttype) is thus not required to be provided by the producer. Furthermore, there are no schema restrictions on the `data` attribute, and thus the attribute [CloudEvents - Version 1.0 - dataschema](https://github.com/cloudevents/spec/blob/v1.0/spec.md#dataschema) is also not required to be provided. Producers may provide any valid JSON object, but only simple properties of that object will get converted to variables of process instances of an [event-based process](#) later on. |
+| group | String | OPTIONAL | This is an OPTIONAL [CloudEvents Extension Context Attribute](https://github.com/cloudevents/spec/blob/v1.0/spec.md#extension-context-attributes) that is specific to this API. A group identifier that may make it easier to identify a group of related events for a user at the stage of mapping events to a process model. An example could be a domain of events that are most likely related to each other; for example, `billing`. When this field is provided, it will be used to allow adding events that belong to a group to the [mapping table](#external-events). Optimize handles groups case-sensitively. Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. |
+| traceid | String | REQUIRED | This is a REQUIRED [CloudEvents Extension Context Attribute](https://github.com/cloudevents/spec/blob/v1.0/spec.md#extension-context-attributes) that is specific to this API. A traceid is a correlation key that relates multiple events to a single business transaction or process instance in BPMN terms. Events with the same traceid will get correlated into one process instance of an event-based process. |
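For orientation, a sketch of a minimal single-event batch that satisfies the constraints above; all concrete values (`id`, `source`, `type`, `traceid`, `group`, and the `data` payload) are illustrative:

```
[
  {
    "specversion": "1.0",
    "id": "0fb49268-a224-4cf7-8e9e-bbc4e4325d1a",
    "source": "order-service",
    "type": "OrderProcessed",
    "time": "2020-01-01T10:00:00.000Z",
    "traceid": "order-4711",
    "group": "billing",
    "data": {
      "orderNumber": "A-123",
      "paid": true
    }
  }
]
```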
-The following is an example of a valid propertie's `data` value. Each of those properties would be available as a variable in any [event-based process](self-managed/optimize-deployment/configuration/setup-event-based-processes.md) where an event containing this as `data` was mapped:
+The following is an example of a valid `data` value. Each of its properties would be available as a variable in any [event-based process](#) where an event containing this as `data` was mapped:
```
{
@@ -85,14 +85,14 @@ This method returns no content.
Possible HTTP response status codes:
-| Code | Description |
-| ---- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| 204 | Request successful |
-| 400 | Returned if some of the properties in the request body are invalid or missing. |
-| 401 | Secret incorrect or missing in HTTP Header `Authorization`. See [Authorization](#authorization) on how to authenticate. |
-| 403 | The Event Based Process feature is not enabled. |
-| 429 | The maximum number of requests that can be serviced at any time has been reached. The response will include a `Retry-After` HTTP header specifying the recommended number of seconds before the request should be retried. See [Configuration](self-managed/optimize-deployment/configuration/event-based-processes.md#event-ingestion-rest-api-configuration) for information on how to configure this limit. |
-| 500 | Some error occurred while processing the ingested event, best check the Optimize log. |
+| Code | Description |
+| ---- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 204 | Request successful |
+| 400 | Returned if some of the properties in the request body are invalid or missing. |
+| 401 | Secret incorrect or missing in HTTP Header `Authorization`. See [Authorization](#authorization) on how to authenticate. |
+| 403 | The Event Based Process feature is not enabled. |
+| 429 | The maximum number of requests that can be serviced at any time has been reached. The response will include a `Retry-After` HTTP header specifying the recommended number of seconds before the request should be retried. See [Configuration](#event-ingestion-rest-api-configuration) for information on how to configure this limit. |
+| 500 | An error occurred while processing the ingested event; check the Optimize log for details. |
## Example
diff --git a/optimize_versioned_docs/version-3.13.0/components/userguide/additional-features/event-based-processes.md b/optimize_versioned_docs/version-3.13.0/components/userguide/additional-features/event-based-processes.md
index f8198a5fd61..3b680c17934 100644
--- a/optimize_versioned_docs/version-3.13.0/components/userguide/additional-features/event-based-processes.md
+++ b/optimize_versioned_docs/version-3.13.0/components/userguide/additional-features/event-based-processes.md
@@ -15,12 +15,12 @@ Once the event-based process feature is correctly configured, you will see a new
:::note
When Camunda activity events are used in event-based processes, Camunda admin authorizations are not inherited for the event-based process. The authorization to use an event-based process is solely managed via the access management of event-based processes when [publishing an event-based process](#publishing-an-event-based-process) or at any time via the [edit access option](#event-based-process-list---edit-access) in the event-based process list.
-Visit our [technical guide](/self-managed/optimize-deployment/configuration/setup-event-based-processes.md) on authorization management and event-based processes for the reasoning behind this behavior.
+Visit our [technical guide](/#) on authorization management and event-based processes for the reasoning behind this behavior.
:::
## Set up
-You need to set up the event-based processes feature to make use of this feature. See the [technical guide](/self-managed/optimize-deployment/configuration/setup-event-based-processes.md) for more information.
+You need to set up the event-based processes feature to make use of this feature. See the [technical guide](/#) for more information.
## Event-based process list
@@ -104,7 +104,7 @@ Defining the `group` property when ingesting the events will allow selecting eve
These are events generated from an existing Camunda BPMN process. Only processes for which Optimize has imported at least one event will be visible for selection. This means the process has to have at least one instance and Optimize has to have been configured to import data from that process.
-See the [technical guide](/self-managed/optimize-deployment/configuration/setup-event-based-processes.md#use-camunda-activity-event-sources-for-event-based-processes) for more information on how this is configured.
+See the [technical guide](/#use-camunda-activity-event-sources-for-event-based-processes) for more information on how this is configured.
To add such events, provide the following details:
diff --git a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/authorization-management.md b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/authorization-management.md
index d26c080356e..b3cc86dc2e9 100644
--- a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/authorization-management.md
+++ b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/authorization-management.md
@@ -6,7 +6,7 @@ description: "Define which data users are authorized to see."
Camunda 7 only
-User authorization management differs depending on whether the entities to manage the authorizations for are originating from adjacent systems like imported data from connected Camunda-BPM engines such as process instances, or whether the entities are fully managed by Camunda Optimize, such as [event-based processes and instances](components/userguide/additional-features/event-based-processes.md) or [collections](components/userguide/collections-dashboards-reports.md). For entities originating from adjacent systems authorizations are managed in the Camunda 7 via Camunda Admin, for the latter the authorizations are managed in Camunda Optimize.
+User authorization management differs depending on whether the entities to manage authorizations for originate from adjacent systems, such as process instance data imported from connected Camunda 7 engines, or are fully managed by Camunda Optimize, such as [event-based processes and instances](#) or [collections](components/userguide/collections-dashboards-reports.md). For entities originating from adjacent systems, authorizations are managed in Camunda 7 via Camunda Admin; for the latter, authorizations are managed in Camunda Optimize.
## Camunda 7 data authorizations
@@ -50,4 +50,4 @@ There are entities that only exist in Camunda Optimize and authorizations to the
Camunda 7 only
-Although [event-based processes](components/userguide/additional-features/event-based-processes.md) may include data originating from adjacent systems like the Camunda Engine when using [Camunda Activity Event Sources](components/userguide/additional-features/event-based-processes.md#event-sources), they do not enforce any authorizations from Camunda Admin. The reason for that is that multiple sources can get combined in a single [event-based process](components/userguide/additional-features/event-based-processes.md) that may contain conflicting authorizations. It is thus required to authorize users or groups to [event-based processes](components/userguide/additional-features/event-based-processes.md) either directly when [publishing](components/userguide/additional-features/event-based-processes.md#publishing-an-event-based-process) them or later on via the [event-based process - Edit Access](components/userguide/additional-features/event-based-processes.md#event-based-process-list---edit-access) option.
+Although [event-based processes](#) may include data originating from adjacent systems like the Camunda Engine when using [Camunda Activity Event Sources](#event-sources), they do not enforce any authorizations from Camunda Admin. The reason is that multiple sources can get combined in a single [event-based process](#) that may contain conflicting authorizations. It is thus required to authorize users or groups to [event-based processes](#) either directly when [publishing](#publishing-an-event-based-process) them or later on via the [event-based process - Edit Access](#event-based-process-list---edit-access) option.
diff --git a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/clustering.md b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/clustering.md
index 752e83f08fc..63c01b06b24 100644
--- a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/clustering.md
+++ b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/clustering.md
@@ -65,9 +65,9 @@ The importing instance has the [history cleanup enabled](./system-configuration.
In the context of event-based process import and clustering, there are two additional configuration properties to consider carefully.
-One is specific to each configured Camunda engine [`engines.${engineAlias}.eventImportEnabled`](./system-configuration-platform-7.md) and controls whether data from this engine is imported as event source data as well for [event-based processes](components/userguide/additional-features/event-based-processes.md). You need to enable this on the same cluster node for which the [`engines.${engineAlias}.importEnabled`](./system-configuration-platform-7.md) configuration flag is set to `true`.
+One is specific to each configured Camunda engine: [`engines.${engineAlias}.eventImportEnabled`](./system-configuration-platform-7.md) controls whether data from this engine is also imported as event source data for [event-based processes](#). You need to enable this on the same cluster node for which the [`engines.${engineAlias}.importEnabled`](./system-configuration-platform-7.md) configuration flag is set to `true`.
-[`eventBasedProcess.eventImport.enabled`](./setup-event-based-processes.md) controls whether the particular cluster node processes events to create event based process instances. This allows you to run a dedicated node that performs this operation, while other nodes might just feed in Camunda activity events.
+[`eventBasedProcess.eventImport.enabled`](#) controls whether the particular cluster node processes events to create event-based process instances. This allows you to run a dedicated node that performs this operation, while other nodes might just feed in Camunda activity events.
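A hedged sketch of how these flags might be split across two nodes; the engine alias `engine-1` and all values are illustrative:

```
# Node 1 - imports engine data and processes events into event-based process instances:
engines:
  engine-1:
    importEnabled: true
    eventImportEnabled: true
eventBasedProcess:
  eventImport:
    enabled: true

# Node 2 - serves users only; the corresponding flags stay disabled:
# engines.engine-1.importEnabled: false
# engines.engine-1.eventImportEnabled: false
# eventBasedProcess.eventImport.enabled: false
```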
### 2. Distributed user sessions - configure shared secret token
diff --git a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/common-problems.md b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/common-problems.md
index 8fb78a97860..ff425d78843 100644
--- a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/common-problems.md
+++ b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/common-problems.md
@@ -8,7 +8,7 @@ This section aims to provide initial help to troubleshoot common issues. This gu
## Optimize is missing some or all definitions
-It is possible that the user you are logged in as does not have the relevant authorizations to view all definitions in Optimize. Refer to the [authorization management section](./authorization-management.md#process-or-decision-definition-related-authorizations) to confirm the user has all required authorizations.
+It is possible that the user you are logged in as does not have the relevant authorizations to view all definitions in Optimize. Refer to the [authorization management section](#process-or-decision-definition-related-authorizations) to confirm the user has all required authorizations.
Another common cause for this type of problem is an issue with Optimize's data import, for example due to underlying problems with the engine data. In this case, the Optimize logs should contain more information on what is causing Optimize to not import the definition data correctly. If you are unsure on how to interpret what you find in the logs, create a support ticket.
diff --git a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/event-based-processes.md b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/event-based-processes.md
index efc9f48f2f4..c8a0b781a7e 100644
--- a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/event-based-processes.md
+++ b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/event-based-processes.md
@@ -21,7 +21,7 @@ Configuration of the Optimize event-based process feature.
Camunda 7 only
-Configuration of the Optimize [event ingestion REST API](../../../apis-tools/optimize-api/event-ingestion.md) for [event-based processes](components/userguide/additional-features/event-based-processes.md).
+Configuration of the Optimize [event ingestion REST API](#) for [event-based processes](#).
| YAML path | Default value | Description |
| ----------------------------------------------------- | ------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
diff --git a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/history-cleanup.md b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/history-cleanup.md
index 773b1824d82..62df0229dad 100644
--- a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/history-cleanup.md
+++ b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/history-cleanup.md
@@ -101,7 +101,7 @@ historyCleanup:
-The age of ingested event data is determined by the [`time`](../../../apis-tools/optimize-api/event-ingestion.md#request-body) field provided for each event at the time of ingestion.
+The age of ingested event data is determined by the [`time`](#request-body) field provided for each event at the time of ingestion.
To enable the cleanup of event data, the `historyCleanup.ingestedEventCleanup.enabled` property needs to be set to `true`.
@@ -113,7 +113,7 @@ historyCleanup:
```
:::note
-The ingested event cleanup does not cascade down to potentially existing [event-based processes](components/userguide/additional-features/event-based-processes.md) that may contain data originating from ingested events. To make sure data of ingested events is also removed from event-based processes, you need to enable the [Process Data Cleanup](#process-data-cleanup) as well.
+The ingested event cleanup does not cascade down to potentially existing [event-based processes](#) that may contain data originating from ingested events. To make sure data of ingested events is also removed from event-based processes, you need to enable the [Process Data Cleanup](#process-data-cleanup) as well.
:::
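A hedged sketch combining both switches; this assumes the process data cleanup is toggled via an `enabled` flag under the same `historyCleanup` section, so verify the exact key against your Optimize version:

```
historyCleanup:
  ingestedEventCleanup:
    enabled: true # remove ingested events once they exceed the configured ttl
  processDataCleanup:
    enabled: true # also remove process instance data derived from those events
```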
diff --git a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/multiple-engines.md b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/multiple-engines.md
index 10fb4c5074f..a2db8bb9dc0 100644
--- a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/multiple-engines.md
+++ b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/multiple-engines.md
@@ -80,7 +80,7 @@ In general, tests have shown that Optimize puts a very low strain on the engine
## Authentication and authorization in the multiple engine setup
When you configure multiple engines in Optimize, each process engine can host different users with a different set of authorizations. If a user is logging in, Optimize will try to authenticate and authorize the user on each configured engine. In case you are not familiar with how
-the authorization/authentication works for a single engine scenario, visit the [User Access Management](./user-management.md) and [Authorization Management](./authorization-management.md) documentation first.
+the authorization/authentication works in a single-engine scenario, visit the [User Access Management](./user-management.md) and [Authorization Management](#) documentation first.
To determine if a user is allowed to log in and which resources they are allowed to access within the multiple engine scenario, Optimize uses the following algorithm:
diff --git a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/security-instructions.md b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/security-instructions.md
index e8884e7055d..447c924effe 100644
--- a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/security-instructions.md
+++ b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/security-instructions.md
@@ -55,7 +55,7 @@ Authentication controls who can access Optimize. Read all about how to restrict
Camunda 7 only
-Authorization controls what data a user can access and change in Optimize once authenticated. Authentication is a prerequisite to authorization. Read all about how to restrict the data access in the [authorization management guide](./authorization-management.md).
+Authorization controls what data a user can access and change in Optimize once authenticated. Authentication is a prerequisite to authorization. Read all about how to restrict data access in the [authorization management guide](#).
diff --git a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/setup-event-based-processes.md b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/setup-event-based-processes.md
index fd4d98ac99f..d9f2f2243db 100644
--- a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/setup-event-based-processes.md
+++ b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/setup-event-based-processes.md
@@ -33,12 +33,12 @@ A full configuration example authorizing the user `demo` and all members of the
## Use Camunda activity event sources for event based processes
:::note Authorization to event-based processes
-When Camunda activity events are used in event-based processes, Camunda Admin Authorizations are not inherited for the event-based process. The authorization to use an event-based process is solely managed via the access management of event-based processes when [publishing an event-based process](components/userguide/additional-features/event-based-processes.md#publishing-an-event-based-process) or at any time via the [Edit Access Option](components/userguide/additional-features/event-based-processes.md#event-based-process-list---edit-access) in the event-based process List.
+When Camunda activity events are used in event-based processes, Camunda Admin Authorizations are not inherited for the event-based process. The authorization to use an event-based process is solely managed via the access management of event-based processes when [publishing an event-based process](#publishing-an-event-based-process) or at any time via the [Edit Access Option](#event-based-process-list---edit-access) in the event-based process list.
-Visit [Authorization Management - event-based process](./authorization-management.md#event-based-processes) for the reasoning behind this behavior.
+Visit [Authorization Management - event-based process](#event-based-processes) for the reasoning behind this behavior.
:::
-To publish event-based processes that include [Camunda Event Sources](components/userguide/additional-features/event-based-processes.md#camunda-events), it is required to set [`engines.${engineAlias}.eventImportEnabled`](./system-configuration-platform-7.md) to `true` for the connected engine the Camunda process originates from.
+To publish event-based processes that include [Camunda Event Sources](#camunda-events), it is required to set [`engines.${engineAlias}.eventImportEnabled`](./system-configuration-platform-7.md) to `true` for the connected engine the Camunda process originates from.
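As an illustration, a sketch of the relevant engine configuration; the engine alias `camunda-bpm` is a placeholder:

```
engines:
  camunda-bpm:
    # Make this engine's activity events available to event-based processes:
    eventImportEnabled: true
```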
:::note Heads Up!
You need to [reimport data](./../migration-update/instructions.md#force-reimport-of-engine-data-in-optimize) from this engine to have all historic Camunda events available for event-based processes. Otherwise, only new events will be included.
diff --git a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/system-configuration.md b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/system-configuration.md
index a1ddfdd93f1..e641c435b0e 100644
--- a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/system-configuration.md
+++ b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/system-configuration.md
@@ -88,8 +88,8 @@ These values control mechanisms of Optimize related security, e.g. security head
| |
| security.auth.token.lifeMin | 60 | Optimize uses token-based authentication to keep track of which users are logged in. Define the lifetime of the token in minutes. |
| security.auth.token.secret | null | Optional secret used to sign authentication tokens, it's recommended to use at least a 64-character secret. If set to `null` a random secret will be generated with each startup of Optimize. |
-| security.auth.superUserIds | [ ] | List of user IDs that are granted full permission to all collections, reports, and dashboards. Note: For reports, these users are still required to be granted access to the corresponding process/decision definitions in Camunda 7 Admin. See [Authorization Management](./authorization-management.md). |
-| security.auth.superGroupIds | [ ] | List of group IDs that are granted full permission to all collections, reports, and dashboards. All members of the groups specified will have superuser permissions in Optimize. Note: For reports, these groups are still required to be granted access to the corresponding process/decision definitions in Camunda 7 Admin. See [Authorization Management](./authorization-management.md). |
+| security.auth.superUserIds | [ ] | List of user IDs that are granted full permission to all collections, reports, and dashboards. Note: For reports, these users are still required to be granted access to the corresponding process/decision definitions in Camunda 7 Admin. See [Authorization Management](#). |
+| security.auth.superGroupIds | [ ] | List of group IDs that are granted full permission to all collections, reports, and dashboards. All members of the groups specified will have superuser permissions in Optimize. Note: For reports, these groups are still required to be granted access to the corresponding process/decision definitions in Camunda 7 Admin. See [Authorization Management](#). |
| security.responseHeaders.HSTS.max-age | 63072000 | HTTP Strict Transport Security (HSTS) is a web security policy mechanism which helps to protect websites against protocol downgrade attacks and cookie hijacking. This field defines the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS. If you set the number to a negative value no HSTS header is sent. |
| security.responseHeaders.HSTS.includeSubDomains | true | HTTP Strict Transport Security (HSTS) is a web security policy mechanism which helps to protect websites against protocol downgrade attacks and cookie hijacking. If this optional parameter is specified, this rule applies to all the site’s subdomains as well. |
| security.responseHeaders.X-XSS-Protection | 1; mode=block | This header enables the cross-site scripting (XSS) filter in your browser. Can have one of the following options: - `0`: Filter disabled. - `1`: Filter enabled. If a cross-site scripting attack is detected, in order to stop the attack, the browser will sanitize the page. - `1; mode=block`: Filter enabled. Rather than sanitize the page, when an XSS attack is detected, the browser will prevent rendering of the page. - `1; report=http://[YOURDOMAIN]/your_report_URI`: Filter enabled. The browser will sanitize the page and report the violation. This is a Chromium function utilizing CSP violation reports to send details to a URI of your choice. |
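For the two superuser-related keys above, a sketch with illustrative IDs:

```
security:
  auth:
    superUserIds: ['demo']
    superGroupIds: ['optimizeAdmins']
```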
diff --git a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/user-management.md b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/user-management.md
index 001faa9d4cc..7dbdedb0485 100644
--- a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/user-management.md
+++ b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/configuration/user-management.md
@@ -10,7 +10,7 @@ description: "Define which users have access to Optimize."
Providing Optimize access to a user just enables them to log in to Optimize. To be able
to create reports, the user also needs to have permission to access the engine data. To see
-how this can be done, refer to the [Authorization Management](./authorization-management.md) section.
+how this can be done, refer to the [Authorization Management](#) section.
:::
You can use the credentials from the Camunda 7 users to access Optimize. However, for the users to gain access to Optimize, they need to be authorized. This is not done in Optimize itself, but needs to be configured in the Camunda 7 and can be achieved on different levels with different options. If you do not know how authorization in Camunda works, visit the [authorization service documentation](https://docs.camunda.org/manual/latest/user-guide/process-engine/authorization-service/).
diff --git a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/install-and-start.md b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/install-and-start.md
index 9bdf27e3864..3090dcedcfa 100644
--- a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/install-and-start.md
+++ b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/install-and-start.md
@@ -125,7 +125,7 @@ The most important environment variables you may have to configure are related t
A complete sample can be found within [Connect to remote Camunda 7 and Elasticsearch](#connect-to-remote-camunda-platform-7-and-elasticsearch).
-Furthermore, there are also environment variables specific to the [event-based process](components/userguide/additional-features/event-based-processes.md) feature you may make use of:
+Furthermore, there are also environment variables specific to the [event-based process](#) feature that you may make use of:
- `OPTIMIZE_CAMUNDA_BPM_EVENT_IMPORT_ENABLED`: Determines whether this instance of Optimize should convert historical data to event data usable for event-based processes (default: `false`)
- `OPTIMIZE_EVENT_BASED_PROCESSES_USER_IDS`: An array of user ids that are authorized to administer event-based processes (default: `[]`)
diff --git a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/migration-update/2.7-to-3.0.md b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/migration-update/2.7-to-3.0.md
index aa3df00aabe..58477b9b4e1 100644
--- a/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/migration-update/2.7-to-3.0.md
+++ b/optimize_versioned_docs/version-3.13.0/self-managed/optimize-deployment/migration-update/2.7-to-3.0.md
@@ -44,7 +44,7 @@ The update should now successfully complete.
### Cannot disable import from particular engine
-In 3.0.0, it is not possible to deactivate the import of a particular Optimize instance from a particular engine (via `engines.${engineAlias}.importEnabled`). In case your environment is using that feature for e.g. a [clustering setup](./../configuration/clustering.md), we recommend you to stay on Optimize 2.7.0 until the release of Optimize 3.1.0 (Scheduled for 14/07/2020) and then update straight to Optimize 3.1.0.
+In 3.0.0, it is not possible to deactivate the import of a particular Optimize instance from a particular engine (via `engines.${engineAlias}.importEnabled`). If your environment uses that feature, e.g. for a [clustering setup](#), we recommend staying on Optimize 2.7.0 until the release of Optimize 3.1.0 (scheduled for 14/07/2020) and then updating straight to Optimize 3.1.0.
## Limitations
diff --git a/optimize_versioned_docs/version-3.14.0/apis-tools/optimize-api/event-ingestion.md b/optimize_versioned_docs/version-3.14.0/apis-tools/optimize-api/event-ingestion.md
index 6e43d73fd6c..40ce98b6ef6 100644
--- a/optimize_versioned_docs/version-3.14.0/apis-tools/optimize-api/event-ingestion.md
+++ b/optimize_versioned_docs/version-3.14.0/apis-tools/optimize-api/event-ingestion.md
@@ -6,7 +6,7 @@ description: "The REST API to ingest external events into Optimize."
Camunda 7 only
-The Event Ingestion REST API ingests business process related event data from any third-party system to Camunda Optimize. These events can then be correlated into an [event-based process](components/userguide/additional-features/event-based-processes.md) in Optimize to get business insights into business processes that are not yet fully modeled nor automated using Camunda 7.
+The Event Ingestion REST API ingests business process related event data from any third-party system to Camunda Optimize. These events can then be correlated into an [event-based process](#) in Optimize to get business insights into business processes that are not yet fully modeled nor automated using Camunda 7.
## Functionality
@@ -45,18 +45,18 @@ The following request headers have to be provided with every ingest request:
[JSON Batch Format](https://github.com/cloudevents/spec/blob/v1.0/json-format.md#4-json-batch-format) compliant JSON Array of CloudEvent JSON Objects:
-| Name | Type | Constraints | Description |
-| -------------------------------------------------------------------------------- | ------------------------------------------------------------------------------ | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| [specversion](https://github.com/cloudevents/spec/blob/v1.0/spec.md#specversion) | String | REQUIRED | The version of the CloudEvents specification, which the event uses, must be `1.0`. See [CloudEvents - Version 1.0 - specversion](https://github.com/cloudevents/spec/blob/v1.0/spec.md#specversion). |
-| [ID](https://github.com/cloudevents/spec/blob/v1.0/spec.md#id) | String | REQUIRED | Uniquely identifies an event, see [CloudEvents - Version 1.0 - ID](https://github.com/cloudevents/spec/blob/v1.0/spec.md#id). |
-| [source](https://github.com/cloudevents/spec/blob/v1.0/spec.md#source-1) | String | REQUIRED | Identifies the context in which an event happened, see [CloudEvents - Version 1.0 - source](https://github.com/cloudevents/spec/blob/v1.0/spec.md#source-1). A use-case could be if you have conflicting types across different sources. For example, a `type:OrderProcessed` originating from both `order-service` and `shipping-service`. In this case, the `source` field provides means to clearly separate between the origins of a particular event. Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. |
-| [type](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type) | String | REQUIRED | This attribute contains a value describing the type of event related to the originating occurrence, see [CloudEvents - Version 1.0 - type](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type). Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. The value `camunda` cannot be used for this field. |
-| [time](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type) | [Timestamp](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type-system) | OPTIONAL | Timestamp of when the occurrence happened, see [CloudEvents - Version 1.0 - time](https://github.com/cloudevents/spec/blob/v1.0/spec.md#time). String encoding: [RFC 3339](https://tools.ietf.org/html/rfc3339). If not present, a default value of the time the event was received will be created. |
-| [data](https://github.com/cloudevents/spec/blob/v1.0/spec.md#event-data) | Object | OPTIONAL | Event payload data that is part of the event, see [CloudEvents - Version 1.0 - Event Data](https://github.com/cloudevents/spec/blob/v1.0/spec.md#event-data). This CloudEvents Consumer API only accepts data encoded as `application/json`, the optional attribute [CloudEvents - Version 1.0 - datacontenttype](https://github.com/cloudevents/spec/blob/v1.0/spec.md#datacontenttype) is thus not required to be provided by the producer. Furthermore, there are no schema restrictions on the `data` attribute and thus the attribute [CloudEvents - Version 1.0 - dataschema](https://github.com/cloudevents/spec/blob/v1.0/spec.md#datacontenttype) is also not required to be provided. Producer may provide any valid JSON object, but only simple properties of that object will get converted to variables of a process instances of an [event-based process](self-managed/optimize-deployment/configuration/setup-event-based-processes.md) instance later on. |
-| group | String | OPTIONAL | This is an OPTIONAL [CloudEvents Extension Context Attribute](https://github.com/cloudevents/spec/blob/v1.0/spec.md#extension-context-attributes) that is specific to this API. A group identifier that may allow to easier identify a group of related events for a user at the stage of mapping events to a process model. An example could be a domain of events that are most likely related to each other; for example, `billing`. When this field is provided, it will be used to allow adding events that belong to a group to the [mapping table](components/userguide/additional-features/event-based-processes.md#external-events). Optimize handles groups case-sensitively. Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. |
-| traceid | String | REQUIRED | This is a REQUIRED [CloudEvents Extension Context Attribute](https://github.com/cloudevents/spec/blob/v1.0/spec.md#extension-context-attributes) that is specific to this API. A traceid is a correlation key that relates multiple events to a single business transaction or process instance in BPMN terms. Events with the same traceid will get correlated into one process instance of an Event Based Process. |
+| Name | Type | Constraints | Description |
+| -------------------------------------------------------------------------------- | ------------------------------------------------------------------------------ | ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| [specversion](https://github.com/cloudevents/spec/blob/v1.0/spec.md#specversion) | String | REQUIRED | The version of the CloudEvents specification, which the event uses, must be `1.0`. See [CloudEvents - Version 1.0 - specversion](https://github.com/cloudevents/spec/blob/v1.0/spec.md#specversion). |
+| [ID](https://github.com/cloudevents/spec/blob/v1.0/spec.md#id) | String | REQUIRED | Uniquely identifies an event, see [CloudEvents - Version 1.0 - ID](https://github.com/cloudevents/spec/blob/v1.0/spec.md#id). |
+| [source](https://github.com/cloudevents/spec/blob/v1.0/spec.md#source-1) | String | REQUIRED | Identifies the context in which an event happened, see [CloudEvents - Version 1.0 - source](https://github.com/cloudevents/spec/blob/v1.0/spec.md#source-1). A use case could be if you have conflicting types across different sources. For example, a `type:OrderProcessed` originating from both `order-service` and `shipping-service`. In this case, the `source` field provides a means to clearly distinguish between the origins of a particular event. Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. |
+| [type](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type) | String | REQUIRED | This attribute contains a value describing the type of event related to the originating occurrence, see [CloudEvents - Version 1.0 - type](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type). Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. The value `camunda` cannot be used for this field. |
+| [time](https://github.com/cloudevents/spec/blob/v1.0/spec.md#time) | [Timestamp](https://github.com/cloudevents/spec/blob/v1.0/spec.md#type-system) | OPTIONAL | Timestamp of when the occurrence happened, see [CloudEvents - Version 1.0 - time](https://github.com/cloudevents/spec/blob/v1.0/spec.md#time). String encoding: [RFC 3339](https://tools.ietf.org/html/rfc3339). If not present, the time the event was received is used as the default. |
+| [data](https://github.com/cloudevents/spec/blob/v1.0/spec.md#event-data) | Object | OPTIONAL | Event payload data that is part of the event, see [CloudEvents - Version 1.0 - Event Data](https://github.com/cloudevents/spec/blob/v1.0/spec.md#event-data). This CloudEvents Consumer API only accepts data encoded as `application/json`; the optional attribute [CloudEvents - Version 1.0 - datacontenttype](https://github.com/cloudevents/spec/blob/v1.0/spec.md#datacontenttype) is thus not required to be provided by the producer. Furthermore, there are no schema restrictions on the `data` attribute, and thus the attribute [CloudEvents - Version 1.0 - dataschema](https://github.com/cloudevents/spec/blob/v1.0/spec.md#dataschema) is also not required to be provided. Producers may provide any valid JSON object, but only simple properties of that object will get converted to variables of process instances of an [event-based process](#) later on. |
+| group | String | OPTIONAL | This is an OPTIONAL [CloudEvents Extension Context Attribute](https://github.com/cloudevents/spec/blob/v1.0/spec.md#extension-context-attributes) that is specific to this API. A group identifier that may make it easier to identify a group of related events for a user at the stage of mapping events to a process model. An example could be a domain of events that are most likely related to each other; for example, `billing`. When this field is provided, it will be used to allow adding events that belong to a group to the [mapping table](#external-events). Optimize handles groups case-sensitively. Note: The triplet of `type`, `source`, and `group` will be used as a unique identifier for classes of events. |
+| traceid | String | REQUIRED | This is a REQUIRED [CloudEvents Extension Context Attribute](https://github.com/cloudevents/spec/blob/v1.0/spec.md#extension-context-attributes) that is specific to this API. A traceid is a correlation key that relates multiple events to a single business transaction or process instance in BPMN terms. Events with the same traceid will get correlated into one process instance of an event-based process. |
-The following is an example of a valid propertie's `data` value. Each of those properties would be available as a variable in any [event-based process](self-managed/optimize-deployment/configuration/setup-event-based-processes.md) where an event containing this as `data` was mapped:
+The following is an example of a valid `data` value. Each of its properties would be available as a variable in any [event-based process](#) where an event containing this as `data` was mapped:
```
{
@@ -85,14 +85,14 @@ This method returns no content.
Possible HTTP response status codes:
-| Code | Description |
-| ---- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| 204 | Request successful |
-| 400 | Returned if some of the properties in the request body are invalid or missing. |
-| 401 | Secret incorrect or missing in HTTP Header `Authorization`. See [Authorization](#authorization) on how to authenticate. |
-| 403 | The Event Based Process feature is not enabled. |
-| 429 | The maximum number of requests that can be serviced at any time has been reached. The response will include a `Retry-After` HTTP header specifying the recommended number of seconds before the request should be retried. See [Configuration](self-managed/optimize-deployment/configuration/event-based-processes.md#event-ingestion-rest-api-configuration) for information on how to configure this limit. |
-| 500 | Some error occurred while processing the ingested event, best check the Optimize log. |
+| Code | Description |
+| ---- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 204 | Request successful |
+| 400 | Returned if some of the properties in the request body are invalid or missing. |
+| 401 | Secret incorrect or missing in HTTP Header `Authorization`. See [Authorization](#authorization) on how to authenticate. |
+| 403 | The Event Based Process feature is not enabled. |
+| 429 | The maximum number of requests that can be serviced at any time has been reached. The response will include a `Retry-After` HTTP header specifying the recommended number of seconds before the request should be retried. See [Configuration](#event-ingestion-rest-api-configuration) for information on how to configure this limit. |
+| 500 | An error occurred while processing the ingested event; check the Optimize log for details. |
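As a sketch of how a client might honor the 429 contract described above; the host, port, endpoint path, and token are placeholders and depend on your installation:

```
# Hypothetical client-side retry: inspect the status code and the Retry-After header.
response_headers=$(mktemp)
status=$(curl -s -o /dev/null -D "$response_headers" -w '%{http_code}' \
  -H 'Authorization: Bearer mySecret' \
  -H 'Content-Type: application/cloudevents-batch+json' \
  --data @events.json \
  'http://localhost:8090/api/ingestion/event/batch')
if [ "$status" = "429" ]; then
  wait_seconds=$(grep -i '^Retry-After:' "$response_headers" | tr -d '[:space:]' | cut -d: -f2)
  sleep "${wait_seconds:-10}" # fall back to 10s if the header is missing
fi
```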
## Example
diff --git a/optimize_versioned_docs/version-3.14.0/components/userguide/additional-features/event-based-processes.md b/optimize_versioned_docs/version-3.14.0/components/userguide/additional-features/event-based-processes.md
index f8198a5fd61..3b680c17934 100644
--- a/optimize_versioned_docs/version-3.14.0/components/userguide/additional-features/event-based-processes.md
+++ b/optimize_versioned_docs/version-3.14.0/components/userguide/additional-features/event-based-processes.md
@@ -15,12 +15,12 @@ Once the event-based process feature is correctly configured, you will see a new
:::note
When Camunda activity events are used in event-based processes, Camunda admin authorizations are not inherited for the event-based process. The authorization to use an event-based process is solely managed via the access management of event-based processes when [publishing an event-based process](#publishing-an-event-based-process) or at any time via the [edit access option](#event-based-process-list---edit-access) in the event-based process list.
-Visit our [technical guide](/self-managed/optimize-deployment/configuration/setup-event-based-processes.md) on authorization management and event-based processes for the reasoning behind this behavior.
+Visit our [technical guide](/#) on authorization management and event-based processes for the reasoning behind this behavior.
:::
## Set up
-You need to set up the event-based processes feature to make use of this feature. See the [technical guide](/self-managed/optimize-deployment/configuration/setup-event-based-processes.md) for more information.
+You need to set up the event-based processes feature to make use of this feature. See the [technical guide](/#) for more information.
## Event-based process list
@@ -104,7 +104,7 @@ Defining the `group` property when ingesting the events will allow selecting eve
These are events generated from an existing Camunda BPMN process. Only processes for which Optimize has imported at least one event will be visible for selection. This means the process has to have at least one instance and Optimize has to have been configured to import data from that process.
-See the [technical guide](/self-managed/optimize-deployment/configuration/setup-event-based-processes.md#use-camunda-activity-event-sources-for-event-based-processes) for more information on how this is configured.
+See the [technical guide](#use-camunda-activity-event-sources-for-event-based-processes) for more information on how this is configured.
To add such events, provide the following details:
diff --git a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/authorization-management.md b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/authorization-management.md
index d26c080356e..b3cc86dc2e9 100644
--- a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/authorization-management.md
+++ b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/authorization-management.md
@@ -6,7 +6,7 @@ description: "Define which data users are authorized to see."
Camunda 7 only
-User authorization management differs depending on whether the entities to manage the authorizations for are originating from adjacent systems like imported data from connected Camunda-BPM engines such as process instances, or whether the entities are fully managed by Camunda Optimize, such as [event-based processes and instances](components/userguide/additional-features/event-based-processes.md) or [collections](components/userguide/collections-dashboards-reports.md). For entities originating from adjacent systems authorizations are managed in the Camunda 7 via Camunda Admin, for the latter the authorizations are managed in Camunda Optimize.
+User authorization management differs depending on whether the entities to be authorized originate from adjacent systems (such as process instances imported from connected Camunda BPM engines) or are fully managed by Camunda Optimize, such as [event-based processes and instances](#) or [collections](components/userguide/collections-dashboards-reports.md). For entities originating from adjacent systems, authorizations are managed in Camunda 7 via Camunda Admin; for the latter, authorizations are managed in Camunda Optimize.
## Camunda 7 data authorizations
@@ -50,4 +50,4 @@ There are entities that only exist in Camunda Optimize and authorizations to the
Camunda 7 only
-Although [event-based processes](components/userguide/additional-features/event-based-processes.md) may include data originating from adjacent systems like the Camunda Engine when using [Camunda Activity Event Sources](components/userguide/additional-features/event-based-processes.md#event-sources), they do not enforce any authorizations from Camunda Admin. The reason for that is that multiple sources can get combined in a single [event-based process](components/userguide/additional-features/event-based-processes.md) that may contain conflicting authorizations. It is thus required to authorize users or groups to [event-based processes](components/userguide/additional-features/event-based-processes.md) either directly when [publishing](components/userguide/additional-features/event-based-processes.md#publishing-an-event-based-process) them or later on via the [event-based process - Edit Access](components/userguide/additional-features/event-based-processes.md#event-based-process-list---edit-access) option.
+Although [event-based processes](#) may include data originating from adjacent systems like the Camunda Engine when using [Camunda Activity Event Sources](#event-sources), they do not enforce any authorizations from Camunda Admin. The reason is that multiple sources can be combined in a single [event-based process](#) that may contain conflicting authorizations. It is thus required to authorize users or groups to [event-based processes](#) either directly when [publishing](#publishing-an-event-based-process) them or later on via the [event-based process - Edit Access](#event-based-process-list---edit-access) option.
diff --git a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/clustering.md b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/clustering.md
index 752e83f08fc..63c01b06b24 100644
--- a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/clustering.md
+++ b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/clustering.md
@@ -65,9 +65,9 @@ The importing instance has the [history cleanup enabled](./system-configuration.
In the context of event-based process import and clustering, there are two additional configuration properties to consider carefully.
-One is specific to each configured Camunda engine [`engines.${engineAlias}.eventImportEnabled`](./system-configuration-platform-7.md) and controls whether data from this engine is imported as event source data as well for [event-based processes](components/userguide/additional-features/event-based-processes.md). You need to enable this on the same cluster node for which the [`engines.${engineAlias}.importEnabled`](./system-configuration-platform-7.md) configuration flag is set to `true`.
+One is specific to each configured Camunda engine, [`engines.${engineAlias}.eventImportEnabled`](./system-configuration-platform-7.md), and controls whether data from this engine is also imported as event source data for [event-based processes](#). You need to enable this on the same cluster node for which the [`engines.${engineAlias}.importEnabled`](./system-configuration-platform-7.md) configuration flag is set to `true`.
-[`eventBasedProcess.eventImport.enabled`](./setup-event-based-processes.md) controls whether the particular cluster node processes events to create event based process instances. This allows you to run a dedicated node that performs this operation, while other nodes might just feed in Camunda activity events.
+[`eventBasedProcess.eventImport.enabled`](#) controls whether a particular cluster node processes events to create event-based process instances. This allows you to run a dedicated node that performs this operation, while other nodes might just feed in Camunda activity events.
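+
+A minimal sketch of such a split in YAML form (the engine alias `camunda-bpm` and the exact nesting are assumptions derived from the property paths above):
+
+```yaml
+# Dedicated importing node: imports engine data, feeds in Camunda activity
+# events, and processes events into event-based process instances.
+engines:
+  camunda-bpm:
+    importEnabled: true
+    eventImportEnabled: true
+eventBasedProcess:
+  eventImport:
+    enabled: true
+```
+
+On the remaining cluster nodes, these flags would be set to `false` so that only the dedicated node performs the import and event processing.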
### 2. Distributed user sessions - configure shared secret token
diff --git a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/common-problems.md b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/common-problems.md
index dbda43dfbfa..e4e60c861be 100644
--- a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/common-problems.md
+++ b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/common-problems.md
@@ -8,7 +8,7 @@ This section aims to provide initial help to troubleshoot common issues. This gu
## Optimize is missing some or all definitions
-It is possible that the user you are logged in as does not have the relevant authorizations to view all definitions in Optimize. Refer to the [authorization management section](./authorization-management.md#process-or-decision-definition-related-authorizations) to confirm the user has all required authorizations.
+It is possible that the user you are logged in as does not have the relevant authorizations to view all definitions in Optimize. Refer to the [authorization management section](#process-or-decision-definition-related-authorizations) to confirm the user has all required authorizations.
Another common cause for this type of problem are issues with Optimize's data import, for example due to underlying problems with the engine data. In this case, the Optimize logs should contain more information on what is causing Optimize to not import the definition data correctly. If you are unsure on how to interpret what you find in the logs, create a support ticket.
diff --git a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/event-based-processes.md b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/event-based-processes.md
index efc9f48f2f4..c8a0b781a7e 100644
--- a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/event-based-processes.md
+++ b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/event-based-processes.md
@@ -21,7 +21,7 @@ Configuration of the Optimize event-based process feature.
Camunda 7 only
-Configuration of the Optimize [event ingestion REST API](../../../apis-tools/optimize-api/event-ingestion.md) for [event-based processes](components/userguide/additional-features/event-based-processes.md).
+Configuration of the Optimize [event ingestion REST API](#) for [event-based processes](#).
| YAML path | Default value | Description |
| ----------------------------------------------------- | ------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
diff --git a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/history-cleanup.md b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/history-cleanup.md
index 773b1824d82..62df0229dad 100644
--- a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/history-cleanup.md
+++ b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/history-cleanup.md
@@ -101,7 +101,7 @@ historyCleanup:
-The age of ingested event data is determined by the [`time`](../../../apis-tools/optimize-api/event-ingestion.md#request-body) field provided for each event at the time of ingestion.
+The age of ingested event data is determined by the [`time`](#request-body) field provided for each event at the time of ingestion.
To enable the cleanup of event data, the `historyCleanup.ingestedEventCleanup.enabled` property needs to be set to `true`.
@@ -113,7 +113,7 @@ historyCleanup:
```
:::note
-The ingested event cleanup does not cascade down to potentially existing [event-based processes](components/userguide/additional-features/event-based-processes.md) that may contain data originating from ingested events. To make sure data of ingested events is also removed from event-based processes, you need to enable the [Process Data Cleanup](#process-data-cleanup) as well.
+The ingested event cleanup does not cascade down to potentially existing [event-based processes](#) that may contain data originating from ingested events. To make sure data of ingested events is also removed from event-based processes, you need to enable the [Process Data Cleanup](#process-data-cleanup) as well.
:::
diff --git a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/multiple-engines.md b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/multiple-engines.md
index 10fb4c5074f..a2db8bb9dc0 100644
--- a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/multiple-engines.md
+++ b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/multiple-engines.md
@@ -80,7 +80,7 @@ In general, tests have shown that Optimize puts a very low strain on the engine
## Authentication and authorization in the multiple engine setup
When you configure multiple engines in Optimize, each process engine can host different users with a different set of authorizations. If a user is logging in, Optimize will try to authenticate and authorize the user on each configured engine. In case you are not familiar with how
-the authorization/authentication works for a single engine scenario, visit the [User Access Management](./user-management.md) and [Authorization Management](./authorization-management.md) documentation first.
+the authorization/authentication works for a single engine scenario, visit the [User Access Management](./user-management.md) and [Authorization Management](#) documentation first.
To determine if a user is allowed to log in and which resources they are allowed to access within the multiple engine scenario, Optimize uses the following algorithm:
diff --git a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/security-instructions.md b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/security-instructions.md
index b0ff9980154..23d56f89f73 100644
--- a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/security-instructions.md
+++ b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/security-instructions.md
@@ -55,7 +55,7 @@ Authentication controls who can access Optimize. Read all about how to restrict
Camunda 7 only
-Authorization controls what data a user can access and change in Optimize once authenticated. Authentication is a prerequisite to authorization. Read all about how to restrict the data access in the [authorization management guide](./authorization-management.md).
+Authorization controls what data a user can access and change in Optimize once authenticated. Authentication is a prerequisite to authorization. Read all about how to restrict the data access in the [authorization management guide](#).
diff --git a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/setup-event-based-processes.md b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/setup-event-based-processes.md
index 7109c5567f5..6e6d2562da4 100644
--- a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/setup-event-based-processes.md
+++ b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/setup-event-based-processes.md
@@ -33,12 +33,12 @@ A full configuration example authorizing the user `demo` and all members of the
## Use Camunda activity event sources for event based processes
:::note Authorization to event-based processes
-When Camunda activity events are used in event-based processes, Camunda Admin Authorizations are not inherited for the event-based process. The authorization to use an event-based process is solely managed via the access management of event-based processes when [publishing an event-based process](components/userguide/additional-features/event-based-processes.md#publishing-an-event-based-process) or at any time via the [Edit Access Option](components/userguide/additional-features/event-based-processes.md#event-based-process-list---edit-access) in the event-based process List.
+When Camunda activity events are used in event-based processes, Camunda Admin authorizations are not inherited for the event-based process. The authorization to use an event-based process is solely managed via the access management of event-based processes when [publishing an event-based process](#publishing-an-event-based-process) or at any time via the [edit access option](#event-based-process-list---edit-access) in the event-based process list.
-Visit [Authorization Management - event-based process](./authorization-management.md#event-based-processes) for the reasoning behind this behavior.
+Visit [Authorization Management - event-based process](#event-based-processes) for the reasoning behind this behavior.
:::
-To publish event-based processes that include [Camunda Event Sources](components/userguide/additional-features/event-based-processes.md#camunda-events), it is required to set [`engines.${engineAlias}.eventImportEnabled`](./system-configuration-platform-7.md) to `true` for the connected engine the Camunda process originates from.
+To publish event-based processes that include [Camunda Event Sources](#camunda-events), you must set [`engines.${engineAlias}.eventImportEnabled`](./system-configuration-platform-7.md) to `true` for the connected engine the Camunda process originates from.
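+
+For illustration, a minimal sketch of this flag in the Optimize configuration (the engine alias `camunda-bpm` is a placeholder):
+
+```yaml
+engines:
+  camunda-bpm:
+    # Required to publish event-based processes using this engine's events.
+    eventImportEnabled: true
+```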
:::note Heads Up!
You need to [reimport data](./../migration-update/camunda-7/instructions.md#force-reimport-of-engine-data-in-optimize) from this engine to have all historic Camunda events available for event-based processes. Otherwise, only new events will be included.
diff --git a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/system-configuration.md b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/system-configuration.md
index 7868c25f1a7..80bd91adaf4 100644
--- a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/system-configuration.md
+++ b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/system-configuration.md
@@ -88,8 +88,8 @@ These values control mechanisms of Optimize related security, e.g. security head
| |
| security.auth.token.lifeMin | 60 | Optimize uses token-based authentication to keep track of which users are logged in. Define the lifetime of the token in minutes. |
| security.auth.token.secret | null | Optional secret used to sign authentication tokens, it's recommended to use at least a 64-character secret. If set to `null` a random secret will be generated with each startup of Optimize. |
-| security.auth.superUserIds | [ ] | List of user IDs that are granted full permission to all collections, reports, and dashboards. Note: For reports, these users are still required to be granted access to the corresponding process/decision definitions in Camunda 7 Admin. See [Authorization Management](./authorization-management.md). |
-| security.auth.superGroupIds | [ ] | List of group IDs that are granted full permission to all collections, reports, and dashboards. All members of the groups specified will have superuser permissions in Optimize. Note: For reports, these groups are still required to be granted access to the corresponding process/decision definitions in Camunda 7 Admin. See [Authorization Management](./authorization-management.md). |
+| security.auth.superUserIds | [ ] | List of user IDs that are granted full permission to all collections, reports, and dashboards. Note: For reports, these users are still required to be granted access to the corresponding process/decision definitions in Camunda 7 Admin. See [Authorization Management](#). |
+| security.auth.superGroupIds | [ ] | List of group IDs that are granted full permission to all collections, reports, and dashboards. All members of the groups specified will have superuser permissions in Optimize. Note: For reports, these groups are still required to be granted access to the corresponding process/decision definitions in Camunda 7 Admin. See [Authorization Management](#). |
| security.responseHeaders.HSTS.max-age | 63072000 | HTTP Strict Transport Security (HSTS) is a web security policy mechanism which helps to protect websites against protocol downgrade attacks and cookie hijacking. This field defines the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS. If you set the number to a negative value no HSTS header is sent. |
| security.responseHeaders.HSTS.includeSubDomains | true | HTTP Strict Transport Security (HSTS) is a web security policy mechanism which helps to protect websites against protocol downgrade attacks and cookie hijacking. If this optional parameter is specified, this rule applies to all the site’s subdomains as well. |
| security.responseHeaders.X-XSS-Protection | 1; mode=block | This header enables the cross-site scripting (XSS) filter in your browser. Can have one of the following options: `0`: Filter disabled. `1`: Filter enabled. If a cross-site scripting attack is detected, in order to stop the attack, the browser will sanitize the page. `1; mode=block`: Filter enabled. Rather than sanitize the page, when an XSS attack is detected, the browser will prevent rendering of the page. `1; report=http://[YOURDOMAIN]/your_report_URI`: Filter enabled. The browser will sanitize the page and report the violation. This is a Chromium function utilizing CSP violation reports to send details to a URI of your choice. |
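+
+For illustration, the superuser-related settings above could be sketched in YAML as follows (the user and group IDs are placeholders; the nesting follows the YAML paths in the table):
+
+```yaml
+security:
+  auth:
+    superUserIds: ['demo']
+    superGroupIds: ['optimizeAdmins']
+```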
diff --git a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/user-management.md b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/user-management.md
index 001faa9d4cc..7dbdedb0485 100644
--- a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/user-management.md
+++ b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/configuration/user-management.md
@@ -10,7 +10,7 @@ description: "Define which users have access to Optimize."
Providing Optimize access to a user just enables them to log in to Optimize. To be able
to create reports, the user also needs to have permission to access the engine data. To see
-how this can be done, refer to the [Authorization Management](./authorization-management.md) section.
+how this can be done, refer to the [Authorization Management](#) section.
:::
You can use the credentials from the Camunda 7 users to access Optimize. However, for the users to gain access to Optimize, they need to be authorized. This is not done in Optimize itself, but needs to be configured in the Camunda 7 and can be achieved on different levels with different options. If you do not know how authorization in Camunda works, visit the [authorization service documentation](https://docs.camunda.org/manual/latest/user-guide/process-engine/authorization-service/).
diff --git a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/install-and-start.md b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/install-and-start.md
index 2c99ebde0ef..f86b039a148 100644
--- a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/install-and-start.md
+++ b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/install-and-start.md
@@ -159,7 +159,7 @@ For an OpenSearch installation:
A complete sample can be found within [Connect to remote Camunda 7 and database](#connect-to-remote-camunda-7-and-database).
-Furthermore, there are also environment variables specific to the [event-based process](components/userguide/additional-features/event-based-processes.md) feature you may make use of:
+Furthermore, there are environment variables specific to the [event-based process](#) feature that you can use:
- `OPTIMIZE_CAMUNDA_BPM_EVENT_IMPORT_ENABLED`: Determines whether this instance of Optimize should convert historical data to event data usable for event-based processes (default: `false`)
- `OPTIMIZE_EVENT_BASED_PROCESSES_USER_IDS`: An array of user ids that are authorized to administer event-based processes (default: `[]`)
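+
+As a hedged example, these variables could be passed to an Optimize container like this (the image tag and the exact array syntax for the user IDs are assumptions for illustration):
+
+```bash
+docker run -d --name optimize \
+  -e OPTIMIZE_CAMUNDA_BPM_EVENT_IMPORT_ENABLED=true \
+  -e OPTIMIZE_EVENT_BASED_PROCESSES_USER_IDS='["demo"]' \
+  camunda/optimize:3.14.0
+```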
diff --git a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/migration-update/camunda-7/2.7-to-3.0.md b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/migration-update/camunda-7/2.7-to-3.0.md
index 81c0db8fe39..f3372235561 100644
--- a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/migration-update/camunda-7/2.7-to-3.0.md
+++ b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/migration-update/camunda-7/2.7-to-3.0.md
@@ -44,7 +44,7 @@ The update should now successfully complete.
### Cannot disable import from particular engine
-In 3.0.0, it is not possible to deactivate the import of a particular Optimize instance from a particular engine (via `engines.${engineAlias}.importEnabled`). In case your environment is using that feature for e.g. a [clustering setup](./../../configuration/clustering.md), we recommend you to stay on Optimize 2.7.0 until the release of Optimize 3.1.0 (Scheduled for 14/07/2020) and then update straight to Optimize 3.1.0.
+In 3.0.0, it is not possible to deactivate the import of a particular Optimize instance from a particular engine (via `engines.${engineAlias}.importEnabled`). If your environment uses that feature, for example in a [clustering setup](#), we recommend staying on Optimize 2.7.0 until the release of Optimize 3.1.0 (scheduled for 14/07/2020) and then updating straight to Optimize 3.1.0.
## Limitations
diff --git a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/migration-update/camunda-7/3.10-to-3.11.md b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/migration-update/camunda-7/3.10-to-3.11.md
index c455b994fba..fda2e0fcea9 100644
--- a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/migration-update/camunda-7/3.10-to-3.11.md
+++ b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/migration-update/camunda-7/3.10-to-3.11.md
@@ -47,7 +47,7 @@ With this release, the minimum version of Java that Optimize supports is now Jav
### Plugins
-Optimize now runs with Spring Boot 3. As a result, some plugin interfaces have been updated accordingly. More specifically, the [Engine Rest Filter Plugin](./../../plugins/engine-rest-filter-plugin.md) and the [Single-Sign-On Plugin](./../../plugins/single-sign-on.md) now import jakarta dependencies. If you use these plugins and are updating from version 3.10.3 or earlier, you will need to adjust your implementation accordingly.
+Optimize now runs with Spring Boot 3. As a result, some plugin interfaces have been updated accordingly. More specifically, the [Engine Rest Filter Plugin](#) and the [Single-Sign-On Plugin](#) now import jakarta dependencies. If you use these plugins and are updating from version 3.10.3 or earlier, you will need to adjust your implementation accordingly.
### Logging
diff --git a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/migration-update/camunda-7/3.9-to-3.10.md b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/migration-update/camunda-7/3.9-to-3.10.md
index 5b203450a62..7dd2c0cf029 100644
--- a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/migration-update/camunda-7/3.9-to-3.10.md
+++ b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/migration-update/camunda-7/3.9-to-3.10.md
@@ -40,7 +40,7 @@ From Optimize 3.10.4, the minimum version of Java that Optimize supports is now
### Plugins
-From 3.10.4, Optimize runs with Spring Boot 3. As a result, some plugin interfaces have been updated accordingly. More specifically, the [Engine Rest Filter Plugin](./../../plugins/engine-rest-filter-plugin.md) and the [Single-Sign-On Plugin](./../../plugins/single-sign-on.md) now import jakarta dependencies. If you use these plugins, you will need to adjust your implementation accordingly.
+From 3.10.4, Optimize runs with Spring Boot 3. As a result, some plugin interfaces have been updated accordingly. More specifically, the [Engine Rest Filter Plugin](#) and the [Single-Sign-On Plugin](#) now import jakarta dependencies. If you use these plugins, you will need to adjust your implementation accordingly.
### Logging
diff --git a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/migration-update/camunda-8/3.9-to-3.10.md b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/migration-update/camunda-8/3.9-to-3.10.md
index 862e36c8211..98c1b78c5fd 100644
--- a/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/migration-update/camunda-8/3.9-to-3.10.md
+++ b/optimize_versioned_docs/version-3.14.0/self-managed/optimize-deployment/migration-update/camunda-8/3.9-to-3.10.md
@@ -49,7 +49,7 @@ From Optimize 3.10.4, the minimum version of Java that Optimize supports is now
### Plugins
-From 3.10.4, Optimize runs with Spring Boot 3. As a result, some plugin interfaces have been updated accordingly. More specifically, the [Engine Rest Filter Plugin](./../../plugins/engine-rest-filter-plugin.md) and the [Single-Sign-On Plugin](./../../plugins/single-sign-on.md) now import jakarta dependencies. If you use these plugins, you will need to adjust your implementation accordingly.
+From 3.10.4, Optimize runs with Spring Boot 3. As a result, some plugin interfaces have been updated accordingly. More specifically, the [Engine Rest Filter Plugin](#) and the [Single-Sign-On Plugin](#) now import jakarta dependencies. If you use these plugins, you will need to adjust your implementation accordingly.
### Logging
diff --git a/sidebars.js b/sidebars.js
index edd1d995612..56d9d106b4e 100644
--- a/sidebars.js
+++ b/sidebars.js
@@ -471,10 +471,6 @@ module.exports = {
"Creating reports",
"components/userguide/creating-reports/"
),
- optimizeLink(
- "Combined process reports",
- "components/userguide/combined-process-reports/"
- ),
optimizeLink("Process KPIs", "components/userguide/process-KPIs/"),
{
@@ -573,33 +569,12 @@ module.exports = {
],
},
- {
- "Decision analysis": [
- optimizeLink(
- "Overview",
- "components/userguide/decision-analysis/decision-analysis-overview/"
- ),
- optimizeLink(
- "Single report",
- "components/userguide/decision-analysis/decision-report/"
- ),
- optimizeLink(
- "Filters",
- "components/userguide/decision-analysis/decision-filter/"
- ),
- ],
- },
-
{
"Additional features": [
optimizeLink(
"Alerts",
"components/userguide/additional-features/alerts/"
),
- optimizeLink(
- "Event-based processes",
- "components/userguide/additional-features/event-based-processes/"
- ),
optimizeLink(
"Export and import",
"components/userguide/additional-features/export-import/"
@@ -1217,14 +1192,6 @@ module.exports = {
"Camunda 8 system configuration",
"self-managed/optimize-deployment/configuration/system-configuration-platform-8/"
),
- optimizeLink(
- "Camunda 7 system configuration",
- "self-managed/optimize-deployment/configuration/system-configuration-platform-7/"
- ),
- optimizeLink(
- "Event-based process system configuration",
- "self-managed/optimize-deployment/configuration/event-based-process-configuration/"
- ),
],
},
@@ -1232,10 +1199,6 @@ module.exports = {
"Logging",
"self-managed/optimize-deployment/configuration/logging/"
),
- optimizeLink(
- "Optimize license key",
- "self-managed/optimize-deployment/configuration/optimize-license/"
- ),
optimizeLink(
"Security instructions",
"self-managed/optimize-deployment/configuration/security-instructions/"
@@ -1256,34 +1219,10 @@ module.exports = {
"Object and list variable support",
"self-managed/optimize-deployment/configuration/object-variables/"
),
- optimizeLink(
- "Clustering",
- "self-managed/optimize-deployment/configuration/clustering/"
- ),
- optimizeLink(
- "Webhooks",
- "self-managed/optimize-deployment/configuration/webhooks/"
- ),
- optimizeLink(
- "Authorization management",
- "self-managed/optimize-deployment/configuration/authorization-management/"
- ),
- optimizeLink(
- "User access management",
- "self-managed/optimize-deployment/configuration/user-management/"
- ),
optimizeLink(
"Multi-tenancy",
"self-managed/optimize-deployment/configuration/multi-tenancy/"
),
- optimizeLink(
- "Multiple process engines",
- "self-managed/optimize-deployment/configuration/multiple-engines/"
- ),
- optimizeLink(
- "Event-based processes",
- "self-managed/optimize-deployment/configuration/setup-event-based-processes/"
- ),
optimizeLink(
"Common problems",
"self-managed/optimize-deployment/configuration/common-problems/"
@@ -1478,10 +1417,6 @@ module.exports = {
"Engine data deletion",
"self-managed/optimize-deployment/advanced-features/engine-data-deletion/"
),
- optimizeLink(
- "Data import",
- "self-managed/optimize-deployment/advanced-features/import-guide/"
- ),
],
},
],