Merge pull request #105 from IBM/publish-es-11.5.0
Publishing ES 11.5.0 docs
meechiekitten authored Sep 4, 2024
2 parents eb71f0e + 4b20fa0 commit 4620401
Showing 143 changed files with 15,569 additions and 69 deletions.
27 changes: 25 additions & 2 deletions _config.yml
@@ -270,11 +270,17 @@ collections:
"es_11.4":
output: true
version: "11.4"
permalink: /es/:collection/:categories/:slug/
order: 15
product: es
"es_11.5":
output: true
version: "11.5"
tag: Latest
latest: true
permalink: /es/:categories/:slug/
order: 15
product: es
order: 16
product: es
"eem_11.0":
version: "11.0"
output: true
@@ -664,6 +670,23 @@ defaults:
author_profile: false
share: false
comment: false
mastheadNavItem: Event Streams
versioned: true
sidebar:
nav: "114docs"
# _11.5
- scope:
path: ""
type: "es_11.5"
values:
collection: "es_11.5"
version: "11.5"
product: es
layout: single
read_time: false
author_profile: false
share: false
comment: false
latest: true
mastheadNavItem: Event Streams
versioned: true
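
Illustrative sketch (not part of the diff): assuming standard Jekyll permalink placeholders and a hypothetical page with category `installing` and slug `upgrading`, the two permalink templates above would resolve as follows, giving the new latest release the unversioned `/es/...` URLs while 11.4 keeps its collection name in the path.

```yaml
# Sketch only; the page category "installing" and slug "upgrading" are hypothetical.
"es_11.5":                                        # latest release
  permalink: /es/:categories/:slug/               # -> /es/installing/upgrading/
"es_11.4":                                        # older release
  permalink: /es/:collection/:categories/:slug/   # -> /es/es_11.4/installing/upgrading/
```

The `sidebar.nav: "114docs"` value added to the 11.4 defaults scope presumably pairs with the `114docs` navigation list that `_data/navigation.yml` gains later in this commit, while `latestdocs` now describes 11.5.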
2 changes: 1 addition & 1 deletion _connectors/kc-sink-ibm-mq/index.md
@@ -14,4 +14,4 @@ categories:
- Messaging
---

{{site.data.reuse.kafka-connect-mq-sink}} supports connecting to IBM MQ in both bindings and client mode, and offers both exactly-once and at-least-once delivery of data from IBM MQ to Apache Kafka.
{{site.data.reuse.kafka-connect-mq-sink}} supports connecting to IBM MQ in both bindings and client mode, and offers both exactly-once and at-least-once delivery of data from IBM MQ to Apache Kafka.
1 change: 0 additions & 1 deletion _connectors/kc-sink-ibm-mq/installation.md
@@ -4,7 +4,6 @@ forID: kc-sink-mq
categories: [sink]
---


{{site.data.reuse.es_name}} provides additional help for setting up a Kafka Connect environment and starting the MQ sink connector. Log in to the {{site.data.reuse.es_name}} UI, click the **Toolbox** tab and scroll to the **Connectors** section.

You can download the MQ sink connector from GitHub:
2 changes: 1 addition & 1 deletion _connectors/kc-source-ibm-mq/index.md
@@ -14,4 +14,4 @@ categories:
- Messaging
---

{{site.data.reuse.kafka-connect-mq-source}} supports connecting to IBM MQ in both bindings and client mode, and offers both exactly-once and at-least-once delivery of data from IBM MQ to Apache Kafka.
{{site.data.reuse.kafka-connect-mq-source}} supports connecting to IBM MQ in both bindings and client mode, and offers both exactly-once and at-least-once delivery of data from IBM MQ to Apache Kafka.
237 changes: 236 additions & 1 deletion _data/navigation.yml
@@ -1033,7 +1033,7 @@ latest_epdocs:
- title: "Event Processing fails with `Unable to find main key backup` error"
url: /troubleshooting/no-main-key-backup/

# Event Streams 11.4 docs
# Event Streams 11.5 docs

latestdocs:
- title: About
@@ -1051,6 +1051,241 @@ latestdocs:
- title: "Partition leadership"
url: /about/partition-leadership/

- title: Installing and upgrading
children:
- title: "Trying out Event Streams"
url: /installing/trying-out/
- title: "Prerequisites"
url: /installing/prerequisites/
- title: "Planning your installation"
url: /installing/planning/
- title: "Considerations for multizone deployments"
url: /installing/multizone-considerations/
- title: "Performance and capacity planning"
url: /installing/capacity-planning/
- title: "Preparing for multizone clusters"
url: /installing/preparing-multizone/
- title: "Installing on OpenShift Container Platform"
url: /installing/installing/
- title: "Installing in an offline environment"
url: /installing/offline/
- title: "Installing on other Kubernetes platforms"
url: /installing/installing-on-kubernetes/
- title: "Configuring"
url: /installing/configuring/
- title: "Integrating with Event Endpoint Management"
url: /installing/integrating-eem/
- title: "Post-installation tasks"
url: /installing/post-installation/
- title: "Backing up and restoring on OpenShift"
url: /installing/backup-restore/
- title: "Configuring disaster recovery topologies"
url: /installing/disaster-recovery/
- title: "Migrating from open-source Kafka"
url: /installing/moving-from-oss-kafka/
- title: "Uninstalling"
url: /installing/uninstalling/
- title: "Upgrading"
url: /installing/upgrading/
- title: Getting started
children:
- title: "Logging in"
url: /getting-started/logging-in/
- title: "Creating a Kafka topic"
url: /getting-started/creating-topics/
- title: "Managing Kafka topics"
url: /getting-started/managing-topics/
- title: "Running a starter application"
url: /getting-started/generating-starter-app/
- title: "Creating and testing message loads"
url: /getting-started/testing-loads/
- title: "Creating Kafka client applications"
url: /getting-started/client/
- title: "Connecting clients"
url: /getting-started/connecting/
- title: "Using Apache Kafka console tools"
url: /getting-started/using-kafka-console-tools/
- title: "Sharing topic with Event Endpoint Management"
url: /getting-started/sharing-topic/
- title: Schemas
children:
- title: "Schemas overview"
url: /schemas/overview/
- title: "Creating and adding schemas"
url: /schemas/creating/
- title: "Managing schema lifecycle"
url: /schemas/manage-lifecycle/
- title: "Setting Java applications to use schemas"
url: /schemas/setting-java-apps/
- title: "Setting non-Java applications to use schemas"
url: /schemas/setting-nonjava-apps/
- title: "Migrating to Event Streams schema registry"
url: /schemas/migrating/
- title: "Using schemas with the REST producer API"
url: /schemas/using-with-rest-producer/
- title: "Setting Java applications to use schemas with the Apicurio Registry serdes library"
url: /schemas/setting-java-apps-apicurio-serdes/
- title: Security
children:
- title: "Managing access"
url: /security/managing-access/
- title: "Encrypting your data"
url: /security/encrypting-data/
- title: "Configuring secure JMX connections"
url: /security/secure-jmx-connections/
- title: "Renewing certificates"
url: /security/renewing-certificates/
- title: "Verifying container image signatures"
url: /security/verifying-signature/
- title: "Network policies"
url: /security/network-policies/
- title: "Considerations for GDPR"
url: /security/gdpr-considerations/
- title: "Enabling FIPS"
url: /security/fips/
- title: Topic mirroring
children:
- title: "About topic mirroring"
url: /mirroring/about/
- title: "About MirrorMaker"
url: /mirroring/mirrormaker/
- title: "Switching clusters"
url: /mirroring/failover/
- title: Geo-replication
children:
- title: "About geo-replication"
url: /georeplication/about/
- title: "Planning for geo-replication"
url: /georeplication/planning/
- title: "Setting up geo-replication"
url: /georeplication/setting-up/
- title: "Monitoring and managing geo-replication"
url: /georeplication/health/
- title: Connecting external systems
children:
- title: "Event Streams producer API"
url: /connecting/rest-api/
- title: "Exposing services for external access"
url: /connecting/expose-service/
- title: "Kafka Bridge"
url: /connecting/kafka-bridge/
- title: "Kafka Connect and connectors"
url: /connecting/connectors/
- title: "Setting up and running connectors"
url: /connecting/setting-up-connectors/
- title: "Connecting to IBM MQ"
url: /connecting/mq/
- title: "Running the MQ source connector"
url: /connecting/mq/source/
- title: "Running the MQ sink connector"
url: /connecting/mq/sink/
- title: "Running connectors on IBM z/OS"
url: /connecting/mq/zos/
- title: Administering
children:
- title: "Monitoring deployment health"
url: /administering/deployment-health/
- title: "Monitoring Kafka cluster health"
url: /administering/cluster-health/
- title: "Monitoring topic health"
url: /administering/topic-health/
- title: "Monitoring consumer group lag"
url: /administering/consumer-lag/
- title: "Monitoring applications with distributed tracing"
url: /administering/tracing/
- title: "Auditing Kafka"
url: /administering/auditing-kafka/
- title: "Monitoring with external tools"
url: /administering/external-monitoring/
- title: "Modifying installation settings"
url: /administering/modifying-installation/
- title: "Optimizing Kafka cluster with Cruise Control"
url: /administering/cruise-control/
- title: "Scaling"
url: /administering/scaling/
- title: "Setting client quotas"
url: /administering/quotas/
- title: "Managing a multizone setup"
url: /administering/managing-multizone/
- title: "Stopping and starting Event Streams"
url: /administering/stopping-starting/
- title: Reference
children:
- title: "API reference for the Event Streams CRDs"
url: /reference/api-reference-es/
- title: "API reference for the geo-replicator CRDs"
url: /reference/api-reference-esgr/
- title: REST Producer API
url: /../api/
- title: Schema registry API
url: /../schema-api/
- title: Troubleshooting
children:
- title: "Troubleshooting overview"
url: /troubleshooting/intro/
- title: "Gathering logs"
url: /troubleshooting/gathering-logs/
- title: "Resources not available"
url: /troubleshooting/resources-not-available/
- title: "Event Streams installation reports Blocked status"
url: /troubleshooting/es-install-fails/
- title: "Error when creating multiple geo-replicators"
url: /troubleshooting/georeplication-error/
- title: "TimeoutException when using standard Kafka producer"
url: /troubleshooting/kafka-producer-error/
- title: "Standard Kafka consumer hangs"
url: /troubleshooting/kafka-consumer-hangs/
- title: "Command 'cloudctl es' fails with 'not a registered command' error"
url: /troubleshooting/cloudctl-es-not-registered/
- title: "Command 'cloudctl es' produces 'FAILED' message"
url: /troubleshooting/cloudctl-es-fails/
- title: "UI does not open when using Chrome on Ubuntu"
url: /troubleshooting/chrome-ubuntu-issue/
- title: "Unable to remove destination cluster"
url: /troubleshooting/error-removing-destination/
- title: "403 error when signing in"
url: /troubleshooting/ui-403-error/
- title: "Event Streams not installing due to Security Context Constraint (SCC) issues"
url: /troubleshooting/default-scc-issues/
- title: "Client receives AuthorizationException when communicating with brokers"
url: /troubleshooting/authorization-failed-exceptions/
- title: "Client receives 'Failed to load SSL keystore' message when communicating with brokers"
url: /troubleshooting/pkcs12-keystore-java-client/
- title: "OpenShift upgrade: fixing scheduling on node and node degraded errors"
url: /troubleshooting/ocp-upgrade-fail/
- title: "Apicurio authentication errors due to User Operator watchedNamespace"
url: /troubleshooting/watched-namespace/
- title: "Clients using schemas fail with Apicurio 2.5.0 or later"
url: /troubleshooting/upgrade-apicurio/
- title: "KafkaRebalance custom resource remains in PendingProposal state"
url: /troubleshooting/kafkarebalance-pendingproposal/
- title: "Event Streams not installing due to Pod Security Policies (PSP) issues"
url: /troubleshooting/default-psp-issues/
- title: "Geo-replicator fails when replicating a topic"
url: /troubleshooting/georep-fails/
- title: "Errors in IBM MQ connectors"
url: /troubleshooting/mq-connector-fails/
- title: "Upgrading to 11.4.x fails due to `inter.broker.protocol.version`"
url: /troubleshooting/kafka-protocol-error/

# Sidebar navigation for Event Streams 11.4 docs

114docs:
- title: About
children:
- title: "Introduction"
url: /about/overview/
- title: "What's new"
url: /about/whats-new/
- title: "Key concepts"
url: /about/key-concepts/
- title: "Producing messages"
url: /about/producing-messages/
- title: "Consuming messages"
url: /about/consuming-messages/
- title: "Partition leadership"
url: /about/partition-leadership/

- title: Installing and upgrading
children:
- title: "Trying out Event Streams"
3 changes: 2 additions & 1 deletion _data/reuse.yml
@@ -28,8 +28,9 @@ egw: "Event Gateway"
# {{site.data.reuse.eem_manager}}
eem_manager: "Event Manager"


# {{site.data.reuse.eem_ubp_license_id}}
eem_ubp_license_id: "L-GGQD-G7AYJD"
eem_ubp_license_id: "L-FTGN-WUM5C5"

# {{site.data.reuse.egw_short}}
egw_short: "Event Gateway"
4 changes: 3 additions & 1 deletion _eem_11.3/02-installing/12-upgrading.md
@@ -80,7 +80,9 @@ Before you can upgrade to the latest version, make the catalog source for the ve
- Specific versions: If you used the CASE bundle to install catalog source for a specific previous version, you must download and use a new CASE bundle for the version you want to upgrade to.
- If you used the CASE bundle for an online install, [apply the new catalog source](../installing/#adding-specific-versions) to update the `CatalogSource`.
- If you used the CASE bundle for an offline install that uses a private registry, follow the instructions in [installing offline](../offline/#download-the-case-bundle) to remirror images and update the `CatalogSource`.
- In both cases, wait for the `status.installedCSV` field in the `Subscription` to update. It should eventually reflect the latest version available in the new `CatalogSource` image for the currently selected channel in the `Subscription`. In the {{site.data.reuse.openshift_short}} web console, the current version of the operator is shown under `Installed Operators`. Using the CLI, when you check the status of the `Subscription` custom resource, the `status.installedCSV` field shows the current operator version.
- In both cases, wait for the `status.installedCSV` field in the `Subscription` to update. It eventually reflects the latest version available in the new `CatalogSource` image for the currently selected channel in the `Subscription`:
- In the {{site.data.reuse.openshift_short}} web console, the current version of the operator is displayed under `Installed Operators`.
- If you are using the CLI, check the status of the `Subscription` custom resource; the `status.installedCSV` field shows the current operator version (see the sketch below).
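
Illustrative sketch (not part of the diff): a minimal `Subscription` excerpt, with placeholder names, showing where `status.installedCSV` surfaces the operator version. A real resource can be inspected with `oc get subscription <name> -n <namespace> -o yaml`.

```yaml
# Hypothetical excerpt; all names and versions below are placeholders.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ibm-eem-operator                   # placeholder subscription name
  namespace: my-eem-namespace              # placeholder namespace
spec:
  channel: v11.3                           # the currently selected channel
  source: my-catalog-source                # the CatalogSource updated in the previous step
  sourceNamespace: openshift-marketplace
status:
  installedCSV: ibm-eem-operator.v11.3.1   # updates to the latest version in the new catalog image
```

Until `status.installedCSV` changes, the operator has not yet picked up the new catalog content.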

The change to a new Channel, if needed, would be a later step.

8 changes: 5 additions & 3 deletions _ep_1.2/02-installing/12-upgrading.md
@@ -89,7 +89,9 @@ Before you can upgrade to the latest version, the catalog source for the new ver
- If you used the CASE bundle for an online install, [apply the new catalog source](../installing/#adding-specific-versions) to update the `CatalogSource`.
- If you used the CASE bundle for an offline install that uses a private registry, follow the instructions in [installing offline](../offline/#download-the-case-bundle) to remirror images and update the `CatalogSource`.

- In both cases, wait for the `status.installedCSV` field in the `Subscription` to update. It should eventually reflect the latest version available in the new `CatalogSource` image for the currently selected channel in the `Subscription`. In the {{site.data.reuse.openshift_short}} web console, the current version of the operator is shown under `Installed Operators`. Using the CLI, when you check the status of the `Subscription` custom resource, the `status.installedCSV` field shows the current operator version.
- In both cases, wait for the `status.installedCSV` field in the `Subscription` to update. It eventually reflects the latest version available in the new `CatalogSource` image for the currently selected channel in the `Subscription`:
- In the {{site.data.reuse.openshift_short}} web console, the current version of the operator is displayed under `Installed Operators`.
- If you are using the CLI, check the status of the `Subscription` custom resource; the `status.installedCSV` field shows the current operator version.



@@ -304,9 +306,9 @@ After the upgrade, verify the status of the {{site.data.reuse.ep_name}} and Flin

After upgrading your operators to {{site.data.reuse.ep_name}} 1.2.x from version 1.1.x, you must update [the license ID]({{ '/support/licensing/#ibm-event-automation-license-information' | relative_url }}) value in the `spec.license.license` field of your custom resources, depending on the program that you purchased.

You can make this change via the console or the CLI/API, and it is required for both OLM and Helm installations.
You can make this change by using the web console or the CLI, and it is required for both OLM and Helm installations.

The components will be in an error state until you do this, and will not run the new version until the new license ID is entered. After you change the license IDs, check the custom resource status to confirm they are successfully running the new version.
The components will show errors and will not work with the new version until you update the license ID. After you change the license IDs, check the custom resource status to confirm they are successfully running the new version.
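
Illustrative sketch (not part of the diff): the field to edit in a custom resource, with a placeholder name and license ID; the apiVersion and kind are assumptions for an {{site.data.reuse.ep_name}} instance, and the real ID comes from the licensing page linked above.

```yaml
# Hypothetical excerpt; apiVersion, kind, name, and the license ID are placeholders/assumptions.
apiVersion: events.ibm.com/v1beta1
kind: EventProcessing
metadata:
  name: my-event-processing
spec:
  license:
    license: L-XXXX-XXXXXX   # replace with the license ID for the program you purchased
```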

### Restart your flows

11 changes: 9 additions & 2 deletions _ep_1.2/03-getting-started/02-canvas.md
@@ -119,10 +119,17 @@ The clothing company selected **Include historical** to run the filter on the hi

## Flow statuses

A flow status indicates the current state of the flow. A flow can be in one of the following states:
A flow status indicates the current state of the flow. To view the status of a flow, navigate to the **Flows** section on the homepage of the {{site.data.reuse.ep_name}} UI. Each flow tile displays the current status of the flow.

![Flow tiles displaying various statuses]({{ 'images' | relative_url }}/flowcard-status.png "Image of flow tiles displaying various statuses")


A flow can be in one of the following states:

- **Draft:** Indicates that the flow includes one or more nodes that need to be configured. The flow cannot be run.
- **Valid:** Indicates that all nodes in the flow are configured and valid. The flow is ready to run.
- **Invalid:** Indicates that the nodes in the flow are configured but have a validation error, or a required node is missing. The flow cannot be run.
- **Running:** Indicates that the flow is configured, validated, running, and generating output.
- **Error:** Indicates that an error occurred during the runtime of a previously running flow.
- **Error:** Indicates that an error occurred during the runtime of a previously running flow.

**Tip:** You can click the icon next to **Invalid** and **Error** states to find more information about the error.