diff --git a/docs/products/cassandra/howto/use-dsbulk-with-cassandra.md b/docs/products/cassandra/howto/use-dsbulk-with-cassandra.md
index 460be5940..e6d401218 100644
--- a/docs/products/cassandra/howto/use-dsbulk-with-cassandra.md
+++ b/docs/products/cassandra/howto/use-dsbulk-with-cassandra.md
@@ -2,7 +2,7 @@
 title: Use DSBULK to load, unload and count data on Aiven service for Cassandra®
 ---
 
-[DSBulk](https://docs.datastax.com/en/dsbulk/docs/reference/dsbulkCmd) is a highly configurable tool used to load, unload and count data in Apache Cassandra®. It has configurable consistency levels for loading and unloading and offers the most accurate way to count records in Cassandra.
+[DSBulk](https://docs.datastax.com/en/dsbulk/reference/dsbulk-cmd.html) is a highly configurable tool used to load, unload and count data in Apache Cassandra®. It has configurable consistency levels for loading and unloading and offers the most accurate way to count records in Cassandra.
 
 ## Prerequisites
 
@@ -13,7 +13,7 @@ repository](https://github.com/datastax/dsbulk).
 :::tip
 You can read more about the DSBulk different use cases and manual
 pages in the [dedicated
-documentation](https://docs.datastax.com/en/dsbulk/docs/getting-started/getting-started)
+documentation](https://docs.datastax.com/en/dsbulk/installing/install.html)
 :::
 
 ## Variables
@@ -52,7 +52,7 @@ truststore.
     -trustcacerts \
     -alias CARoot \
     -file cassandra-certificate.pem \
-    -keystore client.truststore \ 
+    -keystore client.truststore \
     -storepass KEYSTORE_PASSWORD
 ```
 
@@ -141,12 +141,12 @@ Once the configuration file is created, you can run the `dsbulk`.
 To extract the data from a table, you can use the following command:
 
 ```bash
-./dsbulk unload \ 
-  -f /full/path/to/conf.file \ 
-  -k baselines \ 
-  -t keyvalue \ 
-  -h HOST \ 
-  -port PORT \ 
+./dsbulk unload \
+  -f /full/path/to/conf.file \
+  -k baselines \
+  -t keyvalue \
+  -h HOST \
+  -port PORT \
   -url /directory_for_output
 ```
 
@@ -159,12 +159,12 @@ To load data into a Cassandra table, the command line is very similar to
 the previous command:
 
 ```bash
-./dsbulk load \ 
-  -f /full/path/to/conf.file \ 
-  -k baselines \ 
-  -t keyvalue \ 
-  -h HOST \ 
-  -port PORT \ 
+./dsbulk load \
+  -f /full/path/to/conf.file \
+  -k baselines \
+  -t keyvalue \
+  -h HOST \
+  -port PORT \
   -url data.csv
 ```
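For review context: the page above shows `load` and `unload`, while its title also promises counting. A minimal sketch of that third mode, assuming the same configuration file, keyspace, and table as the examples in the hunks above (`HOST` and `PORT` remain placeholders):

```bash
# Count the rows in baselines.keyvalue using the same connection settings
# as the unload/load examples; all values are placeholders from the page.
./dsbulk count \
  -f /full/path/to/conf.file \
  -k baselines \
  -t keyvalue \
  -h HOST \
  -port PORT
```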
diff --git a/docs/products/cassandra/howto/zdm-proxy.md b/docs/products/cassandra/howto/zdm-proxy.md
index 33d159843..e59231e85 100644
--- a/docs/products/cassandra/howto/zdm-proxy.md
+++ b/docs/products/cassandra/howto/zdm-proxy.md
@@ -118,7 +118,7 @@ details and, if your source or target require authentication, specify
 target username and password.
 
 Check more details on using the credentials in [Client application
-credentials](https://docs.datastax.com/en/astra-serverless/docs/migrate/connect-clients-to-proxy#_client_application_credentials).
+credentials](https://docs.datastax.com/en/data-migration/introduction.html).
 
 The port that ZDM Proxy uses is 14002, which can be overridden.
@@ -262,7 +262,7 @@
 
 - [zdm-proxy GitHub](https://github.com/datastax/zdm-proxy)
 - [Introduction to Zero Downtime
-  Migration](https://docs.datastax.com/en/astra-serverless/docs/migrate/introduction)
+  Migration](https://docs.datastax.com/en/data-migration/introduction.html)
 - [ZDM Proxy releases](https://github.com/datastax/zdm-proxy/releases)
 - [Client application
-  credentials](https://docs.datastax.com/en/astra-serverless/docs/migrate/connect-clients-to-proxy#_client_application_credentials)
+  credentials](https://docs.datastax.com/en/data-migration/connect-clients-to-target.html)
diff --git a/docs/products/kafka/kafka-connect/howto/debezium-source-connector-pg-node-replacement.md b/docs/products/kafka/kafka-connect/howto/debezium-source-connector-pg-node-replacement.md
index 22dc21f7b..6d85f26ab 100644
--- a/docs/products/kafka/kafka-connect/howto/debezium-source-connector-pg-node-replacement.md
+++ b/docs/products/kafka/kafka-connect/howto/debezium-source-connector-pg-node-replacement.md
@@ -2,10 +2,7 @@
 title: Handle PostgreSQL® node replacements when using Debezium for change data capture
 ---
 
-When running a
-[Debezium source connector for PostgreSQL®](debezium-source-connector-pg) to capture changes from an Aiven for PostgreSQL® service,
-there are some activities on the database side that can impact the
-correct functionality of the connector.
+When running a [Debezium source connector for PostgreSQL®](debezium-source-connector-pg) to capture changes from an Aiven for PostgreSQL® service, there are some activities on the database side that can impact the correct functionality of the connector.
 
 As example, when the source PostgreSQL service undergoes any operation
 which replaces the nodes (such as maintenance, a plan or cloud region
@@ -42,7 +39,7 @@ of the connector tasks to resume operations again.
 
 A restart can be performed manually either through the [Aiven
 Console](https://console.aiven.io/), in under the `Connectors` tab
 console or via the [Apache Kafka® Connect REST
-API](https://docs.confluent.io/platform/current/connect/references/restapi#rest-api-task-restart).
+API](https://docs.confluent.io/platform/current/connect/references/restapi.html).
 You can get the service URI from the [Aiven
 Console](https://console.aiven.io/), in the service detail page.
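On restarting failed tasks through the Kafka Connect REST API, a minimal sketch, assuming a connector named `pg-debezium-source`, task `0`, and the service URI (with credentials) exported as `KAFKA_CONNECT_URI` — all placeholders:

```bash
# Check the connector and its tasks first; failed tasks show state FAILED.
curl -s "$KAFKA_CONNECT_URI/connectors/pg-debezium-source/status"

# Restart a single failed task; the API replies 204 No Content on success.
curl -s -X POST "$KAFKA_CONNECT_URI/connectors/pg-debezium-source/tasks/0/restart"
```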
diff --git a/docs/products/kafka/kafka-connect/howto/jdbc-source-connector-pg.md b/docs/products/kafka/kafka-connect/howto/jdbc-source-connector-pg.md
index c878c275c..ce1b97ddd 100644
--- a/docs/products/kafka/kafka-connect/howto/jdbc-source-connector-pg.md
+++ b/docs/products/kafka/kafka-connect/howto/jdbc-source-connector-pg.md
@@ -33,7 +33,7 @@ source PostgreSQL database upfront:
 - `PG_PASSWORD`: The database password for the `PG_USER`
 - `PG_DATABASE_NAME`: The database name
 - `SSL_MODE`: The [SSL
-  mode](https://www.postgresql.org/docs/current/libpq-ssl)
+  mode](https://www.postgresql.org/docs/current/libpq-ssl.html)
 - `PG_TABLES`: The list of database tables to be included in Apache
   Kafka; the list must be in the form of
   `schema_name1.table_name1,schema_name2.table_name2`
diff --git a/docs/products/kafka/kafka-connect/howto/mqtt-sink-connector.md b/docs/products/kafka/kafka-connect/howto/mqtt-sink-connector.md
index bd93ed657..acd6d4939 100644
--- a/docs/products/kafka/kafka-connect/howto/mqtt-sink-connector.md
+++ b/docs/products/kafka/kafka-connect/howto/mqtt-sink-connector.md
@@ -3,18 +3,12 @@ title: Create an MQTT sink connector
 ---
 
 The [MQTT sink
-connector](https://docs.lenses.io/5.0/integrations/connectors/stream-reactor/sinks/mqttsinkconnector/)
+connector](https://docs.lenses.io/connectors/kafka-connectors/sinks/mqtt)
 copies messages from an Apache Kafka® topic to an MQTT queue.
 
-:::note
-See the full set of available parameters and configuration
-options in the [connector's
-documentation](https://docs.lenses.io/5.0/integrations/connectors/stream-reactor/sinks/mqttsinkconnector/).
-:::
-
 :::tip
-The connector can be used to sink messages to RabbitMQ® where [RabbitMQ
-MQTT plugin](https://www.rabbitmq.com/mqtt.html) is enabled.
+The connector can be used to sink messages to RabbitMQ® where
+[RabbitMQ MQTT plugin](https://www.rabbitmq.com/mqtt.html) is enabled.
 :::
 
 ## Prerequisites {#connect_mqtt_rbmq_sink_prereq}
@@ -97,7 +91,7 @@ The configuration file contains the following entries:
   this example JSON converter is used.
 
 See the [dedicated
-documentation](https://docs.lenses.io/5.0/integrations/connectors/stream-reactor/sinks/mqttsinkconnector/#options)
+documentation](https://docs.lenses.io/connectors/kafka-connectors/sinks/mqtt#storage-to-output-matrix)
 for the full list of parameters.
 
 ### Create a Kafka Connect connector with the Aiven Console
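Since only a fragment of the MQTT sink configuration discussion appears in the hunks above, a sketch of creating the sink over the Kafka Connect REST API may help. The connector class and `connect.mqtt.*` keys follow Stream Reactor conventions and can differ between releases; `KAFKA_CONNECT_URI`, the topic names, and the broker address are placeholders:

```bash
# Hypothetical example: create the MQTT sink via the Kafka Connect REST API.
# Property names follow the Stream Reactor MQTT sink; verify them against
# the connector documentation for the version your service runs.
curl -s -X POST "$KAFKA_CONNECT_URI/connectors" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "mqtt-sink",
    "config": {
      "connector.class": "com.datamountaineer.streamreactor.connect.mqtt.sink.MqttSinkConnector",
      "topics": "my_topic",
      "connect.mqtt.hosts": "tcp://MQTT_HOST:1883",
      "connect.mqtt.kcql": "INSERT INTO my_mqtt_topic SELECT * FROM my_topic",
      "value.converter": "org.apache.kafka.connect.json.JsonConverter",
      "value.converter.schemas.enable": "false"
    }
  }'
```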
diff --git a/docs/products/mysql/howto/disable-foreign-key-checks.md b/docs/products/mysql/howto/disable-foreign-key-checks.md
index 741b097c8..4af1fb156 100644
--- a/docs/products/mysql/howto/disable-foreign-key-checks.md
+++ b/docs/products/mysql/howto/disable-foreign-key-checks.md
@@ -100,8 +100,8 @@ session:
 The same flag works when running a set of commands saved in a file
 with extension `.sql`.
 
-| Variable | Description |
-| ---------- | --------------------------------------------------------- |
+| Variable   | Description                                                        |
+|------------|--------------------------------------------------------------------|
 | `FILENAME` | File which the extension is `.sql`, for for example, filename.sql |
 
 You can paste the following command on your `FILENAME`:
diff --git a/docs/products/mysql/howto/reclaim-disk-space.md b/docs/products/mysql/howto/reclaim-disk-space.md
index f7a10e570..e046e5b84 100644
--- a/docs/products/mysql/howto/reclaim-disk-space.md
+++ b/docs/products/mysql/howto/reclaim-disk-space.md
@@ -9,7 +9,7 @@ You can configure InnoDB to release disk space back to the operating system by r
 conditions](https://dev.mysql.com/doc/refman/8.0/en/optimize-table.html#optimize-table-innodb-details)
 (for example, including the presence of a `FULLTEXT` index), command
 `OPTIMIZE TABLE`
-[copies](https://dev.mysql.com/doc/refman/8.0/en/alter-table.html#alter-table-performance)
+[copies](https://dev.mysql.com/doc/refman/8.0/en/alter-table.html)
 the data to a new table containing just the current data, and drops
 and renames the new table to match the old one. During this process,
 data modification is blocked. This requires enough free space to store
diff --git a/docs/products/postgresql/concepts/pg-disk-usage.md b/docs/products/postgresql/concepts/pg-disk-usage.md
index e9d7b6186..f24daee24 100644
--- a/docs/products/postgresql/concepts/pg-disk-usage.md
+++ b/docs/products/postgresql/concepts/pg-disk-usage.md
@@ -24,7 +24,7 @@ onward, new WAL segments no longer have such a high impact on disk
 usage as the service reaches a steady state for low-traffic services.
 
 You can read more about WAL archiving [in the PostgreSQL
-manual](https://www.postgresql.org/docs/current/runtime-config-wal#RUNTIME-CONFIG-WAL-ARCHIVING).
+manual](https://www.postgresql.org/docs/current/runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVING).
 
 ## High disk usage discrepancy
 
diff --git a/docs/products/postgresql/howto/check-avoid-transaction-id-wraparound.md b/docs/products/postgresql/howto/check-avoid-transaction-id-wraparound.md
index 42b1089ab..077de8821 100644
--- a/docs/products/postgresql/howto/check-avoid-transaction-id-wraparound.md
+++ b/docs/products/postgresql/howto/check-avoid-transaction-id-wraparound.md
@@ -49,6 +49,6 @@ would make it warn you when appropriate.
 ## Related pages
 
 - [25.1.5. Preventing Transaction ID Wraparound
-  Failures](https://www.postgresql.org/docs/current/routine-vacuuming#VACUUM-FOR-WRAPAROUND)
+  Failures](https://www.postgresql.org/docs/current/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND)
- [Table 9.76. Transaction ID and Snapshot Information
-  Functions](https://www.postgresql.org/docs/14/functions-info#FUNCTIONS-PG-SNAPSHOT)
+  Functions](https://www.postgresql.org/docs/14/functions-info.html#FUNCTIONS-PG-SNAPSHOT)
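The wraparound check the page above describes can be reproduced from any client. A minimal sketch, with `$POSTGRESQL_URI` standing in for the service URI:

```bash
# How close each database is to transaction ID wraparound: age(datfrozenxid)
# grows toward ~2 billion; autovacuum normally freezes tuples long before
# that (autovacuum_freeze_max_age defaults to 200 million).
psql "$POSTGRESQL_URI" -c \
  "SELECT datname, age(datfrozenxid) AS xid_age
     FROM pg_database
    ORDER BY xid_age DESC;"
```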
diff --git a/docs/products/postgresql/howto/migrate-db-to-aiven-via-console.md b/docs/products/postgresql/howto/migrate-db-to-aiven-via-console.md
index 785a95d30..5deca8d0a 100644
--- a/docs/products/postgresql/howto/migrate-db-to-aiven-via-console.md
+++ b/docs/products/postgresql/howto/migrate-db-to-aiven-via-console.md
@@ -38,7 +38,7 @@ any data written to the source database during the migration.
 Before you use the logical replication, make sure you know and
 understand all the restrictions it has. For details, see [Logical
 replication
-restrictions](https://www.postgresql.org/docs/current/logical-replication-restrictions).
+restrictions](https://www.postgresql.org/docs/current/logical-replication-restrictions.html).
 :::
 
 Using the continuous migration requires either superuser permissions or
diff --git a/docs/products/postgresql/howto/migrate-pg-dump-restore.md b/docs/products/postgresql/howto/migrate-pg-dump-restore.md
index 714eab2a8..6fb78dd81 100644
--- a/docs/products/postgresql/howto/migrate-pg-dump-restore.md
+++ b/docs/products/postgresql/howto/migrate-pg-dump-restore.md
@@ -12,7 +12,7 @@ We recommend to migrate your PostgreSQL® database to Aiven by using
 The [`pg_dump`](https://www.postgresql.org/docs/current/app-pgdump.html)
 tool can be used to extract the data from your existing PostgreSQL
 database and
-[`pg_restore`](https://www.postgresql.org/docs/current/app-pgrestore)
+[`pg_restore`](https://www.postgresql.org/docs/current/app-pgrestore.html)
 can then insert that data into your Aiven for PostgreSQL database. The
 duration of the process depends on the size of your existing database.
 
diff --git a/docs/products/postgresql/howto/pg-object-size.md b/docs/products/postgresql/howto/pg-object-size.md
index 1129f3abb..4aa989747 100644
--- a/docs/products/postgresql/howto/pg-object-size.md
+++ b/docs/products/postgresql/howto/pg-object-size.md
@@ -102,4 +102,4 @@ information, see
 ## Related pages
 
 - [PostgreSQL interactive terminal](https://www.postgresql.org/docs/15/app-psql.html)
-- [Database Object Management Functions](https://www.postgresql.org/docs/current/functions-admin#FUNCTIONS-ADMIN-DBOBJECT.html)
+- [Database Object Management Functions](https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-DBOBJECT)
diff --git a/docs/products/postgresql/howto/repair-pg-index.md b/docs/products/postgresql/howto/repair-pg-index.md
index d7225922c..744e07bf0 100644
--- a/docs/products/postgresql/howto/repair-pg-index.md
+++ b/docs/products/postgresql/howto/repair-pg-index.md
@@ -35,7 +35,7 @@ You can run the `REINDEX` command for:
 
 For more information on the `REINDEX` command, see the [PostgreSQL
 documentation
-page](https://www.postgresql.org/docs/current/sql-reindex).
+page](https://www.postgresql.org/docs/current/sql-reindex.html).
 
 ## Rebuild unique indexes
 
diff --git a/docs/products/postgresql/howto/setup-logical-replication.md b/docs/products/postgresql/howto/setup-logical-replication.md
index d272d72d5..462258632 100644
--- a/docs/products/postgresql/howto/setup-logical-replication.md
+++ b/docs/products/postgresql/howto/setup-logical-replication.md
@@ -251,7 +251,7 @@ to stop serving clients and a loss of service.
 :::
 
 For further information about WAL and checkpoints, read the [PostgreSQL
-documentation](https://www.postgresql.org/docs/current/wal-configuration).
+documentation](https://www.postgresql.org/docs/current/wal-configuration.html).
 
 :::note
 The recreation of replication slots gets enabled automatically for
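For the logical replication pages above, a quick way to confirm slot state on the source service; `$POSTGRESQL_URI` is a placeholder:

```bash
# Inspect logical replication slots: an inactive slot with an old
# restart_lsn pins WAL on disk, which ties into the WAL and checkpoint
# discussion referenced above.
psql "$POSTGRESQL_URI" -c \
  "SELECT slot_name, plugin, slot_type, active, restart_lsn
     FROM pg_replication_slots;"
```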
diff --git a/docs/products/postgresql/howto/use-dblink-extension.md b/docs/products/postgresql/howto/use-dblink-extension.md
index 34f2900d4..0b137dc69 100644
--- a/docs/products/postgresql/howto/use-dblink-extension.md
+++ b/docs/products/postgresql/howto/use-dblink-extension.md
@@ -3,7 +3,7 @@ title: Use the PostgreSQL® dblink extension
 sidebar_label: Use the dblink extension
 ---
 
-`dblink` is a [PostgreSQL® extension](https://www.postgresql.org/docs/current/dblink) that allows you to connect to other PostgreSQL databases and to run arbitrary queries.
+`dblink` is a [PostgreSQL® extension](https://www.postgresql.org/docs/current/dblink.html) that allows you to connect to other PostgreSQL databases and to run arbitrary queries.
 
 With [Foreign Data
 Wrappers](https://www.postgresql.org/docs/current/postgres-fdw.html)
@@ -26,7 +26,7 @@ information about the PostgreSQL remote server:
 
 :::note
 If you're using Aiven for PostgreSQL as remote server, the above
-details are available in the [Aiven console](https://console.aiven.io/) > the service's
+details are available in the [Aiven console](https://console.aiven.io) > the service's
 **Overview** page or via the `avn service get` command with the
 [Aiven CLI](/docs/tools/cli/service-cli#avn_service_get).
 :::
diff --git a/docs/products/postgresql/reference/idle-connections.md b/docs/products/postgresql/reference/idle-connections.md
index b52efbce1..3b4cacec5 100644
--- a/docs/products/postgresql/reference/idle-connections.md
+++ b/docs/products/postgresql/reference/idle-connections.md
@@ -10,7 +10,7 @@ parameters can be used at the client side.
 ## Keep-alive server side parameters
 
 Currently, the following default keep-alive timeouts are used on the
-[server-side](https://www.postgresql.org/docs/current/runtime-config-connection#RUNTIME-CONFIG-CONNECTION-SETTINGS):
+[server-side](https://www.postgresql.org/docs/current/runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SETTINGS):
 
 | Parameter (server) | Value | Description |
 | ------------------------- | ----- | -------------------------------------------------------------------------------------------------------------------------------------------- |
@@ -18,11 +18,10 @@ Currently, the following default keep-alive timeouts are used on the
 | `tcp_keepalives_count` | 6 | Specifies the number of TCP `keepalive` messages that can be lost before the server's connection to the client is considered dead. |
 | `tcp_keepalives_interval` | 10 | Specifies the amount of time after which a TCP `keepalive` message that has not been acknowledged by the client should be retransmitted. |
 
-
 ## Keep-alive client side parameters
 
 The
-[client-side](https://www.postgresql.org/docs/current/libpq-connect#LIBPQ-KEEPALIVES)
+[client-side](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-KEEPALIVES)
 keep-alive parameters can be set to whatever values you want.
 
 | Parameter (client) | Description |
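The client-side parameters in the table above can be passed directly in a libpq connection string. A sketch with illustrative values; `HOST`, `PORT`, and `PASSWORD` are placeholders for your service:

```bash
# libpq keep-alive settings on the client side; the keepalives_* values
# here are examples, not recommendations.
psql "host=HOST port=PORT dbname=defaultdb user=avnadmin password=PASSWORD \
      sslmode=require keepalives=1 keepalives_idle=120 \
      keepalives_interval=10 keepalives_count=6"
```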
diff --git a/docs/products/postgresql/reference/pg-metrics.md b/docs/products/postgresql/reference/pg-metrics.md
index e0e8696c6..81ba49dd5 100644
--- a/docs/products/postgresql/reference/pg-metrics.md
+++ b/docs/products/postgresql/reference/pg-metrics.md
@@ -97,7 +97,7 @@ The following metrics are shown:
 
 For most metrics, the metric name identifies the internal PostgreSQL
 statistics view. See the [PostgreSQL
-documentation](https://www.postgresql.org/docs/current/monitoring-stats)
+documentation](https://www.postgresql.org/docs/current/monitoring-stats.html)
 for more detailed explanations of the various metric values.
 
 Metrics that are currently recorded but not shown in the default
@@ -159,8 +159,8 @@ that exclude uninteresting tables.
 
 | Parameter Name | Parameter Definition | Additional Notes |
 |-------------------------------------|----------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `Table size` | The size of tables, excluding indexes and [TOAST data](https://www.postgresql.org/docs/current/storage-toast) | |
-| `Table size total` | The total size of tables, including indexes and [TOAST data](https://www.postgresql.org/docs/current/storage-toast) | |
+| `Table size` | The size of tables, excluding indexes and [TOAST data](https://www.postgresql.org/docs/current/storage-toast.html) | |
+| `Table size total` | The total size of tables, including indexes and [TOAST data](https://www.postgresql.org/docs/current/storage-toast.html) | |
 | `Table seq scans / sec` | The number of sequential scans per table per second | For small tables, sequential scans may be the best way of accessing the table data and having a lot of sequential scans may be normal, but for larger tables, sequential scans should be very rare. |
 | `Table tuple inserts / sec` | The number of tuples inserted per second | |
 | `Table tuple updates / sec` | The number of tuples updated per second | |
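A closing note on the metrics table above: the same numbers can be pulled straight from the statistics views the page references. A sketch, with `$POSTGRESQL_URI` as a placeholder:

```bash
# Largest tables with their sequential scan counts, from pg_stat_user_tables.
# pg_total_relation_size counts indexes and TOAST, matching "Table size total".
psql "$POSTGRESQL_URI" -c \
  "SELECT relname, seq_scan, pg_total_relation_size(relid) AS total_bytes
     FROM pg_stat_user_tables
    ORDER BY total_bytes DESC
    LIMIT 10;"
```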