fix: broken links

ArthurFlag committed Oct 21, 2024
1 parent 1efc163 commit 7beb222
Showing 17 changed files with 43 additions and 53 deletions.
30 changes: 15 additions & 15 deletions docs/products/cassandra/howto/use-dsbulk-with-cassandra.md
@@ -2,7 +2,7 @@
title: Use DSBULK to load, unload and count data on Aiven service for Cassandra®
---

- [DSBulk](https://docs.datastax.com/en/dsbulk/docs/reference/dsbulkCmd) is a highly configurable tool used to load, unload and count data in Apache Cassandra®. It has configurable consistency levels for loading and unloading and offers the most accurate way to count records in Cassandra.
+ [DSBulk](https://docs.datastax.com/en/dsbulk/reference/dsbulk-cmd.html) is a highly configurable tool used to load, unload and count data in Apache Cassandra®. It has configurable consistency levels for loading and unloading and offers the most accurate way to count records in Cassandra.
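
DSBulk's counting mode follows the same flag conventions as the `unload` and `load` examples further down; a minimal sketch, reusing the keyspace and table from those examples, with placeholder host and port:

```bash
# Count rows in a table; conf.file carries the SSL settings described below
./dsbulk count \
  -f /full/path/to/conf.file \
  -k baselines \
  -t keyvalue \
  -h HOST \
  -port PORT
```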

## Prerequisites

@@ -13,7 +13,7 @@ repository](https://github.com/datastax/dsbulk).
:::tip
You can read more about the different DSBulk use cases and manual pages
in the [dedicated
- documentation](https://docs.datastax.com/en/dsbulk/docs/getting-started/getting-started)
+ documentation](https://docs.datastax.com/en/dsbulk/installing/install.html)
:::

## Variables
@@ -52,7 +52,7 @@ truststore.
-trustcacerts \
-alias CARoot \
-file cassandra-certificate.pem \
-keystore client.truststore \
-storepass KEYSTORE_PASSWORD
```
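
The diff collapses the opening of this `keytool` command; assuming a standard CA certificate import, the full invocation plausibly reads:

```bash
# Hypothetical complete form of the truncated command above
keytool -importcert \
  -trustcacerts \
  -alias CARoot \
  -file cassandra-certificate.pem \
  -keystore client.truststore \
  -storepass KEYSTORE_PASSWORD  # placeholder
```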

@@ -141,12 +141,12 @@ Once the configuration file is created, you can run the `dsbulk`.
To extract the data from a table, you can use the following command:

```bash
./dsbulk unload \
-f /full/path/to/conf.file \
-k baselines \
-t keyvalue \
-h HOST \
-port PORT \
-url /directory_for_output
```

@@ -159,12 +159,12 @@ To load data into a Cassandra table, the command line is very similar to
the previous command:

```bash
./dsbulk load \
-f /full/path/to/conf.file \
-k baselines \
-t keyvalue \
-h HOST \
-port PORT \
-url data.csv
```

6 changes: 3 additions & 3 deletions docs/products/cassandra/howto/zdm-proxy.md
@@ -118,7 +118,7 @@ details and, if your source or target require authentication, specify
target username and password.

For more details on using the credentials, see [Client application
- credentials](https://docs.datastax.com/en/astra-serverless/docs/migrate/connect-clients-to-proxy#_client_application_credentials).
+ credentials](https://docs.datastax.com/en/data-migration/introduction.html).

The port that ZDM Proxy uses is 14002, which can be overridden.
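
Clients then connect to the proxy instead of a cluster node; a hypothetical smoke test with `cqlsh` (host and credentials are placeholders, and any TLS flags depend on your setup):

```bash
# Open a CQL shell through the ZDM Proxy on its default port
cqlsh zdm-proxy.example.com 14002 -u avnadmin -p PASSWORD
```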

@@ -262,7 +262,7 @@ The port that ZDM Proxy uses is 14002, which can be overridden.
- [zdm-proxy GitHub](https://github.com/datastax/zdm-proxy)
- [Introduction to Zero Downtime
- Migration](https://docs.datastax.com/en/astra-serverless/docs/migrate/introduction)
+ Migration](https://docs.datastax.com/en/data-migration/introduction.html)
- [ZDM Proxy releases](https://github.com/datastax/zdm-proxy/releases)
- [Client application
- credentials](https://docs.datastax.com/en/astra-serverless/docs/migrate/connect-clients-to-proxy#_client_application_credentials)
+ credentials](https://docs.datastax.com/en/data-migration/connect-clients-to-target.html)
@@ -2,10 +2,7 @@
title: Handle PostgreSQL® node replacements when using Debezium for change data capture
---

- When running a
- [Debezium source connector for PostgreSQL®](debezium-source-connector-pg) to capture changes from an Aiven for PostgreSQL® service,
- there are some activities on the database side that can impact the
- correct functionality of the connector.
+ When running a [Debezium source connector for PostgreSQL®](debezium-source-connector-pg) to capture changes from an Aiven for PostgreSQL® service, there are some activities on the database side that can impact the correct functionality of the connector.

As an example, when the source PostgreSQL service undergoes any operation
which replaces the nodes (such as maintenance, a plan or cloud region
@@ -42,7 +39,7 @@ of the connector tasks to resume operations again.
A restart can be performed manually either through the [Aiven
Console](https://console.aiven.io/), under the `Connectors` tab,
or via the [Apache Kafka® Connect REST
- API](https://docs.confluent.io/platform/current/connect/references/restapi#rest-api-task-restart).
+ API](https://docs.confluent.io/cloud/current/kafka-rest/krest-qs.html).
You can get the service URI from the [Aiven
Console](https://console.aiven.io/), in the service detail page.
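
For instance, restarting task 0 of a hypothetical connector via the Connect REST API (connector name and service URI are placeholders; Aiven's Kafka Connect endpoints accept HTTP basic authentication):

```bash
# POST /connectors/<name>/tasks/<id>/restart is the standard Kafka Connect endpoint
curl -X POST \
  "https://avnadmin:PASSWORD@HOST:PORT/connectors/CONNECTOR_NAME/tasks/0/restart"
```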

@@ -33,7 +33,7 @@ source PostgreSQL database upfront:
- `PG_PASSWORD`: The database password for the `PG_USER`
- `PG_DATABASE_NAME`: The database name
- `SSL_MODE`: The [SSL
- mode](https://www.postgresql.org/docs/current/libpq-ssl)
+ mode](https://www.postgresql.org/docs/current/libpq-ssl.html)
- `PG_TABLES`: The list of database tables to be included in Apache
Kafka; the list must be in the form of
`schema_name1.table_name1,schema_name2.table_name2`
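
For orientation, these variables might slot into a connector definition as follows. This is a sketch, not part of the commit: the property names follow recent Debezium releases (`table.include.list` superseded the older `table.whitelist`), and the service URI is a placeholder.

```bash
# Create a hypothetical Debezium source connector over the Kafka Connect REST API
curl -X POST "https://avnadmin:PASSWORD@HOST:PORT/connectors" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "pg-debezium-source",
    "config": {
      "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
      "database.hostname": "PG_HOST",
      "database.port": "PG_PORT",
      "database.user": "PG_USER",
      "database.password": "PG_PASSWORD",
      "database.dbname": "PG_DATABASE_NAME",
      "database.sslmode": "SSL_MODE",
      "table.include.list": "schema_name1.table_name1,schema_name2.table_name2",
      "plugin.name": "pgoutput"
    }
  }'
```
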
14 changes: 4 additions & 10 deletions docs/products/kafka/kafka-connect/howto/mqtt-sink-connector.md
@@ -3,18 +3,12 @@ title: Create an MQTT sink connector
---

The [MQTT sink
- connector](https://docs.lenses.io/5.0/integrations/connectors/stream-reactor/sinks/mqttsinkconnector/)
+ connector](https://docs.lenses.io/connectors/kafka-connectors/sources/mqtt)
copies messages from an Apache Kafka® topic to an MQTT queue.

- :::note
- See the full set of available parameters and configuration
- options in the [connector's
- documentation](https://docs.lenses.io/5.0/integrations/connectors/stream-reactor/sinks/mqttsinkconnector/).
- :::

:::tip
- The connector can be used to sink messages to RabbitMQ® where [RabbitMQ
- MQTT plugin](https://www.rabbitmq.com/mqtt.html) is enabled.
+ The connector can be used to sink messages to RabbitMQ® where
+ [RabbitMQ MQTT plugin](https://www.rabbitmq.com/mqtt.html) is enabled.
:::

## Prerequisites {#connect_mqtt_rbmq_sink_prereq}
@@ -97,7 +91,7 @@ The configuration file contains the following entries:
this example JSON converter is used.
See the [dedicated
- documentation](https://docs.lenses.io/5.0/integrations/connectors/stream-reactor/sinks/mqttsinkconnector/#options)
+ documentation](https://docs.lenses.io/connectors/kafka-connectors/sources/mqtt#storage-to-output-matrix)
for the full list of parameters.
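
A hypothetical sink definition pulling those entries together; the connector class and `connect.mqtt.*` keys follow Stream Reactor conventions and should be checked against the linked documentation before use:

```bash
# Sketch of creating the MQTT sink over the Kafka Connect REST API (all values are placeholders)
curl -X POST "https://avnadmin:PASSWORD@HOST:PORT/connectors" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-mqtt-sink",
    "config": {
      "connector.class": "com.datamountaineer.streamreactor.connect.mqtt.sink.MqttSinkConnector",
      "topics": "my_kafka_topic",
      "connect.mqtt.hosts": "tcp://mqtt-host:1883",
      "connect.mqtt.username": "MQTT_USERNAME",
      "connect.mqtt.password": "MQTT_PASSWORD",
      "connect.mqtt.kcql": "INSERT INTO my_mqtt_topic SELECT * FROM my_kafka_topic",
      "key.converter": "org.apache.kafka.connect.json.JsonConverter",
      "value.converter": "org.apache.kafka.connect.json.JsonConverter"
    }
  }'
```
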
### Create a Kafka Connect connector with the Aiven Console
4 changes: 2 additions & 2 deletions docs/products/mysql/howto/disable-foreign-key-checks.md
@@ -100,8 +100,8 @@ session:
The same flag works when running a set of commands saved in a file with
extension `.sql`.

- | Variable | Description |
- | ---------- | --------------------------------------------------------- |
+ | Variable | Description |
+ |------------|-------------------------------------------------------------------|
| `FILENAME` | The file with the `.sql` extension, for example, `filename.sql` |

You can paste the following command into your `FILENAME`:
2 changes: 1 addition & 1 deletion docs/products/mysql/howto/reclaim-disk-space.md
@@ -9,7 +9,7 @@ You can configure InnoDB to release disk space back to the operating system by r
conditions](https://dev.mysql.com/doc/refman/8.0/en/optimize-table.html#optimize-table-innodb-details)
(for example, including the presence of a `FULLTEXT` index), command
`OPTIMIZE TABLE`
- [copies](https://dev.mysql.com/doc/refman/8.0/en/alter-table.html#alter-table-performance)
+ [copies](https://dev.mysql.com/doc/refman/8.0/en/alter-table.html)
the data to a new table containing just the current data, and drops
and renames the new table to match the old one. During this process,
data modification is blocked. This requires enough free space to store
2 changes: 1 addition & 1 deletion docs/products/postgresql/concepts/pg-disk-usage.md
@@ -24,7 +24,7 @@ onward, new WAL segments no longer have such a high impact on disk usage
as the service reaches a steady state for low-traffic services.

You can read more about WAL archiving [in the PostgreSQL
- manual](https://www.postgresql.org/docs/current/runtime-config-wal#RUNTIME-CONFIG-WAL-ARCHIVING).
+ manual](https://www.postgresql.org/docs/current/runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVING).
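
Archiving progress is visible in the standard `pg_stat_archiver` view; a quick check, with `$SERVICE_URI` standing in for your connection URI:

```bash
# Segments archived so far, and the most recently completed WAL file
psql "$SERVICE_URI" -c \
  "SELECT archived_count, last_archived_wal, last_archived_time FROM pg_stat_archiver;"
```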

## High disk usage discrepancy

@@ -49,6 +49,6 @@ would make it warn you when appropriate.
## Related pages

- [25.1.5. Preventing Transaction ID Wraparound
- Failures](https://www.postgresql.org/docs/current/routine-vacuuming#VACUUM-FOR-WRAPAROUND)
+ Failures](https://www.postgresql.org/docs/current/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND)
- [Table 9.76. Transaction ID and Snapshot Information
- Functions](https://www.postgresql.org/docs/14/functions-info#FUNCTIONS-PG-SNAPSHOT)
+ Functions](https://www.postgresql.org/docs/14/functions-info.html#FUNCTIONS-PG-SNAPSHOT)
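
The wraparound risk those pages describe can be monitored with a catalog query; a sketch, with `$SERVICE_URI` as a placeholder:

```bash
# Databases closest to transaction ID wraparound, oldest first
psql "$SERVICE_URI" -c \
  "SELECT datname, age(datfrozenxid) AS xid_age FROM pg_database ORDER BY xid_age DESC;"
```
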
@@ -38,7 +38,7 @@ any data written to the source database during the migration.
Before you use the logical replication, make sure you know and
understand all the restrictions it has. For details, see [Logical
replication
- restrictions](https://www.postgresql.org/docs/current/logical-replication-restrictions).
+ restrictions](https://www.postgresql.org/docs/current/logical-replication-restrictions.html).
:::

Using the continuous migration requires either superuser permissions or
2 changes: 1 addition & 1 deletion docs/products/postgresql/howto/migrate-pg-dump-restore.md
@@ -12,7 +12,7 @@ We recommend to migrate your PostgreSQL® database to Aiven by using
The [`pg_dump`](https://www.postgresql.org/docs/current/app-pgdump.html)
tool can be used to extract the data from your existing PostgreSQL
database and
- [`pg_restore`](https://www.postgresql.org/docs/current/app-pgrestore)
+ [`pg_restore`](https://www.postgresql.org/docs/current/app-pgrestore.html)
can then insert that data into your Aiven for PostgreSQL database. The
duration of the process depends on the size of your existing database.
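
A minimal sketch of the round trip, with placeholder connection URIs; `-Fd` selects the directory format and `-j` the number of parallel jobs:

```bash
# Dump the source database, then restore it into the Aiven service
pg_dump "postgres://user:password@source-host:5432/defaultdb?sslmode=require" \
  -Fd -j 4 -f pg_dump_dir
pg_restore -d "postgres://avnadmin:password@target.aivencloud.com:12691/defaultdb?sslmode=require" \
  -j 4 pg_dump_dir
```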

2 changes: 1 addition & 1 deletion docs/products/postgresql/howto/pg-object-size.md
@@ -102,4 +102,4 @@ information, see
## Related pages
- [PostgreSQL interactive terminal](https://www.postgresql.org/docs/15/app-psql.html)
- - [Database Object Management Functions](https://www.postgresql.org/docs/current/functions-admin#FUNCTIONS-ADMIN-DBOBJECT.html)
+ - [Database Object Management Functions](https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-DBOBJECT)
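
Two of the functions from that page in action; the table name and `$SERVICE_URI` are placeholders:

```bash
# Table data alone, then including indexes and TOAST data
psql "$SERVICE_URI" -c "SELECT pg_size_pretty(pg_relation_size('mytable'));"
psql "$SERVICE_URI" -c "SELECT pg_size_pretty(pg_total_relation_size('mytable'));"
```
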
2 changes: 1 addition & 1 deletion docs/products/postgresql/howto/repair-pg-index.md
@@ -35,7 +35,7 @@ You can run the `REINDEX` command for:

For more information on the `REINDEX` command, see the [PostgreSQL
documentation
- page](https://www.postgresql.org/docs/current/sql-reindex).
+ page](https://www.postgresql.org/docs/current/sql-reindex.html).
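
The scopes listed above, as statements; object names and `$SERVICE_URI` are placeholders:

```bash
# REINDEX at increasing scope: one index, one table, the current database
psql "$SERVICE_URI" -c "REINDEX INDEX my_index;"
psql "$SERVICE_URI" -c "REINDEX TABLE my_table;"
psql "$SERVICE_URI" -c "REINDEX DATABASE defaultdb;"
```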

## Rebuild unique indexes

@@ -251,7 +251,7 @@ to stop serving clients and a loss of service.
:::

For further information about WAL and checkpoints, read the [PostgreSQL
- documentation](https://www.postgresql.org/docs/current/wal-configuration).
+ documentation](https://www.postgresql.org/docs/current/wal-configuration.html).
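
How much WAL each replication slot retains can be estimated with standard functions (PostgreSQL 10+ names; `$SERVICE_URI` is a placeholder):

```bash
# WAL held back per slot; inactive slots with large values block WAL removal
psql "$SERVICE_URI" -c \
  "SELECT slot_name, active,
          pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
   FROM pg_replication_slots;"
```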

:::note
The recreation of replication slots gets enabled automatically for
4 changes: 2 additions & 2 deletions docs/products/postgresql/howto/use-dblink-extension.md
@@ -3,7 +3,7 @@ title: Use the PostgreSQL® dblink extension
sidebar_label: Use the dblink extension
---

- `dblink` is a [PostgreSQL® extension](https://www.postgresql.org/docs/current/dblink) that allows you to connect to other PostgreSQL databases and to run arbitrary queries.
+ `dblink` is a [PostgreSQL® extension](https://www.postgresql.org/docs/current/dblink.html) that allows you to connect to other PostgreSQL databases and to run arbitrary queries.
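
A minimal `dblink` call; the connection string is illustrative, and the caller must declare the result's column types:

```bash
# Run a query on a remote server and type the returned columns locally
psql "$SERVICE_URI" -c \
  "SELECT * FROM dblink(
     'host=remote-host port=5432 dbname=defaultdb user=avnadmin password=secret',
     'SELECT id, name FROM inventory'
   ) AS t(id int, name text);"
```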

With [Foreign Data
Wrappers](https://www.postgresql.org/docs/current/postgres-fdw.html)
@@ -26,7 +26,7 @@ information about the PostgreSQL remote server:

:::note
If you're using Aiven for PostgreSQL as remote server, the above
- details are available in the [Aiven console](https://console.aiven.io/) > the service's
+ details are available in the [Aiven console](https://console.aiven.io) > the service's
**Overview** page or via the `avn service get` command with
the [Aiven CLI](/docs/tools/cli/service-cli#avn_service_get).
:::
5 changes: 2 additions & 3 deletions docs/products/postgresql/reference/idle-connections.md
@@ -10,19 +10,18 @@ parameters can be used at the client side.
## Keep-alive server side parameters

Currently, the following default keep-alive timeouts are used on the
- [server-side](https://www.postgresql.org/docs/current/runtime-config-connection#RUNTIME-CONFIG-CONNECTION-SETTINGS):
+ [server-side](https://www.postgresql.org/docs/current/runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SETTINGS):

| Parameter (server) | Value | Description |
| ------------------------- | ----- | -------------------------------------------------------------------------------------------------------------------------------------------- |
| `tcp_keepalives_idle` | 180 | Specifies the amount of time with no network activity after which the operating system should send a TCP `keepalive` message to the client. |
| `tcp_keepalives_count` | 6 | Specifies the number of TCP `keepalive` messages that can be lost before the server's connection to the client is considered dead. |
| `tcp_keepalives_interval` | 10 | Specifies the amount of time after which a TCP `keepalive` message that has not been acknowledged by the client should be retransmitted. |
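
These defaults can be confirmed from any session; `$SERVICE_URI` is a placeholder:

```bash
# Server-side keep-alive settings; expected values per the table above
psql "$SERVICE_URI" -c "SHOW tcp_keepalives_idle;"      # 180
psql "$SERVICE_URI" -c "SHOW tcp_keepalives_count;"     # 6
psql "$SERVICE_URI" -c "SHOW tcp_keepalives_interval;"  # 10
```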


## Keep-alive client side parameters

The
- [client-side](https://www.postgresql.org/docs/current/libpq-connect#LIBPQ-KEEPALIVES)
+ [client-side](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-KEEPALIVES)
keep-alive parameters can be set to whatever values you want.

| Parameter (client) | Description |
6 changes: 3 additions & 3 deletions docs/products/postgresql/reference/pg-metrics.md
@@ -97,7 +97,7 @@ The following metrics are shown:

For most metrics, the metric name identifies the internal PostgreSQL
statistics view. See the [PostgreSQL
- documentation](https://www.postgresql.org/docs/current/monitoring-stats)
+ documentation](https://www.postgresql.org/docs/current/monitoring-stats.html)
for more detailed explanations of the various metric values.
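
For example, the table-level metrics shown later are drawn from `pg_stat_user_tables`, which can be queried directly (`$SERVICE_URI` is a placeholder):

```bash
# Tables with the most sequential scans, from the view backing these metrics
psql "$SERVICE_URI" -c \
  "SELECT relname, seq_scan, idx_scan, n_tup_ins, n_tup_upd
   FROM pg_stat_user_tables
   ORDER BY seq_scan DESC
   LIMIT 5;"
```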

Metrics that are currently recorded but not shown in the default
@@ -159,8 +159,8 @@ that exclude uninteresting tables.

| Parameter Name | Parameter Definition | Additional Notes |
|-------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
- | `Table size` | The size of tables, excluding indexes and [TOAST data](https://www.postgresql.org/docs/current/storage-toast) | |
- | `Table size total` | The total size of tables, including indexes and [TOAST data](https://www.postgresql.org/docs/current/storage-toast) | |
+ | `Table size` | The size of tables, excluding indexes and [TOAST data](https://www.postgresql.org/docs/current/storage-toast.html) | |
+ | `Table size total` | The total size of tables, including indexes and [TOAST data](https://www.postgresql.org/docs/current/storage-toast.html) | |
| `Table seq scans / sec` | The number of sequential scans per table per second | For small tables, sequential scans may be the best way of accessing the table data and having a lot of sequential scans may be normal, but for larger tables, sequential scans should be very rare. |
| `Table tuple inserts / sec` | The number of tuples inserted per second | |
| `Table tuple updates / sec` | The number of tuples updated per second | |
