
Commit

First import commit
Signed-off-by: Dj Walker-Morgan <[email protected]>
djw-m committed Nov 21, 2024
1 parent c3605de commit b4058ad
Showing 13 changed files with 412 additions and 200 deletions.
97 changes: 61 additions & 36 deletions product_docs/docs/tpa/23/architecture-M1.mdx
@@ -5,11 +5,8 @@ originalFilePath: architecture-M1.md

---

A Postgres cluster with one or more active locations, each with the same
number of Postgres nodes and an extra Barman node. Optionally, there can
also be a location containing only a witness node, or a location
containing only a single node, even if the active locations have more
than one.
A Postgres cluster with a single primary node and physical replication
to a number of standby nodes, with backup and failover management included.

This architecture is suitable for production and is also suited to
testing, demonstrating and learning due to its simplicity and ability to
@@ -19,25 +16,53 @@ If you select subscription-only EDB software with this architecture,
it will be sourced from EDB Repos 2.0 and you will need to
[provide a token](reference/edb_repositories/).

## Application and backup failover

The M1 architecture implements failover management in that it ensures
that a replica will be promoted to take the place of the primary should
the primary become unavailable. However, it *does not provide any
automatic facility to reroute application traffic to the primary*. If
you require automatic failover of application traffic, you will need to
configure this at the application itself (for example, using multi-host
connections) or by using an appropriate proxy or load balancer and the
facilities offered by your selected failover manager.

The above is also true of the connection between the backup node and the
primary created by TPA. The backup will not be automatically adjusted to
target the new primary in the event of failover; instead, it will remain
connected to the original primary. If you are performing a manual
failover and wish to connect the backup to the new primary, you may
simply re-run `tpaexec deploy`. If you wish to automatically change the
backup source, you should implement this using your selected failover
manager as noted above.
## Failover management

The M1 architecture always includes a failover manager. Supported
options are repmgr, EDB Failover Manager (EFM) and Patroni. In all
cases, the failover manager will be configured by default to ensure that
a replica will be promoted to take the place of the primary should the
primary become unavailable.

### Application failover

The M1 architecture does not generally provide an automatic facility to
reroute application traffic to the primary. There are several ways you
can add this capability to your cluster.

In TPA:

- If you choose repmgr as the failover manager and enable PgBouncer, you
can set `repmgr_redirect_pgbouncer: true` under `cluster_vars` in
`config.yml`. This causes repmgr to automatically reconfigure PgBouncer
to route traffic to the new primary on failover (see the sketch after
this list).

- If you choose Patroni as the failover manager and enable PgBouncer,
Patroni will automatically reconfigure PgBouncer to route traffic to
the new primary on failover.

- If you choose EFM as the failover manager, you can use the
`efm_conf_settings` hash under `cluster_vars` in `config.yml` to
[configure EFM to use a virtual IP address
(VIP)](/efm/latest/04_configuring_efm/05_using_vip_addresses/). This
is an additional IP address which will always route to the primary
node.

- Place an appropriate proxy or load balancer between the cluster and
your application and use a [TPA hook](tpaexec-hooks/) to configure
your selected failover manager to update it with the route to the new
primary on failover.

- Handle failover at the application itself, for example by using
multi-host connection strings.
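
As a sketch only (not TPA defaults), the repmgr-plus-PgBouncer and EFM
VIP options above might be expressed in `config.yml` as follows; the
failover manager choice and all EFM values shown are illustrative
assumptions:

```yaml
cluster_vars:
  failover_manager: repmgr          # one of repmgr, efm, patroni
  repmgr_redirect_pgbouncer: true   # repmgr repoints PgBouncer at the new primary

  # Alternatively, with EFM as the failover manager, a VIP can be set up
  # through efm_conf_settings (the address, interface, and prefix below
  # are hypothetical):
  # efm_conf_settings:
  #   virtual.ip: 10.33.0.100
  #   virtual.ip.interface: eth0
  #   virtual.ip.prefix: 24
```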

### Backup failover

TPA does not configure any kind of 'backup failover'. If the Postgres
node from which you are backing up is down, backups will simply halt
until the node is back online. To manually connect the backup to the new
primary, edit `config.yml` to add the `backup` hash to the new primary
instance and re-run `tpaexec deploy`.
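
For example, if the backup was originally taken from `node-one` and
`node-two` has been promoted, the change might look like this sketch
(instance and Barman node names are hypothetical):

```yaml
instances:
- Name: node-two         # the newly promoted primary
  location: main
  role:
  - postgres
  backup: barman-one     # backup hash moved here from the old primary's entry
```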

## Cluster configuration

@@ -78,18 +103,18 @@ More detail on the options is provided in the following section.

#### Additional Options

| Parameter | Description | Behaviour if omitted |
| --------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------ |
| `--platform` | One of `aws`, `docker`, `bare`. | Defaults to `aws`. |
| `--location-names` | A space-separated list of location names. The number of active locations is equal to the number of names supplied, minus one for each of the witness-only location and the single-node location if they are requested. | A single location called "main" is used. |
| `--primary-location` | The location where the primary server will be. Must be a member of `location-names`. | The first listed location is used. |
| `--data-nodes-per-location` | A number from 1 upwards. In each location, one node will be configured to stream directly from the cluster's primary node, and the other nodes, if present, will stream from that one. | Defaults to 2. |
| `--witness-only-location` | A location name, must be a member of `location-names`. | No witness-only location is added. |
| `--single-node-location` | A location name, must be a member of `location-names`. | No single-node location is added. |
| `--enable-haproxy` | 2 additional nodes will be added as a load balancer layer.<br/>Only supported with Patroni as the failover manager. | HAProxy nodes will not be added to the cluster. |
| `--enable-pgbouncer` | PgBouncer will be configured in the Postgres nodes to pool connections for the primary. | PgBouncer will not be configured in the cluster. |
| `--patroni-dcs` | Select the Distributed Configuration Store backend for patroni.<br/>Only option is `etcd` at this time. <br/>Only supported with Patroni as the failover manager. | Defaults to `etcd`. |
| `--efm-bind-by-hostname` | Enable EFM to use hostnames instead of IP addresses to configure the cluster `bind.address`. | Defaults to using IP addresses. |
| Parameter | Description | Behaviour if omitted |
| --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------ |
| `--platform` | One of `aws`, `docker`, `bare`. | Defaults to `aws`. |
| `--location-names` | A space-separated list of location names. The number of locations is equal to the number of names supplied. | A single location called "main" is used. |
| `--primary-location` | The location where the primary server will be. Must be a member of `location-names`. | The first listed location is used. |
| `--data-nodes-per-location` | A number from 1 upwards. In each location, one node will be configured to stream directly from the cluster's primary node, and the other nodes, if present, will stream from that one. | Defaults to 2. |
| `--witness-only-location` | A location name, must be a member of `location-names`. This location will be populated with a single witness node only. | No witness-only location is added. |
| `--single-node-location` | A location name, must be a member of `location-names`. This location will be populated with a single data node only. | No single-node location is added. |
| `--enable-haproxy` | Two additional nodes will be added as a load balancer layer.<br/>Only supported with Patroni as the failover manager. | HAProxy nodes will not be added to the cluster. |
| `--enable-pgbouncer` | PgBouncer will be configured in the Postgres nodes to pool connections for the primary. | PgBouncer will not be configured in the cluster. |
| `--patroni-dcs` | Select the Distributed Configuration Store backend for patroni.<br/>Only option is `etcd` at this time. <br/>Only supported with Patroni as the failover manager. | Defaults to `etcd`. |
| `--efm-bind-by-hostname` | Enable EFM to use hostnames instead of IP addresses to configure the cluster `bind.address`. | Defaults to using IP addresses. |

<br/><br/>
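
As a worked example, a command line exercising several of these options
might look like the following sketch; the cluster directory, location
names, and the `--failover-manager` value are illustrative assumptions:

```bash
tpaexec configure ~/clusters/m1 \
  --architecture M1 \
  --platform aws \
  --failover-manager repmgr \
  --location-names dc1 dc2 dc3 \
  --primary-location dc1 \
  --data-nodes-per-location 3 \
  --witness-only-location dc3 \
  --enable-pgbouncer
```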

10 changes: 5 additions & 5 deletions product_docs/docs/tpa/23/architecture-PGD-Always-ON.mdx
@@ -5,10 +5,10 @@ originalFilePath: architecture-PGD-Always-ON.md

---

!!! Note
!!!Note

This architecture is for Postgres Distributed 5 only.
If you require PGD 4 or 3.7, please use [BDR-Always-ON](architecture-BDR-Always-ON/).
!!!
If you require PGD 4 or 3.7, please use [BDR-Always-ON](BDR-Always-ON/).

Check failure on line 11 in product_docs/docs/tpa/23/architecture-PGD-Always-ON.mdx (GitHub Actions / check-links, pathCheck): invalid URL path: BDR-Always-ON/ (/tpa/23/BDR-Always-ON)

EDB Postgres Distributed 5 in an Always-ON configuration,
suitable for use in test and production.
@@ -85,9 +85,9 @@ data centre that provides a level of redundancy, in whatever way
this definition makes sense to your use case. For example, AWS
regions, your own data centres, or any other designation to identify
where your servers are hosted.
!!!


!!! Note Note for AWS users
!!! Note for AWS users

If you are using TPA to provision an AWS cluster, the locations will
be mapped to separate availability zones within the `--region` you
14 changes: 14 additions & 0 deletions product_docs/docs/tpa/23/reference/barman.mdx
@@ -90,3 +90,17 @@ them to each other's authorized_keys file. The postgres user must be
able to ssh to the barman server in order to archive WAL segments (if
configured), and the barman user must be able to ssh to the Postgres
instance to take or restore backups.

## `barman` and `barman_role` Postgres users

TPA will create two Postgres users, `barman` and `barman_role`.

TPA versions `<23.35` created the `barman` Postgres user as a `superuser`.

Beginning with `23.35`, the `barman` user is created with `NOSUPERUSER`,
so any re-deploys on existing clusters will remove the `superuser` attribute
from the `barman` Postgres user. Instead, the `barman_role` is granted the
required set of privileges and the `barman` user is granted `barman_role` membership.

This avoids granting the `superuser` attribute to the `barman` user while
still providing the set of privileges recommended in the [Barman Manual](https://docs.pgbarman.org/release/latest/#postgresql-connection).
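
A rough SQL sketch of the resulting role structure; the exact privileges
granted to `barman_role` come from the Barman Manual and are elided here,
and the `LOGIN`/`NOLOGIN` attributes shown are assumptions:

```sql
-- Illustrative only: TPA performs the equivalent of this during deploy.
CREATE ROLE barman_role NOLOGIN;
-- ...GRANT the Barman Manual's recommended privilege set to barman_role...
CREATE ROLE barman LOGIN NOSUPERUSER;
GRANT barman_role TO barman;   -- role membership instead of superuser
```
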
5 changes: 3 additions & 2 deletions product_docs/docs/tpa/23/reference/harp.mdx
@@ -13,7 +13,7 @@ to `harp`, which is the default for BDR-Always-ON clusters.
You must provide the `harp-manager` and `harp-proxy` packages. Please
contact EDB to obtain access to these packages.

## Configuring HARP
## Variables for HARP configuration

See the [HARP documentation](https://www.enterprisedb.com/docs/pgd/4/harp/04_configuration/)
for more details on HARP configuration.
@@ -41,6 +41,7 @@ for more details on HARP configuration.
| `harp_proxy_max_client_conn` | `75` | Maximum number of client connections accepted by harp-proxy (`max_client_conn`) |
| `harp_ssl_password_command` | None | A custom command that receives the obfuscated sslpassword on stdin and provides the handled sslpassword via stdout. |
| `harp_db_request_timeout` | `10s` | Similar to dcs -> request_timeout, but for connections to the database itself. |
| `harp_local_etcd_only` | None | Limit the harp-manager endpoints list to contain only the local etcd node instead of all etcd nodes. |
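
For instance, a `config.yml` fragment setting a few of these variables
might look like this sketch (the values are illustrative, and the
`cluster_vars` placement is an assumption):

```yaml
cluster_vars:
  harp_proxy_max_client_conn: 150   # raise the harp-proxy client connection cap
  harp_db_request_timeout: 20s      # allow slower database responses
  harp_local_etcd_only: true        # list only the local etcd node as an endpoint
```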

You can use the
[harp-config hook](../tpaexec-hooks/#harp-config)
@@ -114,7 +115,7 @@ provide api endpoints to monitor service's health.

The variable can contain these keys:

```
```yaml
enable: false
secure: false
cert_file: "/etc/tpa/harp_proxy/harp_proxy.crt"
24 changes: 18 additions & 6 deletions product_docs/docs/tpa/23/reference/pgbouncer.mdx
@@ -1,16 +1,28 @@
---
description: Adding pgbouncer to your Postgres cluster.
title: Configuring pgbouncer
description: Adding PgBouncer to your Postgres cluster.
title: Configuring PgBouncer
originalFilePath: pgbouncer.md

---

TPA will install and configure pgbouncer on instances whose `role`
TPA will install and configure PgBouncer on instances whose `role`
contains `pgbouncer`.

By default, pgbouncer listens for connections on port 6432 and forwards
connections to `127.0.0.1:5432` (which may be either Postgres or
[haproxy](haproxy/), depending on the architecture).
By default, PgBouncer listens for connections on port 6432 and, if no
`pgbouncer_backend` is specified, forwards connections to
`127.0.0.1:5432` (which may be either Postgres or [haproxy](haproxy/),
depending on the architecture).

!!!Note Using PgBouncer to route traffic to the primary

If you are using the M1 architecture with repmgr, you can set
`repmgr_redirect_pgbouncer: true` under `cluster_vars` to have
PgBouncer connections directed to the primary. PgBouncer will be
automatically updated on failover to route to the new primary. You
should use this option in combination with setting `pgbouncer_backend`
to the primary instance name to ensure that the cluster is initially
deployed with PgBouncer configured to route to the primary.
!!!
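
A minimal sketch of that combination, assuming a primary instance named
`node-one` (the name is hypothetical):

```yaml
cluster_vars:
  repmgr_redirect_pgbouncer: true
  pgbouncer_backend: node-one   # initial primary; repmgr repoints PgBouncer on failover
```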

You can set the following variables on any `pgbouncer` instance.

@@ -25,7 +25,7 @@ the package containing the extension.

- [Adding the *vector* extension through configuration](reconciling-local-changes/)
- [Specifying extensions for configured databases](postgres_databases/)
- [Including shared preload entries for extensions](postgresql.conf/#shared_preload_libraries)
- [Including shared preload entries for extensions](postgresql.conf/#shared-preload-libraries)

Check warning on line 28 in product_docs/docs/tpa/23/reference/postgres_extension_configuration.mdx (GitHub Actions / check-links, slugCheck): cannot find slug for #shared-preload-libraries in product_docs/docs/tpa/23/reference/postgresql.conf.mdx
- [Installing Postgres-related packages](postgres_installation_method_pkg/)
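
For example, a `config.yml` fragment enabling the vector extension for
the cluster might look like this sketch; the `extra_postgres_extensions`
layout is assumed from the linked pages rather than shown here:

```yaml
cluster_vars:
  extra_postgres_extensions:
  - vector   # a recognized extension; see the linked pages for preload entries
```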

## TPA recognized extensions
@@ -24,7 +24,7 @@ are supported.
container of the target operating system and uses that system's package
manager to resolve dependencies and download all necessary packages. The
required Docker setup for download-packages is the same as that for
[using Docker as a deployment platform](../platform-docker).
[using Docker as a deployment platform](#platform-docker).

Check warning on line 27 in product_docs/docs/tpa/23/reference/tpaexec-download-packages.mdx (GitHub Actions / check-links, slugCheck): cannot find slug for #platform-docker in product_docs/docs/tpa/23/reference/tpaexec-download-packages.mdx

## Usage

2 changes: 2 additions & 0 deletions product_docs/docs/tpa/23/rel_notes/index.mdx
@@ -3,6 +3,7 @@ title: Trusted Postgres Architect release notes
navTitle: Release notes
description: Release notes for Trusted Postgres Architect and later
navigation:
- tpa_23.35_rel_notes
- tpa_23.34.1_rel_notes
- tpa_23.34_rel_notes
- tpa_23.33_rel_notes
@@ -36,6 +37,7 @@ The Trusted Postgres Architect documentation describes the latest version of Trusted Postgres Architect.

| Trusted Postgres Architect version | Release Date |
|---|---|
| [23.35](./tpa_23.35_rel_notes) | 25 Nov 2024 |
| [23.34.1](./tpa_23.34.1_rel_notes) | 09 Sep 2024 |
| [23.34](./tpa_23.34_rel_notes) | 22 Aug 2024 |
| [23.33](./tpa_23.33_rel_notes) | 24 Jun 2024 |