Merge pull request #6190 from EnterpriseDB/docs/edits_to_pg4pk_pr6152
Edits to Postgres Distributed for Kubernetes v1.0.1 #6152
gvasquezvargas authored Nov 11, 2024
2 parents 38ef545 + 423db43 commit 1a5e372
Showing 9 changed files with 86 additions and 89 deletions.
@@ -98,8 +98,8 @@ Two kinds of routing are available with PGD proxies:
In EDB Postgres Distributed for Kubernetes, local routing is used by default, and a configuration option is
available to select global routing.

-For more information, see the
-[PGD documentation of routing with Raft](/pgd/latest/routing/raft/).
+For more information on routing with Raft, see
+[Proxies, Raft, and Raft subgroups](/pgd/latest/routing/raft/) in the PGD documentation.

### PGD architectures and high availability

44 changes: 22 additions & 22 deletions product_docs/docs/postgres_distributed_for_kubernetes/1/backup.mdx
@@ -60,19 +60,19 @@ The `.spec.backup.schedulers[].method` field allows you to define the scheduled
- `volumeSnapshot`
- `barmanObjectStore` (the default)

-You can define more than one scheduler, but each method can only be used by one
-scheduler, i.e. two schedulers are not allowed to use the same method.
+You can define more than one scheduler, but each method can be used by only one
+scheduler. That is, two schedulers aren't allowed to use the same method.

-For object store backups, with the default `barmanObjectStore` method, the stanza
-`spec.backup.configuration.barmanObjectStore` is used to define the object store information for both backup and wal archiving.
-More information can be found in [EDB Postgres for Kubernetes Backup on Object Stores](/postgres_for_kubernetes/latest/backup_barmanobjectstore/).
+For object store backups, with the default `barmanObjectStore` method, use the stanza
+`spec.backup.configuration.barmanObjectStore` to define the object store information for both backup and WAL archiving.
+For more information, see [Backup on object stores](/postgres_for_kubernetes/latest/backup_barmanobjectstore/) in the EDB Postgres for Kubernetes documentation.

-To perform volumeSnapshot backups, the `volumeSnapshot` method can be selected.
-The stanza
-`spec.backup.configuration.barmanObjectStore.volumeSnapshot` is used to define the volumeSnapshot configuration.
-More information can be found in [EDB Postgres for Kubernetes Backup on Volume Snapshots](/postgres_for_kubernetes/latest/backup_volumesnapshot/).
+To perform volumeSnapshot backups, you can select the `volumeSnapshot` method.
+Use the stanza
+`spec.backup.configuration.barmanObjectStore.volumeSnapshot` to define the volumeSnapshot configuration.
+For more information, see [Backup on volume snapshots](/postgres_for_kubernetes/latest/backup_volumesnapshot/) in the EDB Postgres for Kubernetes documentation.

-The following example shows how to use the `volumeSnapshot` method for backup. WAL archiving is still done onto the barman object store.
+This example shows how to use the `volumeSnapshot` method for backup. WAL archiving is still done onto the Barman object store.

```yaml
apiVersion: pgd.k8s.enterprisedb.io/v1beta1
@@ -104,10 +104,10 @@ spec:
immediate: true
```
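As an illustrative sketch only, a PGDGroup that archives WAL to an object store and defines one scheduler per method might look roughly like this; the resource kind, bucket path, credential secret, and schedules are placeholders or assumptions rather than values taken from the documentation:

```yaml
apiVersion: pgd.k8s.enterprisedb.io/v1beta1
kind: PGDGroup
metadata:
  name: region-a
spec:
  backup:
    configuration:
      barmanObjectStore:
        # Placeholder object store used for WAL archiving and for
        # backups taken with the barmanObjectStore method.
        destinationPath: s3://backups/region-a
        s3Credentials:
          accessKeyId:
            name: backup-creds
            key: ACCESS_KEY_ID
          secretAccessKey:
            name: backup-creds
            key: ACCESS_SECRET_KEY
    schedulers:
      # Daily snapshot backup (seconds-first Go cron format).
      - method: volumeSnapshot
        schedule: "0 0 0 * * *"
        immediate: true
      # Weekly object store backup; a method can appear in only one scheduler.
      - method: barmanObjectStore
        schedule: "0 0 2 * * 0"
```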

-For more information about the comparison of two backup methods, see [EDB Postgres for Kubernetes for Object stores or volume snapshots](/postgres_for_kubernetes/latest/backup/#object-stores-or-volume-snapshots-which-one-to-use).
+For a comparison of these two backup methods, see [Object stores or volume snapshots](/postgres_for_kubernetes/latest/backup/#object-stores-or-volume-snapshots-which-one-to-use) in the EDB Postgres for Kubernetes documentation.

The `.spec.backup.schedulers[].schedule` field allows you to define a cron schedule, expressed
-in the [Go `cron` package format](https://pkg.go.dev/github.com/robfig/cron#hdr-CRON_Expression_Format).
+in the [Go `cron` package format](https://pkg.go.dev/github.com/robfig/cron#hdr-CRON_Expression_Format):

```yaml
apiVersion: pgd.k8s.enterprisedb.io/v1beta1
@@ -123,28 +123,28 @@ spec:
immediate: true
```

-You can suspend scheduled backups if necessary by setting `.spec.backup.schedulers[].suspend` to `true`.
-This will prevent new backups from being scheduled.
+If necessary, you can suspend scheduled backups by setting `.spec.backup.schedulers[].suspend` to `true`.
+This setting prevents new backups from being scheduled.

If you want to execute a backup as soon as the `ScheduledBackup` resource is created,
set `.spec.backup.schedulers[].immediate` to `true`.

`.spec.backupOwnerReference` indicates the `ownerReference` to use
-in the created backup resources. The choices are:
+in the created backup resources. The options are:

-- **none** — No owner reference for created backup objects.
-- **self** Sets the `ScheduledBackup` object as owner of the backup.
-- **cluster** Sets the cluster as owner of the backup.
+- **none** — Doesn't set an owner reference for created backup objects.
+- **self** — Sets the `ScheduledBackup` object as owner of the backup.
+- **cluster** — Sets the cluster as owner of the backup.
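Taken together, a hedged sketch of the scheduling controls just described, using the field paths exactly as written in this text (the group name, schedule value, and owner reference choice are arbitrary placeholders):

```yaml
apiVersion: pgd.k8s.enterprisedb.io/v1beta1
kind: PGDGroup
metadata:
  name: region-a
spec:
  backup:
    schedulers:
      - method: barmanObjectStore
        # Seconds-first Go cron format: every day at midnight.
        schedule: "0 0 0 * * *"
        # Run a first backup as soon as the scheduled backup is created.
        immediate: true
        # Set to true to stop new backups from being scheduled.
        suspend: false
  # Owner reference recorded on created backup objects: none, self, or cluster.
  backupOwnerReference: self
```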

!!! Warning
-The `.spec.backup.cron` field is now deprecated. Please use
+The `.spec.backup.cron` field is deprecated. Use
`.spec.backup.schedulers` instead.
-Note that while `.spec.backup.cron` can still be used, it cannot
-be used simultaneously with `.spec.backup.schedulers`.
+While you can still use `.spec.backup.cron`, you can't use it
+at the same time as `.spec.backup.schedulers`.

!!! Note
The EDB Postgres for Kubernetes `ScheduledBackup` object contains the `cluster` option to specify the
-cluster to back up. This option is currently not supported by EDB Postgres Distributed for Kubernetes and is
+cluster to back up. This option currently isn't supported by EDB Postgres Distributed for Kubernetes and is
ignored if specified.

If an elected backup node is deleted, the operator transparently elects a new backup node
@@ -20,14 +20,14 @@ PGD cluster includes:
Resources in a PGD cluster are accessible through Kubernetes services.
Every PGD group manages several of them, namely:

-- One service per node, used for internal communications (*node service*)
+- One service per node, used for internal communications (*node service*).
- A *group service* to reach any node in the group, used primarily by EDB Postgres Distributed for Kubernetes
-to discover a new group in the cluster
+to discover a new group in the cluster.
- A *proxy service* to enable applications to reach the write leader of the
-group transparently using PGD Proxy
+group transparently using PGD Proxy.
- A *proxy-r service* to enable applications to reach the read nodes of the
-group, transparently using PGD Proxy. This service is disabled by default
-and controlled by the `.spec.proxySettings.enableReadNodeRouting` setting
+group transparently using PGD Proxy. This service is disabled by default
+and controlled by the `.spec.proxySettings.enableReadNodeRouting` setting.

For an example that uses these services, see [Connecting an application to a PGD cluster](#connecting-to-a-pgd-cluster-from-an-application).
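As a minimal sketch of enabling the proxy-r service through the setting mentioned above (the group name is a placeholder):

```yaml
apiVersion: pgd.k8s.enterprisedb.io/v1beta1
kind: PGDGroup
metadata:
  name: region-a
spec:
  proxySettings:
    # Creates the proxy-r service so applications can reach the group's
    # read nodes through PGD Proxy; this routing is disabled by default.
    enableReadNodeRouting: true
```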

@@ -58,7 +58,7 @@ Proxy Service Template

Proxy Read Service Template
: Each PGD group has a proxy service to reach the group read nodes through
-the PGD proxy, can be enabled by `.spec.proxySettings.enableReadNodeRouting`,
+the PGD proxy. Can be enabled by `.spec.proxySettings.enableReadNodeRouting`,
and can be configured in the `.spec.connectivity.proxyReadServiceTemplate`
section. This is the entry-point service for the applications.
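A hedged sketch of customizing this service, assuming the template follows the usual Kubernetes Service template shape (metadata plus a standard Service spec); the label and service type below are examples only, not values from the documentation:

```yaml
apiVersion: pgd.k8s.enterprisedb.io/v1beta1
kind: PGDGroup
metadata:
  name: region-a
spec:
  connectivity:
    proxyReadServiceTemplate:
      metadata:
        labels:
          app.kubernetes.io/component: pgd-proxy-read
      spec:
        # For example, expose the read entry point outside the cluster.
        type: LoadBalancer
```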

@@ -169,11 +169,11 @@ either manually or automated, by updating the content of the secret.

## Connecting to a PGD cluster from an application

-Connecting to a PGD Group from an application running inside the same Kubernetes cluster
-or from outside the cluster is a simple procedure. In both cases, you will connect to
-the proxy service of the PGD Group as the `app` user. The proxy service is a LoadBalancer
-service that will route the connection to the write leader or read nodes of the PGD Group,
-depending on which proxy service is connecting to.
+Connecting to a PGD group from an application running inside the same Kubernetes cluster
+or from outside the cluster is a simple procedure. In both cases, you connect to
+the proxy service of the PGD group as the `app` user. The proxy service is a LoadBalancer
+service that routes the connection to the write leader or read nodes of the PGD group,
+depending on the proxy service it's connecting to.
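As a rough illustration of this pattern from inside the cluster, an application pod can point its libpq environment at the group's proxy service; the image, service name, and database name below are placeholders or assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:latest         # placeholder application image
      env:
        # Placeholder host: use the proxy service created for your PGD group.
        - name: PGHOST
          value: region-a-proxy
        - name: PGUSER
          value: app
        - name: PGDATABASE
          value: app               # assumed application database name
```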

### Connecting from inside the cluster

@@ -41,9 +41,8 @@ their metadata cleaned up
before creating the PGD node. This is written by the restore job.

`k8s.pgd.enterprisedb.io/hash`
-: Holds the hash of the certain part of PGDGroup spec that is utilized in various entities
-like `Cluster`, `ScheduledBackup`, `StatefulSet`, and `Service (node, group and proxy service)`
-to determine if any updates are required for the corresponding resources.
+: To determine if any updates are required for the corresponding resources, holds the hash of the certain part of PGDGroup spec that's used in entities
+like `Cluster`, `ScheduledBackup`, `StatefulSet`, and `Service (node, group and proxy service)`.

`k8s.pgd.enterprisedb.io/latestCleanupExecuted`
: Set in the PGDGroup to indicate that the cleanup was executed.
@@ -53,7 +52,7 @@ to determine if any updates are required for the corresponding resources.
generated. Added to the certificate resources.

`k8s.pgd.enterprisedb.io/nodeRestartHash`
-: Stores the hash of the CNP configuration in PGDGroup, a restart is needed when the configuration
+: Stores the hash of the CNP configuration in PGDGroup. A restart is needed when the configuration
is changed.

`k8s.pgd.enterprisedb.io/noFinalizers`
26 changes: 13 additions & 13 deletions product_docs/docs/postgres_distributed_for_kubernetes/1/ldap.mdx
@@ -1,22 +1,22 @@
---
-title: 'LDAP Authentication'
+title: 'LDAP authentication'
originalFilePath: 'src/ldap.md'
---

-EDB Postgres Distributed for Kubernetes supports LDAP authentication,
+EDB Postgres Distributed for Kubernetes supports LDAP authentication.
LDAP configuration on EDB Postgres Distributed for Kubernetes relies on the
-implementation from EDB Postgres for Kubernetes (PG4K). Please refer to
-[the PG4K documentation](/postgres_for_kubernetes/latest/postgresql_conf/#ldap-configuration)
+implementation from EDB Postgres for Kubernetes (PG4K). See the
+[PG4K documentation](/postgres_for_kubernetes/latest/postgresql_conf/#ldap-configuration)
for the full context.

!!! Important
-Before you proceed, please take some time to familiarize with the
-[LDAP authentication feature in the postgres documentation](https://www.postgresql.org/docs/current/auth-ldap.html).
+Before you proceed, familiarize yourself with the
+[LDAP authentication feature in the Postgres documentation](https://www.postgresql.org/docs/current/auth-ldap.html).

With LDAP support, only the user authentication is sent to LDAP, so the user must already exist in the postgres database.

-Here is an example of LDAP configuration using `simple bind` mode in PGDGroup,
-postgres simply use `prefix + username + suffix` and password to bind the LDAP
+This example shows an LDAP configuration using `simple bind` mode in PGDGroup.
+The Postgres server uses `prefix + username + suffix` and password to bind the LDAP
server to achieve the authentication.

```yaml
@@ -31,10 +31,10 @@ spec:
suffix: ",dc=example,dc=org"
```
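A hedged sketch of a `simple bind` configuration, assuming the LDAP settings sit under the CNP section of the PGDGroup spec; the group name, server address, and DN components are placeholders:

```yaml
apiVersion: pgd.k8s.enterprisedb.io/v1beta1
kind: PGDGroup
metadata:
  name: region-a
spec:
  cnp:
    postgresql:
      ldap:
        server: openldap.example.org   # placeholder LDAP server
        bindAsAuth:
          # Postgres binds as: prefix + username + suffix
          prefix: "cn="
          suffix: ",dc=example,dc=org"
```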
-Here is a example of LDAP configuration using `search+bind` mode in PGDGroup.
-In this mode, the postgres is first bound to the LDAP using `bindDN` with its password stored
-in the secret `bindPassword`, then postgres tries to perform a search under `baseDN` to find a
-username that matches the item specified by `searchAttribute`, if a match is found, postgres finally
+This example shows configuring LDAP using `search+bind` mode in PGDGroup.
+In this mode, the Postgres instance is first bound to the LDAP using `bindDN` with its password stored
+in the secret `bindPassword`. Then Postgres tries to perform a search under `baseDN` to find a
+username that matches the item specified by `searchAttribute`. If a match is found, Postgres finally
+verifies the entry and the password against the LDAP server.
verifies the entry and the password against the LDAP server.

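A hedged sketch of a `search+bind` configuration, under the same assumption about where the LDAP settings live; the base DN, bind DN, secret name, and search attribute are placeholders:

```yaml
apiVersion: pgd.k8s.enterprisedb.io/v1beta1
kind: PGDGroup
metadata:
  name: region-a
spec:
  cnp:
    postgresql:
      ldap:
        server: openldap.example.org   # placeholder LDAP server
        bindSearchAuth:
          baseDN: "ou=people,dc=example,dc=org"
          bindDN: "cn=admin,dc=example,dc=org"
          bindPassword:
            name: ldap-bind-secret     # placeholder secret holding the bind password
            key: ldapBindPassword
          searchAttribute: uid
```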
@@ -1,12 +1,12 @@
---
-title: 'Managed Configuration'
+title: 'Managed configuration'
originalFilePath: 'src/managed.md'
---

The PGD operator allows configuring the `managed` section of a PG4K cluster. The `spec.cnp.managed` stanza
-is used for configuring the supported managed roles within the cluster.
+is used for configuring the supported managed roles in the cluster.

-In this example, a pgdgroup is configured to have a managed role named `foo` with the specified properties set up in postgres.
+In this example, a PGDgroup is configured to have a managed role named `foo` with the specified properties set up in postgres:

```yaml
apiVersion: pgd.k8s.enterprisedb.io/v1beta1
@@ -30,5 +30,4 @@ spec:
replication: true
```
-For more information about managed roles, refer to [EDB Postgres for Kubernetes recovery - Database Role Management](/postgres_for_kubernetes/latest/declarative_role_management/)
+For more information about managed roles, see [Database role management](/postgres_for_kubernetes/latest/declarative_role_management/) in the EDB Postgres for Kubernetes documentation.
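A hedged sketch of such a managed role, with role attributes borrowed from the PG4K/CNP declarative role management schema; the group name, attribute values, and secret name are placeholders:

```yaml
apiVersion: pgd.k8s.enterprisedb.io/v1beta1
kind: PGDGroup
metadata:
  name: region-a
spec:
  cnp:
    managed:
      roles:
        - name: foo
          ensure: present
          comment: Role managed declaratively by the operator
          login: true
          superuser: false
          replication: true
          passwordSecret:
            name: foo-password   # placeholder secret with the role's password
```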
@@ -3,15 +3,15 @@ title: 'SQLMutations'
originalFilePath: 'src/mutations.md'
---

-SQLMutations consist of a list of SQL queries to be executed on the application
-database via the superuser role after a pgd node joins the pgdgroup. Each
+SQLMutations consist of a list of SQL queries to execute on the application
+database via the superuser role after a PGD node joins the PGDgroup. Each
SQLMutation includes an `isApplied` list of queries and an `exec` list of
queries.
-The `isApplied` SQL queries are used to check if the mutation has already been
+The `isApplied` SQL queries are used to check if the mutation was already
applied. If any of the `isApplied` queries return false, the `exec` list of SQL
-queries will be added to the execution queue.
+queries is added to the execution queue.

-Here is a sample of SQLMutations
+Here's a sample of SQLMutations:

```yaml
apiVersion: pgd.k8s.enterprisedb.io/v1beta1
@@ -39,24 +39,23 @@ spec:
```
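A hedged sketch of a single mutation entry using the fields described above; the queries are placeholders, and the `writeLeader` type used here is covered in the next section:

```yaml
apiVersion: pgd.k8s.enterprisedb.io/v1beta1
kind: PGDGroup
metadata:
  name: region-a
spec:
  pgd:
    mutations:
      # Runs on the write leader once it has been elected.
      - type: writeLeader
        isApplied:
          # Should return false while the exec queries still need to run.
          - "SELECT EXISTS (SELECT 1 FROM pg_extension WHERE extname = 'pg_stat_statements')"
        exec:
          - "CREATE EXTENSION IF NOT EXISTS pg_stat_statements"
```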
-## SQLMutation Types
+## SQLMutation types

-The operator offers three types of SQLMutations, which can be specified by `spec.pgd.mutations[].type`, with `always`
-being the default option.
+The operator offers three types of SQLMutations, which you specify with `spec.pgd.mutations[].type`. The default is `always`.

-- beforeSubgroupRaft
-- always
-- writeLeader
+- `beforeSubgroupRaft`
+- `always`
+- `writeLeader`

The `beforeSubgroupRaft` and `always` mutations are evaluated in every reconcile loop. The difference between
the two mutations lies in their execution phase:

-- For `always` mutations, they are run in each reconcile loop without any restrictions on the pgdgroup.
-- On the other hand, `beforeSubgroupRaft` mutations are only executed if the pgdgroup has defined data nodes
-and pgd proxies, and specifically before the subgroup raft becomes ready.
+- For `always` mutations, they're run in each reconcile loop without any restrictions on the PGDgroup.
+- `beforeSubgroupRaft` mutations are executed only if the PGDgroup has defined data nodes
+and PGD proxies, and specifically before the subgroup Raft becomes ready.

-Both `beforeSubgroupRaft` and `always` mutations can run on any pgd node within the pgdgroup, including witness nodes.
-Therefore, they should not be used for making data changes to the application database, as witness nodes do not contain
+Both `beforeSubgroupRaft` and `always` mutations can run on any PGD node in the PGDgroup, including witness nodes.
+Therefore, don't use them for making data changes to the application database, as witness nodes don't contain
application database data.

The `writeLeader` mutation is triggered and executed after the write leader is elected. The `exec` operations
@@ -1,11 +1,11 @@
---
-title: 'API Reference'
+title: 'API reference'
originalFilePath: 'src/pg4k-pgd.v1beta1.md'
---

-<p>Package v1beta1 contains API Schema definitions for the pgd v1beta1 API group</p>
+<p>Package v1beta1 contains API schema definitions for the pgd v1beta1 API group.</p>

-## Resource Types
+## Resource types

- [PGDGroup](#pgd-k8s-enterprisedb-io-v1beta1-PGDGroup)
- [PGDGroupCleanup](#pgd-k8s-enterprisedb-io-v1beta1-PGDGroupCleanup)