From d00dffee771d33ea8ef9fd21ed0ee9062ce18c72 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Tue, 29 Oct 2024 11:12:21 -0400 Subject: [PATCH 1/5] Edits to Postgres Distributed for Kubernetes v1.0.1 #6152 --- .../1/architecture.mdx | 4 +- .../1/backup.mdx | 44 +++++++++---------- .../1/connectivity.mdx | 22 +++++----- .../1/labels_annotations.mdx | 7 ++- .../1/ldap.mdx | 26 +++++------ .../1/managed.mdx | 11 +++-- .../1/mutations.mdx | 31 +++++++------ .../1/pg4k-pgd.v1beta1.mdx | 6 +-- .../1/recovery.mdx | 26 +++++------ 9 files changed, 87 insertions(+), 90 deletions(-) diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/architecture.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/architecture.mdx index 41c66e7a42b..b338ba5839a 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/architecture.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/architecture.mdx @@ -98,8 +98,8 @@ Two kinds of routing are available with PGD proxies: In EDB Postgres Distributed for Kubernetes, local routing is used by default, and a configuration option is available to select global routing. -For more information, see the -[PGD documentation of routing with Raft](/pgd/latest/routing/raft/). +For more information on routing with Raft, see +[Proxies, Raft, and Raft subgroups](/pgd/latest/routing/raft/) in the PGD documentation. ### PGD architectures and high availability diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/backup.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/backup.mdx index eafa8914125..06dcec4838a 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/backup.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/backup.mdx @@ -60,19 +60,19 @@ The `.spec.backup.schedulers[].method` field allows you to define the scheduled - `volumeSnapshot` - `barmanObjectStore` (the default) -You can define more than one scheduler, but each method can only be used by one -scheduler, i.e. two schedulers are not allowed to use the same method. +You can define more than one scheduler, but each method can be used by only one +scheduler. That is, two schedulers aren't allowed to use the same method. -For object store backups, with the default `barmanObjectStore` method, the stanza -`spec.backup.configuration.barmanObjectStore` is used to define the object store information for both backup and wal archiving. -More information can be found in [EDB Postgres for Kubernetes Backup on Object Stores](/postgres_for_kubernetes/latest/backup_barmanobjectstore/). +For object store backups, with the default `barmanObjectStore` method, use the stanza +`spec.backup.configuration.barmanObjectStore` to define the object store information for both backup and WAL archiving. +For more information, see [Backup on object stores](/postgres_for_kubernetes/latest/backup_barmanobjectstore/) in the EDB Postgres for Kubernetes documentation. -To perform volumeSnapshot backups, the `volumeSnapshot` method can be selected. -The stanza -`spec.backup.configuration.barmanObjectStore.volumeSnapshot` is used to define the volumeSnapshot configuration. -More information can be found in [EDB Postgres for Kubernetes Backup on Volume Snapshots](/postgres_for_kubernetes/latest/backup_volumesnapshot/). +To perform volumeSnapshot backups, you can select the `volumeSnapshot` method. +Use the stanza +`spec.backup.configuration.barmanObjectStore.volumeSnapshot` to define the volumeSnapshot configuration. 
+For more information, see [Backup on volume snapshots](/postgres_for_kubernetes/latest/backup_volumesnapshot/) in the EDB Postgres for Kubernetes documentation. -The following example shows how to use the `volumeSnapshot` method for backup. WAL archiving is still done onto the barman object store. +This example shows how to use the `volumeSnapshot` method for backup. WAL archiving is still done onto the Barman object store. ```yaml apiVersion: pgd.k8s.enterprisedb.io/v1beta1 @@ -104,10 +104,10 @@ spec: immediate: true ``` -For more information about the comparison of two backup methods, see [EDB Postgres for Kubernetes for Object stores or volume snapshots](/postgres_for_kubernetes/latest/backup/#object-stores-or-volume-snapshots-which-one-to-use). +For a comparison of these two backup methods, see [Object stores or volume snapshots](/postgres_for_kubernetes/latest/backup/#object-stores-or-volume-snapshots-which-one-to-use) in the EDB Postgres for Kubernetes documentation. The `.spec.backup.schedulers[].schedule` field allows you to define a cron schedule, expressed -in the [Go `cron` package format](https://pkg.go.dev/github.com/robfig/cron#hdr-CRON_Expression_Format). +in the [Go `cron` package format](https://pkg.go.dev/github.com/robfig/cron#hdr-CRON_Expression_Format): ```yaml apiVersion: pgd.k8s.enterprisedb.io/v1beta1 @@ -123,28 +123,28 @@ spec: immediate: true ``` -You can suspend scheduled backups if necessary by setting `.spec.backup.schedulers[].suspend` to `true`. -This will prevent new backups from being scheduled. +If necessary, you can suspend scheduled backups by setting `.spec.backup.schedulers[].suspend` to `true`. +This setting prevents new backups from being scheduled. If you want to execute a backup as soon as the `ScheduledBackup` resource is created, set `.spec.backup.schedulers[].immediate` to `true`. `.spec.backupOwnerReference` indicates the `ownerReference` to use -in the created backup resources. The choices are: +in the created backup resources. The options are: -- **none** — No owner reference for created backup objects. -- **self** — Sets the `ScheduledBackup` object as owner of the backup. -- **cluster** — Sets the cluster as owner of the backup. +- **none** — Doesn't set an owner reference for created backup objects. +- **self** — Sets the `ScheduledBackup` object as owner of the backup. +- **cluster** — Sets the cluster as owner of the backup. !!! Warning - The `.spec.backup.cron` field is now deprecated. Please use + The `.spec.backup.cron` field is deprecated. Use `.spec.backup.schedulers` instead. - Note that while `.spec.backup.cron` can still be used, it cannot - be used simultaneously with `.spec.backup.schedulers`. + While you can still use `.spec.backup.cron`, you can't use it + with `.spec.backup.schedulers`. !!! Note The EDB Postgres for Kubernetes `ScheduledBackup` object contains the `cluster` option to specify the - cluster to back up. This option is currently not supported by EDB Postgres Distributed for Kubernetes and is + cluster to back up. This option currently isn't supported by EDB Postgres Distributed for Kubernetes and is ignored if specified. 
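Here's a minimal sketch that ties the scheduling and ownership options together in a single PGDGroup backup stanza. It's an illustration only: the group name, bucket path, and schedule are assumptions made up for the example, not defaults.

```yaml
apiVersion: pgd.k8s.enterprisedb.io/v1beta1
kind: PGDGroup
metadata:
  name: region-a                                 # hypothetical group name
spec:
  backupOwnerReference: self                     # each backup is owned by its ScheduledBackup
  backup:
    configuration:
      barmanObjectStore:
        destinationPath: s3://backups/region-a   # hypothetical object store path
    schedulers:
      - method: barmanObjectStore
        schedule: "0 0 0 * * *"                  # Go cron format with seconds: daily at midnight
        immediate: true                          # take one backup as soon as the resource is created
        suspend: false                           # set to true to stop scheduling new backups
```

Flipping `suspend` to `true` later keeps the scheduler definition in place while preventing new backups from being scheduled.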
If an elected backup node is deleted, the operator transparently elects a new backup node diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/connectivity.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/connectivity.mdx index b32266ff3ff..32884097dad 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/connectivity.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/connectivity.mdx @@ -20,14 +20,14 @@ PGD cluster includes: Resources in a PGD cluster are accessible through Kubernetes services. Every PGD group manages several of them, namely: -- One service per node, used for internal communications (*node service*) +- One service per node, used for internal communications (*node service*). - A *group service* to reach any node in the group, used primarily by EDB Postgres Distributed for Kubernetes - to discover a new group in the cluster + to discover a new group in the cluster. - A *proxy service* to enable applications to reach the write leader of the - group transparently using PGD Proxy + group transparently using PGD Proxy. - A *proxy-r service* to enable applications to reach the read nodes of the - group, transparently using PGD Proxy. This service is disabled by default - and controlled by the `.spec.proxySettings.enableReadNodeRouting` setting + group transparently using PGD Proxy. This service is disabled by default + and controlled by the `.spec.proxySettings.enableReadNodeRouting` setting. For an example that uses these services, see [Connecting an application to a PGD cluster](#connecting-to-a-pgd-cluster-from-an-application). @@ -58,7 +58,7 @@ Proxy Service Template Proxy Read Service Template : Each PGD group has a proxy service to reach the group read nodes through - the PGD proxy, can be enabled by `.spec.proxySettings.enableReadNodeRouting`, + the PGD proxy. Can be enabled by `.spec.proxySettings.enableReadNodeRouting`, and can be configured in the `.spec.connectivity.proxyReadServiceTemplate` section. This is the entry-point service for the applications. @@ -169,11 +169,11 @@ either manually or automated, by updating the content of the secret. ## Connecting to a PGD cluster from an application -Connecting to a PGD Group from an application running inside the same Kubernetes cluster -or from outside the cluster is a simple procedure. In both cases, you will connect to -the proxy service of the PGD Group as the `app` user. The proxy service is a LoadBalancer -service that will route the connection to the write leader or read nodes of the PGD Group, -depending on which proxy service is connecting to. +Connecting to a PGD group from an application running inside the same Kubernetes cluster +or from outside the cluster is a simple procedure. In both cases, you connect to +the proxy service of the PGD group as the `app` user. The proxy service is a LoadBalancer +service that routes the connection to the write leader or read nodes of the PGD group, +depending on the proxy service it's connecting to. ### Connecting from inside the cluster diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/labels_annotations.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/labels_annotations.mdx index d40cab60aee..dc653ffc188 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/labels_annotations.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/labels_annotations.mdx @@ -41,9 +41,8 @@ their metadata cleaned up before creating the PGD node. 
This is written by the restore job. `k8s.pgd.enterprisedb.io/hash` -: Holds the hash of the certain part of PGDGroup spec that is utilized in various entities -like `Cluster`, `ScheduledBackup`, `StatefulSet`, and `Service (node, group and proxy service)` -to determine if any updates are required for the corresponding resources. +: To determine if any updates are required for the corresponding resources, holds the hash of the certain part of PGDGroup spec that's used in entities +like `Cluster`, `ScheduledBackup`, `StatefulSet`, and `Service (node, group and proxy service)`. `k8s.pgd.enterprisedb.io/latestCleanupExecuted` : Set in the PGDGroup to indicate that the cleanup was executed. @@ -53,7 +52,7 @@ to determine if any updates are required for the corresponding resources. generated. Added to the certificate resources. `k8s.pgd.enterprisedb.io/nodeRestartHash` -: Stores the hash of the CNP configuration in PGDGroup, a restart is needed when the configuration +: Stores the hash of the CNP configuration in PGDGroup. A restart is needed when the configuration is changed. `k8s.pgd.enterprisedb.io/noFinalizers` diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/ldap.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/ldap.mdx index c042e377e38..bb7dda7aa74 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/ldap.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/ldap.mdx @@ -1,22 +1,22 @@ --- -title: 'LDAP Authentication' +title: 'LDAP authentication' originalFilePath: 'src/ldap.md' --- -EDB Postgres Distributed for Kubernetes supports LDAP authentication, +EDB Postgres Distributed for Kubernetes supports LDAP authentication. LDAP configuration on EDB Postgres Distributed for Kubernetes relies on the -implementation from EDB Postgres for Kubernetes (PG4K). Please refer to -[the PG4K documentation](/postgres_for_kubernetes/latest/postgresql_conf/#ldap-configuration) +implementation from EDB Postgres for Kubernetes (PG4K). See the +[PG4K documentation](/postgres_for_kubernetes/latest/postgresql_conf/#ldap-configuration) for the full context. !!! Important - Before you proceed, please take some time to familiarize with the - [LDAP authentication feature in the postgres documentation](https://www.postgresql.org/docs/current/auth-ldap.html). + Before you proceed, familiarize yourself with the + [LDAP authentication feature in the Postgres documentation](https://www.postgresql.org/docs/current/auth-ldap.html). -With LDAP support, only the user authentication is sent to LDAP, so the user must already exist in the postgres database. +With LDAP support, only the user authentication is sent to LDAP, so the user must already exist in the postgres database. -Here is an example of LDAP configuration using `simple bind` mode in PGDGroup, -postgres simply use `prefix + username + suffix` and password to bind the LDAP +This example shows an LDAP configuration using `simple bind` mode in PGDGroup. +Use `prefix + username + suffix` and password to bind the LDAP server to achieve the authentication. ```yaml @@ -31,10 +31,10 @@ spec: suffix: ",dc=example,dc=org" ``` -Here is a example of LDAP configuration using `search+bind` mode in PGDGroup. 
-In this mode, the postgres is first bound to the LDAP using `bindDN` with its password stored -in the secret `bindPassword`, then postgres tries to perform a search under `baseDN` to find a -username that matches the item specified by `searchAttribute`, if a match is found, postgres finally +This example shows configuring LDAP using `search+bind` mode in PGDGroup. +In this mode, the postgres database is first bound to the LDAP using `bindDN` with its password stored +in the secret `bindPassword`. Then Postgres tries to perform a search under `baseDN` to find a +username that matches the item specified by `searchAttribute`. If a match is found, Postgres finally verifies the entry and the password against the LDAP server. ```yaml diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/managed.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/managed.mdx index d19b9520628..2a1bc4e4688 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/managed.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/managed.mdx @@ -1,12 +1,12 @@ --- -title: 'Managed Configuration' +title: 'Managed configuration' originalFilePath: 'src/managed.md' --- -The PGD operator allows configuring the `managed` section of a PG4K cluster. The `spec.cnp.managed` stanza -is used for configuring the supported managed roles within the cluster. +The PGD operator allows configuring the `managed` section of a PGD4K cluster. The `spec.cnp.managed` stanza +is used for configuring the supported managed roles in the cluster. -In this example, a pgdgroup is configured to have a managed role named `foo` with the specified properties set up in postgres. +In this example, a PGDgroup is configured to have a managed role named `foo` with the specified properties set up in postgres: ```yaml apiVersion: pgd.k8s.enterprisedb.io/v1beta1 @@ -30,5 +30,4 @@ spec: replication: true ``` -For more information about managed roles, refer to [EDB Postgres for Kubernetes recovery - Database Role Management](/postgres_for_kubernetes/latest/declarative_role_management/) - +For more information about managed roles, see [Database role management](/postgres_for_kubernetes/latest/declarative_role_management/) in the EDB Postgres for Kubernetes documentation. diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/mutations.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/mutations.mdx index 7ff807ad911..184090a2526 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/mutations.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/mutations.mdx @@ -3,15 +3,15 @@ title: 'SQLMutations' originalFilePath: 'src/mutations.md' --- -SQLMutations consist of a list of SQL queries to be executed on the application -database via the superuser role after a pgd node joins the pgdgroup. Each +SQLMutations consist of a list of SQL queries to execute on the application +database via the superuser role after a PGD node joins the PGDgroup. Each SQLMutation includes an `isApplied` list of queries and an `exec` list of queries. -The `isApplied` SQL queries are used to check if the mutation has already been +The `isApplied` SQL queries are used to check if the mutation was already applied. If any of the `isApplied` queries return false, the `exec` list of SQL -queries will be added to the execution queue. +queries is added to the execution queue. 
-Here is a sample of SQLMutations +Here's a sample of SQLMutations: ```yaml apiVersion: pgd.k8s.enterprisedb.io/v1beta1 @@ -39,24 +39,23 @@ spec: ``` -## SQLMutation Types +## SQLMutation types -The operator offers three types of SQLMutations, which can be specified by `spec.pgd.mutations[].type`, with `always` -being the default option. +The operator offers three types of SQLMutations, which you specify with `spec.pgd.mutations[].type`. The default is `always`. -- beforeSubgroupRaft -- always -- writeLeader +- `beforeSubgroupRaft` +- `always` +- `writeLeader` The `beforeSubgroupRaft` and `always` mutations are evaluated in every reconcile loop. The difference between the two mutations lies in their execution phase: -- For `always` mutations, they are run in each reconcile loop without any restrictions on the pgdgroup. -- On the other hand, `beforeSubgroupRaft` mutations are only executed if the pgdgroup has defined data nodes - and pgd proxies, and specifically before the subgroup raft becomes ready. +- For `always` mutations, they're run in each reconcile loop without any restrictions on the PGDgroup. +- `beforeSubgroupRaft` mutations are executed only if the PGDgroup has defined data nodes + and PGD proxies, and specifically before the subgroup Raft becomes ready. -Both `beforeSubgroupRaft` and `always` mutations can run on any pgd node within the pgdgroup, including witness nodes. -Therefore, they should not be used for making data changes to the application database, as witness nodes do not contain +Both `beforeSubgroupRaft` and `always` mutations can run on any PGD node in the PGDgroup, including witness nodes. +Therefore, don't use them for making data changes to the application database, as witness nodes don't contain application database data. The `writeLeader` mutation is triggered and executed after the write leader is elected. The `exec` operations diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/pg4k-pgd.v1beta1.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/pg4k-pgd.v1beta1.mdx index 3bf6a6d1b7d..1bdb2f4e32f 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/pg4k-pgd.v1beta1.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/pg4k-pgd.v1beta1.mdx @@ -1,11 +1,11 @@ --- -title: 'API Reference' +title: 'API reference' originalFilePath: 'src/pg4k-pgd.v1beta1.md' --- -

-Package v1beta1 contains API Schema definitions for the pgd v1beta1 API group
+Package v1beta1 contains API schema definitions for the pgd v1beta1 API group.

-## Resource Types +## Resource types - [PGDGroup](#pgd-k8s-enterprisedb-io-v1beta1-PGDGroup) - [PGDGroupCleanup](#pgd-k8s-enterprisedb-io-v1beta1-PGDGroupCleanup) diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/recovery.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/recovery.mdx index c1ab7eff3e9..65e66e6f133 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/recovery.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/recovery.mdx @@ -5,7 +5,7 @@ originalFilePath: 'src/recovery.md' In EDB Postgres Distributed for Kubernetes, recovery is available as a way to bootstrap a new PGD group starting from an available physical backup of a PGD node. -Recovery can't be performed in-place on an existing PGD group. +Recovery can't be performed in place on an existing PGD group. EDB Postgres Distributed for Kubernetes also supports point-in-time recovery (PITR), which allows you to restore a PGD group up to any point in time, from the first available backup in your catalog to the last archived @@ -157,7 +157,7 @@ spec: ``` !!! Important - When a `backupID` is specified, make sure to list only the related PGD node + When you specify a `backupID`, make sure to list only the related PGD node in the `serverNames` option, and avoid listing the other ones. !!! Note @@ -168,12 +168,12 @@ spec: ## Recovery from volumeSnapshot -You can also recover a pgdgroup from a volumeSnapshot backup. Stanza +You can also recover a PGDgroup from a volumeSnapshot backup. Stanza `spec.restore.volumeSnapshots` is used to define the criteria for volumeSnapshots restore candidates. The operator transparently selects the latest volumeSnapshot among the candidates. The operator requires the following annotations/labels in the volumeSnapshot. These -annotations/labels will be automatically added if volumeSnapshots are taken by the operator. +annotations/labels are automatically added if volumeSnapshots are taken by the operator. Annotations: @@ -185,12 +185,12 @@ Labels: - `k8s.enterprisedb.io/cluster` indicates the node where the volumeSnapshot is taken, crucial for fetching the serverName in the object store for WAL replaying. -- `k8s.enterprisedb.io/backupName` is the backup name of the volumeSnapshot, used to group - volumeSnapshots, when more volumes are defined in the backup. -- `k8s.enterprisedb.io/tablespaceName` represents the tablespace name of the volumeSnapshot, when +- `k8s.enterprisedb.io/backupName` is the backup name of the volumeSnapshot. Used to group + volumeSnapshots when more volumes are defined in the backup. +- `k8s.enterprisedb.io/tablespaceName` represents the tablespace name of the volumeSnapshot when the volumeSnapshot role is `PG_TABLESPACE`. -The following example illustrates a full recovery from volumeSnapshots. After the volumeSnapshot recovery, +This example shows a full recovery from volumeSnapshots. After the volumeSnapshot recovery, WAL replaying for full recovery will target server `pgdgroup-backup-vs-1`. ```yaml @@ -221,14 +221,14 @@ spec: maxParallel: 8 ``` -For more information, please see [EDB Postgres for Kubernetes recovery from volumeSnapshot objects](/postgres_for_kubernetes/latest/recovery/#recovery-from-volumesnapshot-objects). +For more information, see [Recovery from volumeSnapshot objects](/postgres_for_kubernetes/latest/recovery/#recovery-from-volumesnapshot-objects) in the EDB Postgres for Kubernetes documentation. 
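To make the required metadata concrete, here's a sketch of a restore candidate carrying the operator-managed labels. The snapshot, backup, and PVC names are hypothetical, and the operator-added annotations are omitted; only the label keys come from the list in this section.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pgdgroup-backup-vs-1-20241029                   # hypothetical snapshot name
  labels:
    k8s.enterprisedb.io/cluster: pgdgroup-backup-vs-1   # node the snapshot was taken from
    k8s.enterprisedb.io/backupName: backup-20241029     # groups snapshots from the same backup
spec:
  source:
    persistentVolumeClaimName: pgdgroup-backup-vs-1-1   # hypothetical PVC name
```

Snapshots taken by the operator carry this metadata already; you'd only need to add it by hand to snapshots created outside the operator.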
## PITR from volumeSnapshot -Same as when doing recovery from an object store, you can instruct PostgreSQL to halt the replay of Write-Ahead Logs (WALs) -at any specific moment during volumeSnapshot recovery. +You can instruct PostgreSQL to halt the replay of write-ahead logs (WALs) +at any specific moment during volumeSnapshot recovery. This is the same capability as when recovering from an object store. -This example demonstrates setting a time-based target for recovery using volume snapshots. +This example shows setting a time-based target for recovery using volume snapshots: ```yaml apiVersion: pgd.k8s.enterprisedb.io/v1beta1 @@ -263,4 +263,4 @@ spec: ## Recovery targets Beyond PITR are other recovery target criteria you can use. -For more information on all the available recovery targets, see [EDB Postgres for Kubernetes recovery targets](https://www.enterprisedb.com/docs/postgres_for_kubernetes/latest/recovery/#point-in-time-recovery-pitr) in the EDB Postgres for Kubernetes documentation. +For more information on all the available recovery targets, see [Recovery](/postgres_for_kubernetes/latest/recovery/#point-in-time-recovery-pitr) in the EDB Postgres for Kubernetes documentation. From b7bf00488fd3281d4cad4953aef664ae4648dc0a Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Tue, 29 Oct 2024 11:22:39 -0400 Subject: [PATCH 2/5] Apply suggestions from code review --- .../docs/postgres_distributed_for_kubernetes/1/backup.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/backup.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/backup.mdx index 06dcec4838a..e9185dfeeb9 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/backup.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/backup.mdx @@ -140,7 +140,7 @@ in the created backup resources. The options are: The `.spec.backup.cron` field is deprecated. Use `.spec.backup.schedulers` instead. While you can still use `.spec.backup.cron`, you can't use it - with `.spec.backup.schedulers`. + at the same time as `.spec.backup.schedulers`. !!! Note The EDB Postgres for Kubernetes `ScheduledBackup` object contains the `cluster` option to specify the From 92a0a17a88f1c138314dd3dfc7bf1b94c8f36276 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Thu, 7 Nov 2024 10:57:52 -0500 Subject: [PATCH 3/5] Update product_docs/docs/postgres_distributed_for_kubernetes/1/managed.mdx --- .../docs/postgres_distributed_for_kubernetes/1/managed.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/managed.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/managed.mdx index 2a1bc4e4688..f8bfb9e7a8a 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/managed.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/managed.mdx @@ -3,7 +3,7 @@ title: 'Managed configuration' originalFilePath: 'src/managed.md' --- -The PGD operator allows configuring the `managed` section of a PGD4K cluster. The `spec.cnp.managed` stanza +The PGD operator allows configuring the `managed` section of a PG4K cluster. The `spec.cnp.managed` stanza is used for configuring the supported managed roles in the cluster. 
In this example, a PGDgroup is configured to have a managed role named `foo` with the specified properties set up in postgres: From c816b6580b0006deed8e19793ec0d094af3b91a4 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Thu, 7 Nov 2024 10:59:36 -0500 Subject: [PATCH 4/5] Update product_docs/docs/postgres_distributed_for_kubernetes/1/ldap.mdx --- .../docs/postgres_distributed_for_kubernetes/1/ldap.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/ldap.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/ldap.mdx index bb7dda7aa74..3608f95c682 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/ldap.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/ldap.mdx @@ -16,7 +16,7 @@ for the full context. With LDAP support, only the user authentication is sent to LDAP, so the user must already exist in the postgres database. This example shows an LDAP configuration using `simple bind` mode in PGDGroup. -Use `prefix + username + suffix` and password to bind the LDAP +The Postgres server uses `prefix + username + suffix` and password to bind the LDAP server to achieve the authentication. ```yaml From 423db4350c30496b91464aa56585f51cd6192e12 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Thu, 7 Nov 2024 10:59:59 -0500 Subject: [PATCH 5/5] Update product_docs/docs/postgres_distributed_for_kubernetes/1/ldap.mdx --- .../docs/postgres_distributed_for_kubernetes/1/ldap.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/ldap.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/ldap.mdx index 3608f95c682..a795d14f652 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/ldap.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/ldap.mdx @@ -32,7 +32,7 @@ spec: ``` This example shows configuring LDAP using `search+bind` mode in PGDGroup. -In this mode, the postgres database is first bound to the LDAP using `bindDN` with its password stored +In this mode, the Postgres instance is first bound to the LDAP using `bindDN` with its password stored in the secret `bindPassword`. Then Postgres tries to perform a search under `baseDN` to find a username that matches the item specified by `searchAttribute`. If a match is found, Postgres finally verifies the entry and the password against the LDAP server.