From 16885f3fcdd7eecaa5ccdf146033154b001281d9 Mon Sep 17 00:00:00 2001 From: cnp-autobot <85171364+cnp-autobot@users.noreply.github.com> Date: Thu, 1 Aug 2024 07:23:24 +0000 Subject: [PATCH 1/3] [create-pull-request] automated change --- .../1/before_you_start.mdx | 2 +- .../postgres_for_kubernetes/1/bootstrap.mdx | 32 +- .../1/cluster_conf.mdx | 2 +- .../1/connection_pooling.mdx | 19 + .../1/container_images.mdx | 2 +- .../1/default-monitoring.yaml | 66 +++ .../1/failure_modes.mdx | 6 +- .../openshift-webconsole-multinamespace.png | 4 +- .../1/images/openshift/operatorhub_2.png | 4 +- .../docs/postgres_for_kubernetes/1/index.mdx | 8 +- .../1/installation_upgrade.mdx | 4 +- .../1/kubectl-plugin.mdx | 547 ++---------------- .../1/labels_annotations.mdx | 16 +- .../postgres_for_kubernetes/1/logging.mdx | 8 +- .../postgres_for_kubernetes/1/monitoring.mdx | 2 + .../postgres_for_kubernetes/1/openshift.mdx | 48 +- .../1/operator_capability_levels.mdx | 5 +- .../postgres_for_kubernetes/1/pg4k.v1.mdx | 4 +- .../1/postgresql_conf.mdx | 21 +- .../1/private_edb_registry.mdx | 59 +- .../postgres_for_kubernetes/1/quickstart.mdx | 9 +- .../postgres_for_kubernetes/1/recovery.mdx | 92 +-- .../1/replica_cluster.mdx | 75 +-- .../1/samples/k9s/plugins.yml | 18 +- .../1/samples/monitoring/prometheusrule.yaml | 2 +- .../postgres_for_kubernetes/1/security.mdx | 66 +-- .../1/troubleshooting.mdx | 9 +- .../1/wal_archiving.mdx | 2 +- 28 files changed, 407 insertions(+), 725 deletions(-) diff --git a/product_docs/docs/postgres_for_kubernetes/1/before_you_start.mdx b/product_docs/docs/postgres_for_kubernetes/1/before_you_start.mdx index 3a2a3ce37c0..1bdd6a97161 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/before_you_start.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/before_you_start.mdx @@ -76,7 +76,7 @@ specific to Kubernetes and PostgreSQL. : `kubectl` is the command-line tool used to manage a Kubernetes cluster. EDB Postgres for Kubernetes requires a Kubernetes version supported by the community. Please refer to the -[Supported releases](https://www.enterprisedb.com/resources/platform-compatibility#pgk8s) page for details. +["Supported releases"](/resources/platform-compatibility#pgk8s) page for details. ## PostgreSQL terminology diff --git a/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx b/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx index efef3625519..b5b642327ce 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx @@ -628,8 +628,15 @@ from a live cluster, just like the case of `initdb` and `recovery` bootstrap me If the new cluster is created as a replica cluster (with replica mode enabled), application database configuration will be skipped. -The following example configure the application database `app` with password in -supplied secret `app-secret` after bootstrap from a live cluster. +!!! Important + While the `Cluster` is in recovery mode, no changes to the database, + including the catalog, are permitted. This restriction includes any role + overrides, which are deferred until the `Cluster` transitions to primary. + During the recovery phase, roles remain as defined in the source cluster. + +The example below configures the `app` database with the owner `app` and +the password stored in the provided secret `app-secret`, following the +bootstrap from a live cluster. 
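The referenced `app-secret` must already exist when the cluster manifest below is applied. A minimal sketch of such a secret — assuming the `kubernetes.io/basic-auth` layout with `username` and `password` keys (check the bootstrap documentation for the exact requirements of your operator version), and using an illustrative password value — could be:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: kubernetes.io/basic-auth
stringData:
  # the username should match the configured database owner ("app" here);
  # the password value below is only a placeholder
  username: app
  password: ChangeMe-ExamplePassword
```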
```yaml apiVersion: postgresql.k8s.enterprisedb.io/v1 @@ -645,19 +652,16 @@ spec: source: cluster-example ``` -With the above configuration, the following will happen after recovery is completed: +With the above configuration, the following will happen only **after recovery is +completed**: -1. if database `app` does not exist, a new database `app` will be created. -2. if user `app` does not exist, a new user `app` will be created. -3. if user `app` is not the owner of database, user `app` will be granted - as owner of database `app`. -4. If value of `username` match value of `owner` in secret, the password of - application database will be changed to the value of `password` in secret. - -!!! Important - For a replica cluster with replica mode enabled, the operator will not - create any database or user in the PostgreSQL instance, as these will be - recovered from the original cluster. +1. If the `app` database does not exist, it will be created. +2. If the `app` user does not exist, it will be created. +3. If the `app` user is not the owner of the `app` database, ownership will be + granted to the `app` user. +4. If the `username` value matches the `owner` value in the secret, the + password for the application user (the `app` user in this case) will be + updated to the `password` value in the secret. #### Current limitations diff --git a/product_docs/docs/postgres_for_kubernetes/1/cluster_conf.mdx b/product_docs/docs/postgres_for_kubernetes/1/cluster_conf.mdx index 8b550eb893d..0a515fb9465 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/cluster_conf.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/cluster_conf.mdx @@ -50,7 +50,7 @@ EDB Postgres for Kubernetes relies on [ephemeral volumes](https://kubernetes.io/ for part of the internal activities. Ephemeral volumes exist for the sole duration of a pod's life, without persisting across pod restarts. -### Volume Claim Template for Temporary Storage +# Volume Claim Template for Temporary Storage The operator uses by default an `emptyDir` volume, which can be customized by using the `.spec.ephemeralVolumesSizeLimit field`. This can be overridden by specifying a volume claim template in the `.spec.ephemeralVolumeSource` field. diff --git a/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx b/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx index b2ac5abdc19..3943ef00288 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx @@ -89,6 +89,11 @@ deletion of the pooler, and vice versa. possible architectures. You can have clusters without poolers, clusters with a single pooler, or clusters with several poolers, that is, one per application. +!!! Important + When the operator is upgraded, the pooler pods will undergo a rolling + upgrade. This is necessary to ensure that the instance manager within the + pooler pods is also upgraded. + ## Security Any PgBouncer pooler is transparently integrated with EDB Postgres for Kubernetes support for @@ -286,6 +291,20 @@ spec: default_pool_size: "10" ``` +The operator by default adds a `ServicePort` with the following data: + +``` + ports: + - name: pgbouncer + port: 5432 + protocol: TCP + targetPort: pgbouncer +``` + +!!! Warning + Specifying a `ServicePort` with the name `pgbouncer` or the port `5432` will prevent the default `ServicePort` from being added. 
+ This because `ServicePort` entries with the same `name` or `port` are not allowed on Kubernetes and result in errors. + ## High availability (HA) Because of Kubernetes' deployments, you can configure your pooler to run on a diff --git a/product_docs/docs/postgres_for_kubernetes/1/container_images.mdx b/product_docs/docs/postgres_for_kubernetes/1/container_images.mdx index 6d57d72929f..ef13a56f932 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/container_images.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/container_images.mdx @@ -41,7 +41,7 @@ EDB provides and supports for EDB Postgres for Kubernetes, and publishes them on [quay.io](https://quay.io/enterprisedb/postgresql). -## Image tag requirements +## Image Tag Requirements To ensure the operator makes informed decisions, it must accurately detect the PostgreSQL major version. This detection can occur in two ways: diff --git a/product_docs/docs/postgres_for_kubernetes/1/default-monitoring.yaml b/product_docs/docs/postgres_for_kubernetes/1/default-monitoring.yaml index 309f6fd341a..107072299f2 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/default-monitoring.yaml +++ b/product_docs/docs/postgres_for_kubernetes/1/default-monitoring.yaml @@ -83,6 +83,7 @@ data: , pg_catalog.age(datfrozenxid) AS xid_age , pg_catalog.mxid_age(datminmxid) AS mxid_age FROM pg_catalog.pg_database + WHERE datallowconn metrics: - datname: usage: "LABEL" @@ -247,6 +248,71 @@ data: usage: "COUNTER" description: "Number of buffers allocated" + pg_stat_bgwriter_17: + runonserver: ">=17.0.0" + name: pg_stat_bgwriter + query: | + SELECT buffers_clean + , maxwritten_clean + , buffers_alloc + , EXTRACT(EPOCH FROM stats_reset) AS stats_reset_time + FROM pg_catalog.pg_stat_bgwriter + metrics: + - buffers_clean: + usage: "COUNTER" + description: "Number of buffers written by the background writer" + - maxwritten_clean: + usage: "COUNTER" + description: "Number of times the background writer stopped a cleaning scan because it had written too many buffers" + - buffers_alloc: + usage: "COUNTER" + description: "Number of buffers allocated" + - stats_reset_time: + usage: "GAUGE" + description: "Time at which these statistics were last reset" + + pg_stat_checkpointer: + runonserver: ">=17.0.0" + query: | + SELECT num_timed AS checkpoints_timed + , num_requested AS checkpoints_req + , restartpoints_timed + , restartpoints_req + , restartpoints_done + , write_time + , sync_time + , buffers_written + , EXTRACT(EPOCH FROM stats_reset) AS stats_reset_time + FROM pg_catalog.pg_stat_checkpointer + metrics: + - checkpoints_timed: + usage: "COUNTER" + description: "Number of scheduled checkpoints that have been performed" + - checkpoints_req: + usage: "COUNTER" + description: "Number of requested checkpoints that have been performed" + - restartpoints_timed: + usage: "COUNTER" + description: "Number of scheduled restartpoints due to timeout or after a failed attempt to perform it" + - restartpoints_req: + usage: "COUNTER" + description: "Number of requested restartpoints that have been performed" + - restartpoints_done: + usage: "COUNTER" + description: "Number of restartpoints that have been performed" + - write_time: + usage: "COUNTER" + description: "Total amount of time that has been spent in the portion of processing checkpoints and restartpoints where files are written to disk, in milliseconds" + - sync_time: + usage: "COUNTER" + description: "Total amount of time that has been spent in the portion of processing checkpoints and restartpoints where files 
are synchronized to disk, in milliseconds" + - buffers_written: + usage: "COUNTER" + description: "Number of buffers written during checkpoints and restartpoints" + - stats_reset_time: + usage: "GAUGE" + description: "Time at which these statistics were last reset" + pg_stat_database: query: | SELECT datname diff --git a/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx b/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx index a1aab1641cf..24771b9e34e 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx @@ -8,7 +8,7 @@ PostgreSQL can face on a Kubernetes cluster during its lifetime. !!! Important In case the failure scenario you are experiencing is not covered by this - section, please immediately contact EDB for support and assistance. + section, please immediately seek for [professional support](https://cloudnative-pg.io/support/). !!! Seealso "Postgres instance manager" Please refer to the ["Postgres instance manager" section](instance_manager.md) @@ -175,8 +175,8 @@ In the case of undocumented failure, it might be necessary to intervene to solve the problem manually. !!! Important - In such cases, please do not perform any manual operation without the - support and assistance of EDB engineering team. + In such cases, please do not perform any manual operation without + [professional support](https://cloudnative-pg.io/support/). From version 1.11.0 of the operator, you can use the `k8s.enterprisedb.io/reconciliationLoop` annotation to temporarily disable the diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-multinamespace.png b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-multinamespace.png index 4498b094600..99ab9a72ff5 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-multinamespace.png +++ b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/openshift-webconsole-multinamespace.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:71aa034bda86676c9fd963554b31422d66c88ada60f297b6bc20235532cc16e7 -size 65787 +oid sha256:a5219672700425805ac2550a57fbfa50b6538358417a4579e539a1449bf674aa +size 80596 diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/operatorhub_2.png b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/operatorhub_2.png index d592adfe248..e0e65d7379a 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/images/openshift/operatorhub_2.png +++ b/product_docs/docs/postgres_for_kubernetes/1/images/openshift/operatorhub_2.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:63a1a9791f35c9355c9b8f8c60caacd09b9894ae331f55c19d87185af8028efe -size 119923 +oid sha256:dc2d785d25376be97e97159f2b4721d02494894d47b2d5db7615218f95022c22 +size 105235 diff --git a/product_docs/docs/postgres_for_kubernetes/1/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/index.mdx index 6b4696cd2da..0d4067c1a94 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/index.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/index.mdx @@ -79,8 +79,6 @@ and OpenShift. It is designed, developed, and supported by EDB and covers the full lifecycle of a highly available Postgres database clusters with a primary/standby architecture, using native streaming replication. -EDB Postgres for Kubernetes was made generally available on February 4, 2021. 
Earlier versions were made available to selected customers prior to the GA release. - !!! Note The operator has been renamed from Cloud Native PostgreSQL. Existing users @@ -181,9 +179,9 @@ The following versions of Postgres are currently supported: - EDB Postgres Extended: 12 - 16 PostgreSQL and EDB Postgres Advanced are available on the following platforms: -`linux/amd64`, `linux/ppc64le`, `linux/s390x`. -In addition, PostgreSQL is also supported on `linux/arm64`. -EDB Postgres Extended is supported only on `linux/amd64`. +`linux/amd64`, `linux/ppc64le`, `linux/s390x`. \\ +In addition, PostgreSQL is also supported on `linux/arm64`. \\ +EDB Postgres Extended is supported only on `linux/amd64`. \\ EDB supports operand images for `linux/ppc64le` and `linux/s390x` architectures on OpenShift only. diff --git a/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx b/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx index 9f1c58f2054..caa8ddcfe62 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx @@ -23,12 +23,12 @@ The operator can be installed using the provided [Helm chart](https://github.com The operator can be installed like any other resource in Kubernetes, through a YAML manifest applied via `kubectl`. -You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.23.2.yaml) +You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.23.3.yaml) for this minor release as follows: ```sh kubectl apply --server-side -f \ - https://get.enterprisedb.io/cnp/postgresql-operator-1.23.2.yaml + https://get.enterprisedb.io/cnp/postgresql-operator-1.23.3.yaml ``` You can verify that with: diff --git a/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx b/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx index 1cecf1cf86d..e1982245b0a 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx @@ -34,67 +34,52 @@ them in your systems. #### Debian packages -For example, let's install the 1.22.2 release of the plugin, for an Intel based +For example, let's install the 1.18.1 release of the plugin, for an Intel based 64 bit server. First, we download the right `.deb` file. ```sh -wget https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.22.2/kubectl-cnp_1.22.2_linux_x86_64.deb +$ wget https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.18.1/kubectl-cnp_1.18.1_linux_x86_64.deb ``` Then, install from the local file using `dpkg`: ```sh -dpkg -i kubectl-cnp_1.22.2_linux_x86_64.deb -__OUTPUT__ +$ dpkg -i kubectl-cnp_1.18.1_linux_x86_64.deb (Reading database ... 16102 files and directories currently installed.) -Preparing to unpack kubectl-cnp_1.22.2_linux_x86_64.deb ... -Unpacking cnp (1.22.2) over (1.22.2) ... -Setting up cnp (1.22.2) ... +Preparing to unpack kubectl-cnp_1.18.1_linux_x86_64.deb ... +Unpacking cnp (1.18.1) over (1.18.1) ... +Setting up cnp (1.18.1) ... ``` #### RPM packages -As in the example for `.deb` packages, let's install the 1.22.2 release for an +As in the example for `.deb` packages, let's install the 1.18.1 release for an Intel 64 bit machine. Note the `--output` flag to provide a file name. 
-``` sh -curl -L https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.22.2/kubectl-cnp_1.22.2_linux_x86_64.rpm \ - --output kube-plugin.rpm +```sh +curl -L https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.18.1/kubectl-cnp_1.18.1_linux_x86_64.rpm --output cnp-plugin.rpm ``` Then install with `yum`, and you're ready to use: ```sh -yum --disablerepo=* localinstall kube-plugin.rpm -__OUTPUT__ +$ yum --disablerepo=* localinstall cnp-plugin.rpm +yum --disablerepo=* localinstall cnp-plugin.rpm +Failed to set locale, defaulting to C.UTF-8 Dependencies resolved. -======================================================================================================================== - Package Architecture Version Repository Size -======================================================================================================================== +==================================================================================================== + Package Architecture Version Repository Size +==================================================================================================== Installing: - kubectl-cnp x86_64 1.22.2-1 @commandline 17 M + cnpg x86_64 1.18.1-1 @commandline 14 M Transaction Summary -======================================================================================================================== +==================================================================================================== Install 1 Package -Total size: 17 M -Installed size: 62 M +Total size: 14 M +Installed size: 43 M Is this ok [y/N]: y -Downloading Packages: -Running transaction check -Transaction check succeeded. -Running transaction test -Transaction test succeeded. -Running transaction - Preparing : 1/1 - Installing : kubectl-cnp-1.22.2-1.x86_64 1/1 - Verifying : kubectl-cnp-1.22.2-1.x86_64 1/1 - -Installed: - kubectl-cnp-1.22.2-1.x86_64 - -Complete! ``` ### Supported Architectures @@ -117,29 +102,6 @@ operating system and architectures: - arm 5/6/7 - arm64 -### Configuring auto-completion - -To configure [auto-completion](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_completion/) for the plugin, a helper shell script needs to be -installed into your current PATH. Assuming the latter contains `/usr/local/bin`, -this can be done with the following commands: - -```shell -cat > kubectl_complete-cnp <..` format (e.g. `1.22.2`). The default empty value installs the version of the operator that matches the version of the plugin. +- `--version`: minor version of the operator to be installed, such as `1.17`. + If a minor version is specified, the plugin will install the latest patch + version of that minor version. If no version is supplied the plugin will + install the latest `MAJOR.MINOR.PATCH` version of the operator. 
- `--watch-namespace`: comma separated string containing the namespaces to watch (by default all namespaces) @@ -175,7 +140,7 @@ will install the operator, is as follows: ```shell kubectl cnp install generate \ -n king \ - --version 1.22.2 \ + --version 1.17 \ --replicas 3 \ --watch-namespace "albert, bb, freddie" \ > operator.yaml @@ -184,9 +149,9 @@ kubectl cnp install generate \ The flags in the above command have the following meaning: - `-n king` install the CNP operator into the `king` namespace -- `--version 1.22.2` install operator version 1.22.2 +- `--version 1.17` install the latest patch version for minor version 1.17 - `--replicas 3` install the operator with 3 replicas -- `--watch-namespace "albert, bb, freddie"` have the operator watch for +- `--watch-namespaces "albert, bb, freddie"` have the operator watch for changes in the `albert`, `bb` and `freddie` namespaces only ### Status @@ -222,7 +187,7 @@ Cluster in healthy state Name: sandbox Namespace: default System ID: 7039966298120953877 -PostgreSQL Image: quay.io/enterprisedb/postgresql:16.2 +PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3 Primary instance: sandbox-2 Instances: 3 Ready instances: 3 @@ -267,7 +232,7 @@ Cluster in healthy state Name: sandbox Namespace: default System ID: 7039966298120953877 -PostgreSQL Image: quay.io/enterprisedb/postgresql:16.2 +PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3 Primary instance: sandbox-2 Instances: 3 Ready instances: 3 @@ -757,89 +722,6 @@ items: "apiVersion": "postgresql.k8s.enterprisedb.io/v1", ``` -### Logs - -The `kubectl cnp logs` command allows to follow the logs of a collection -of pods related to EDB Postgres for Kubernetes in a single go. - -It has at the moment one available sub-command: `cluster`. - -#### Cluster logs - -The `cluster` sub-command gathers all the pod logs for a cluster in a single -stream or file. -This means that you can get all the pod logs in a single terminal window, with a -single invocation of the command. - -As in all the cnp plugin sub-commands, you can get instructions and help with -the `-h` flag: - -`kubectl cnp logs cluster -h` - -The `logs` command will display logs in JSON-lines format, unless the -`--timestamps` flag is used, in which case, a human readable timestamp will be -prepended to each line. In this case, lines will no longer be valid JSON, -and tools such as `jq` may not work as desired. - -If the `logs cluster` sub-command is given the `-f` flag (aka `--follow`), it -will follow the cluster pod logs, and will also watch for any new pods created -in the cluster after the command has been invoked. -Any new pods found, including pods that have been restarted or re-created, -will also have their pods followed. -The logs will be displayed in the terminal's standard-out. -This command will only exit when the cluster has no more pods left, or when it -is interrupted by the user. - -If `logs` is called without the `-f` option, it will read the logs from all -cluster pods until the time of invocation and display them in the terminal's -standard-out, then exit. -The `-o` or `--output` flag can be provided, to specify the name -of the file where the logs should be saved, instead of displaying over -standard-out. -The `--tail` flag can be used to specify how many log lines will be retrieved -from each pod in the cluster. By default, the `logs cluster` sub-command will -display all the logs from each pod in the cluster. 
If combined with the "follow" -flag `-f`, the number of logs specified by `--tail` will be retrieved until the -current time, and and from then the new logs will be followed. - -NOTE: unlike other `cnp` plugin commands, the `-f` is used to denote "follow" -rather than specify a file. This keeps with the convention of `kubectl logs`, -which takes `-f` to mean the logs should be followed. - -Usage: - -```shell -kubectl cnp logs cluster [flags] -``` - -Using the `-f` option to follow: - -```shell -kubectl cnp report cluster cluster-example -f -``` - -Using `--tail` option to display 3 lines from each pod and the `-f` option -to follow: - -```shell -kubectl cnp report cluster cluster-example -f --tail 3 -``` - -``` json -{"level":"info","ts":"2023-06-30T13:37:33Z","logger":"postgres","msg":"2023-06-30 13:37:33.142 UTC [26] LOG: ending log output to stderr","source":"/controller/log/postgres","logging_pod":"cluster-example-3"} -{"level":"info","ts":"2023-06-30T13:37:33Z","logger":"postgres","msg":"2023-06-30 13:37:33.142 UTC [26] HINT: Future log output will go to log destination \"csvlog\".","source":"/controller/log/postgres","logging_pod":"cluster-example-3"} -… -… -``` - -With the `-o` option omitted, and with `--output` specified: - -``` sh -kubectl cnp logs cluster cluster-example --output my-cluster.log - -Successfully written logs to "my-cluster.log" -``` - ### Destroy The `kubectl cnp destroy` command helps remove an instance and all the @@ -944,16 +826,11 @@ kubectl cnp fio -n Refer to the [Benchmarking fio section](benchmarking.md#fio) for more details. -### Requesting a new physical backup +### Requesting a new base backup The `kubectl cnp backup` command requests a new physical base backup for an existing Postgres cluster by creating a new `Backup` resource. -!!! Info - From release 1.21, the `backup` command accepts a new flag, `-m` - to specify the backup method. - To request a backup using volume snapshots, set `-m volumeSnapshot` - The following example requests an on-demand backup for a given cluster: ```shell @@ -967,17 +844,10 @@ kubectl cnp backup cluster-example backup/cluster-example-20230121002300 created ``` -By default, a newly created backup will use the backup target policy defined -in the cluster to choose which instance to run on. -However, you can override this policy with the `--backup-target` option. - -In the case of volume snapshot backups, you can also use the `--online` option -to request an online/hot backup or an offline/cold one: additionally, you can -also tune online backups by explicitly setting the `--immediate-checkpoint` and -`--wait-for-archive` options. - -The ["Backup" section](./backup.md) contains more information about -the configuration settings. +By default, new created backup will use the backup target policy defined +in cluster to choose which instance to run on. You can also use `--backup-target` +option to override this policy. please refer to [Backup and Recovery](backup_recovery.md) +for more information about backup target. ### Launching psql @@ -992,7 +862,7 @@ it from the actual pod. This means that you will be using the `postgres` user. ```shell kubectl cnp psql cluster-example -psql (16.2 (Debian 16.2-1.pgdg110+1)) +psql (15.3) Type "help" for help. postgres=# @@ -1003,7 +873,7 @@ select to work against a replica by using the `--replica` option: ```shell kubectl cnp psql --replica cluster-example -psql (16.2 (Debian 16.2-1.pgdg110+1)) +psql (15.3) Type "help" for help. 
@@ -1031,335 +901,44 @@ kubectl cnp psql cluster-example -- -U postgres ### Snapshotting a Postgres cluster -!!! Warning - The `kubectl cnp snapshot` command has been removed. - Please use the [`backup` command](#requesting-a-new-physical-backup) to request - backups using volume snapshots. - -### Using pgAdmin4 for evaluation/demonstration purposes only - -[pgAdmin](https://www.pgadmin.org/) stands as the most popular and feature-rich -open-source administration and development platform for PostgreSQL. -For more information on the project, please refer to the official -[documentation](https://www.pgadmin.org/docs/). - -Given that the pgAdmin Development Team maintains official Docker container -images, you can install pgAdmin in your environment as a standard -Kubernetes deployment. - -!!! Important - Deployment of pgAdmin in Kubernetes production environments is beyond the - scope of this document and, more broadly, of the EDB Postgres for Kubernetes project. - -However, **for the purposes of demonstration and evaluation**, EDB Postgres for Kubernetes -offers a suitable solution. The `cnp` plugin implements the `pgadmin4` -command, providing a straightforward method to connect to a given database -`Cluster` and navigate its content in a local environment such as `kind`. - -For example, you can install a demo deployment of pgAdmin4 for the -`cluster-example` cluster as follows: - -```sh -kubectl cnp pgadmin4 cluster-example -``` - -This command will produce: - -```output -ConfigMap/cluster-example-pgadmin4 created -Deployment/cluster-example-pgadmin4 created -Service/cluster-example-pgadmin4 created -Secret/cluster-example-pgadmin4 created - -[...] -``` - -After deploying pgAdmin, forward the port using kubectl and connect -through your browser by following the on-screen instructions. - -![Screenshot of desktop installation of pgAdmin](images/pgadmin4.png) +The `kubectl cnp snapshot` creates consistent snapshots of a Postgres +`Cluster` by: -As usual, you can use the `--dry-run` option to generate the YAML file: - -```sh -kubectl cnp pgadmin4 --dry-run cluster-example -``` - -pgAdmin4 can be installed in either desktop or server mode, with the default -being server. - -In `server` mode, authentication is required using a randomly generated password, -and users must manually specify the database to connect to. - -On the other hand, `desktop` mode initiates a pgAdmin web interface without -requiring authentication. It automatically connects to the `app` database as the -`app` user, making it ideal for quick demos, such as on a local deployment using -`kind`: - -```sh -kubectl cnp pgadmin4 --mode desktop cluster-example -``` - -After concluding your demo, ensure the termination of the pgAdmin deployment by -executing: - -```sh -kubectl cnp pgadmin4 --dry-run cluster-example | kubectl delete -f - -``` - -!!! Warning - Never deploy pgAdmin in production using the plugin. - -### Logical Replication Publications - -The `cnp publication` command group is designed to streamline the creation and -removal of [PostgreSQL logical replication publications](https://www.postgresql.org/docs/current/logical-replication-publication.html). -Be aware that these commands are primarily intended for assisting in the -creation of logical replication publications, particularly on remote PostgreSQL -databases. +1. choosing a replica Pod to work on +2. fencing the replica +3. taking the snapshot +4. unfencing the replica !!! 
Warning - It is crucial to have a solid understanding of both the capabilities and - limitations of PostgreSQL's native logical replication system before using - these commands. - In particular, be mindful of the [logical replication restrictions](https://www.postgresql.org/docs/current/logical-replication-restrictions.html). - -#### Creating a new publication - -To create a logical replication publication, use the `cnp publication create` -command. The basic structure of this command is as follows: + A cluster already having a fenced instance cannot be snapshotted. -```sh -kubectl cnp publication create \ - --publication \ - [--external-cluster ] - [options] -``` +At the moment, this command can be used only for clusters having at least one +replica: that replica will be shut down by the fencing procedure to ensure the +snapshot to be consistent (cold backup). As the development of +declarative support for Kubernetes' `VolumeSnapshot` API continues, +this limitation will be removed, allowing you to take online backups +as business continuity requires. -There are two primary use cases: - -- With `--external-cluster`: Use this option to create a publication on an - external cluster (i.e. defined in the `externalClusters` stanza). The commands - will be issued from the ``, but the publication will be for the - data in ``. - -- Without `--external-cluster`: Use this option to create a publication in the - `` PostgreSQL `Cluster` (by default, the `app` database). - -!!! Warning - When connecting to an external cluster, ensure that the specified user has - sufficient permissions to execute the `CREATE PUBLICATION` command. - -You have several options, similar to the [`CREATE PUBLICATION`](https://www.postgresql.org/docs/current/sql-createpublication.html) -command, to define the group of tables to replicate. Notable options include: - -- If you specify the `--all-tables` option, you create a publication `FOR ALL TABLES`. -- Alternatively, you can specify multiple occurrences of: - - `--table`: Add a specific table (with an expression) to the publication. - - `--schema`: Include all tables in the specified database schema (available - from PostgreSQL 15). - -The `--dry-run` option enables you to preview the SQL commands that the plugin -will execute. - -For additional information and detailed instructions, type the following -command: - -```sh -kubectl cnp publication create --help -``` - -##### Example - -Given a `source-cluster` and a `destination-cluster`, we would like to create a -publication for the data on `source-cluster`. -The `destination-cluster` has an entry in the `externalClusters` stanza pointing -to `source-cluster`. - -We can run: - -``` sh -kubectl cnp publication create destination-cluster \ - --external-cluster=source-cluster --all-tables -``` - -which will create a publication for all tables on `source-cluster`, running -the SQL commands on the `destination-cluster`. - -Or instead, we can run: - -``` sh -kubectl cnp publication create source-cluster \ - --publication=app --all-tables -``` - -which will create a publication named `app` for all the tables in the -`source-cluster`, running the SQL commands on the source cluster. - -!!! Info - There are two sample files that have been provided for illustration and inspiration: - [logical-source](../samples/cluster-example-logical-source.yaml) and - [logical-destination](../samples/cluster-example-logical-destination.yaml). 
- -#### Dropping a publication - -The `cnp publication drop` command seamlessly complements the `create` command -by offering similar key options, including the publication name, cluster name, -and an optional external cluster. You can drop a `PUBLICATION` with the -following command structure: - -```sh -kubectl cnp publication drop \ - --publication \ - [--external-cluster ] - [options] -``` - -To access further details and precise instructions, use the following command: - -```sh -kubectl cnp publication drop --help -``` - -### Logical Replication Subscriptions - -The `cnp subscription` command group is a dedicated set of commands designed -to simplify the creation and removal of -[PostgreSQL logical replication subscriptions](https://www.postgresql.org/docs/current/logical-replication-subscription.html). -These commands are specifically crafted to aid in the establishment of logical -replication subscriptions, especially when dealing with remote PostgreSQL -databases. - -!!! Warning - Before using these commands, it is essential to have a comprehensive - understanding of both the capabilities and limitations of PostgreSQL's - native logical replication system. - In particular, be mindful of the [logical replication restrictions](https://www.postgresql.org/docs/current/logical-replication-restrictions.html). - -In addition to subscription management, we provide a helpful command for -synchronizing all sequences from the source cluster. While its applicability -may vary, this command can be particularly useful in scenarios involving major -upgrades or data import from remote servers. - -#### Creating a new subscription - -To create a logical replication subscription, use the `cnp subscription create` -command. The basic structure of this command is as follows: - -```sh -kubectl cnp subscription create \ - --subscription \ - --publication \ - --external-cluster \ - [options] -``` - -This command configures a subscription directed towards the specified -publication in the designated external cluster, as defined in the -`externalClusters` stanza of the ``. - -For additional information and detailed instructions, type the following -command: - -```sh -kubectl cnp subscription create --help -``` - -##### Example - -As in the section on publications, we have a `source-cluster` and a -`destination-cluster`, and we have already created a publication called -`app`. - -The following command: - -``` sh -kubectl cnp subscription create destination-cluster \ - --external-cluster=source-cluster \ - --publication=app --subscription=app -``` - -will create a subscription for `app` on the destination cluster. - -!!! Warning - Prioritize testing subscriptions in a non-production environment to ensure - their effectiveness and identify any potential issues before implementing them - in a production setting. - -!!! Info - There are two sample files that have been provided for illustration and inspiration: - [logical-source](../samples/cluster-example-logical-source.yaml) and - [logical-destination](../samples/cluster-example-logical-destination.yaml). - -#### Dropping a subscription - -The `cnp subscription drop` command seamlessly complements the `create` command. 
-You can drop a `SUBSCRIPTION` with the following command structure: - -```sh -kubectl cnp subcription drop \ - --subscription \ - [options] -``` - -To access further details and precise instructions, use the following command: - -```sh -kubectl cnp subscription drop --help -``` - -#### Synchronizing sequences - -One notable constraint of PostgreSQL logical replication, implemented through -publications and subscriptions, is the lack of sequence synchronization. This -becomes particularly relevant when utilizing logical replication for live -database migration, especially to a higher version of PostgreSQL. A crucial -step in this process involves updating sequences before transitioning -applications to the new database (*cutover*). - -To address this limitation, the `cnp subscription sync-sequences` command -offers a solution. This command establishes a connection with the source -database, retrieves all relevant sequences, and subsequently updates local -sequences with matching identities (based on database schema and sequence -name). - -You can use the command as shown below: +!!! Important + Even if the procedure will shut down a replica, the primary + Pod will not be involved. -```sh -kubectl cnp subscription sync-sequences \ - --subscription \ - -``` +The `kubectl cnp snapshot` command requires the cluster name: -For comprehensive details and specific instructions, utilize the following -command: +```shell +kubectl cnp snapshot cluster-example -```sh -kubectl cnp subscription sync-sequences --help +waiting for cluster-example-3 to be fenced +waiting for VolumeSnapshot cluster-example-3-1682539624 to be ready to use +unfencing pod cluster-example-3 ``` -##### Example +The `VolumeSnapshot` resource will be created with an empty +`VolumeSnapshotClass` reference. That resource is intended by be used by the +`VolumeSnapshotClass` configured as default. -As in the previous sections for publication and subscription, we have -a `source-cluster` and a `destination-cluster`. The publication and the -subscription, both called `app`, are already present. +A specific `VolumeSnapshotClass` can be requested via the `-c` option: -The following command will synchronize the sequences involved in the -`app` subscription, from the source cluster into the destination cluster. - -``` sh -kubectl cnp subscription sync-sequences destination-cluster \ - --subscription=app +```shell +kubectl cnp snapshot cluster-example -c longhorn ``` - -!!! Warning - Prioritize testing subscriptions in a non-production environment to - guarantee their effectiveness and detect any potential issues before deploying - them in a production setting. - -## Integration with K9s - -The `cnp` plugin can be easily integrated in [K9s](https://k9scli.io/), a -popular terminal-based UI to interact with Kubernetes clusters. - -See [`k9s/plugins.yml`](../samples/k9s/plugins.yml) for details. diff --git a/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx b/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx index 55805a60e80..99badbeaca3 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx @@ -171,16 +171,20 @@ These predefined annotations are managed by EDB Postgres for Kubernetes. : Current status of the PVC: `initializing`, `ready`, or `detached`. 
`k8s.enterprisedb.io/reconcilePodSpec` -: When set to `disabled` on a `Cluster`, the operator prevents instances - from being restarted in case of drift in the PodSpec. - PodSpec drift could be due, for example, to: +: Annotation can be applied to a `Cluster` or `Pooler` to prevent restarts. + + When set to `disabled` on a `Cluster`, the operator prevents instances + from restarting due to changes in the PodSpec. This includes changes to: ``` - - Changes to topology or affinity - - Change of scheduler - - Change to volumes or containers + - Topology or affinity + - Scheduler + - Volumes or containers ``` + When set to `disabled` on a `Pooler`, the operator restricts any modifications + to the deployment specification, except for changes to `spec.instances`. + `k8s.enterprisedb.io/reconciliationLoop` : When set to `disabled` on a `Cluster`, the operator prevents the reconciliation loop from running. diff --git a/product_docs/docs/postgres_for_kubernetes/1/logging.mdx b/product_docs/docs/postgres_for_kubernetes/1/logging.mdx index 1df83d73de1..6b72fc1e955 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/logging.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/logging.mdx @@ -17,7 +17,6 @@ Each log entry has the following fields: - `logging_podName` – The pod where the log was created. !!! Warning - Long-term storage and management of logs is outside the operator's purview, and needs to be provided at the level of the Kubernetes installation. See the @@ -25,7 +24,6 @@ Each log entry has the following fields: documentation. !!! Info - If your log ingestion system requires it, you can rename the `level` and `ts` field names using the `log-field-level` and `log-field-timestamp` flags of the operator controller. Edit the `Deployment` definition of the `cloudnative-pg` operator. @@ -93,7 +91,6 @@ To enable this support, add the required `pgaudit` parameters to the `postgresql section in the configuration of the cluster. !!! Important - You need to add the PGAudit library to `shared_preload_libraries`. EDB Postgres for Kubernetes adds the library based on the presence of `pgaudit.*` parameters in the postgresql configuration. @@ -104,7 +101,6 @@ The operator also takes care of creating and removing the extension from all the available databases in the cluster. !!! Important - EDB Postgres for Kubernetes runs the `CREATE EXTENSION` and `DROP EXTENSION` commands in all databases in the cluster that accept connections. @@ -185,7 +181,7 @@ for more details about each field in a record. ## EDB Audit logs Clusters that are running on EDB Postgres Advanced Server (EPAS) -can enable [EDB Audit](/epas/latest/epas_security_guide/05_edb_audit_logging/) as follows: +can enable [EDB Audit](https://www.enterprisedb.com/docs/epas/latest/epas_guide/03_database_administration/05_edb_audit_logging/) as follows: ```yaml apiVersion: postgresql.k8s.enterprisedb.io/v1 @@ -268,7 +264,7 @@ See the example below: } ``` -See EDB [Audit file](/epas/latest/epas_security_guide/05_edb_audit_logging/) +See EDB [Audit file](https://www.enterprisedb.com/docs/epas/latest/epas_guide/03_database_administration/05_edb_audit_logging/) for more details about the records' fields. 
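As a quick way to inspect these audit records from an instance's pod logs, one sketch — assuming the entries are emitted with the `logger` field set to `edb_audit` (verify against your actual log output) and using the hypothetical pod name `cluster-example-1` — is:

```sh
# Show only EDB Audit entries from one instance's logs
# (the "edb_audit" logger name is assumed; confirm it in your log stream)
kubectl logs cluster-example-1 | jq 'select(.logger == "edb_audit")'
```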
## Other logs diff --git a/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx b/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx index 76e0db778d2..e0dca3758a5 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx @@ -542,6 +542,7 @@ Every custom query has the following basic structure: Here is a short description of all the available fields: - ``: the name of the Prometheus metric + - `name`: override ``, if defined - `query`: the SQL query to run on the target database to generate the metrics - `primary`: whether to run the query only on the primary instance - `master`: same as `primary` (for compatibility with the Prometheus PostgreSQL exporter's syntax - deprecated) @@ -552,6 +553,7 @@ Here is a short description of all the available fields: to enable auto discovery. Overwrites the default database if provided. - `metrics`: section containing a list of all exported columns, defined as follows: - ``: the name of the column returned by the query + - `name`: override the `ColumnName` of the column in the metric, if defined - `usage`: one of the values described below - `description`: the metric's description - `metrics_mapping`: the optional column mapping when `usage` is set to `MAPPEDMETRIC` diff --git a/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx b/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx index d199cac2ed9..1828a7bcd23 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx @@ -41,10 +41,10 @@ and ["Users and Permissions"](#users-and-permissions) below). !!! Important Both the installation and upgrade processes require access to an OpenShift Container Platform cluster using an account with `cluster-admin` permissions. - From ["Default cluster roles"](https://docs.openshift.com/container-platform/4.9/authentication/using-rbac.html#default-roles_using-rbac), + From ["Default cluster roles"](https://docs.openshift.com/container-platform/4.16/authentication/using-rbac.html#default-roles_using-rbac), a `cluster-admin` is *"a super-user that can perform any action in any project. When bound to a user with a local binding, they have full control over - quota and every action on every resource in the project*". + quota and every action on every resource in the project"*. ## Architecture @@ -118,11 +118,11 @@ selected installation method. 
Otherwise, we recommend that you read the following resources taken from the OpenShift documentation and the Red Hat blog: -- ["Operator Lifecycle Manager (OLM) concepts and resources"](https://docs.openshift.com/container-platform/4.9/operators/understanding/olm/olm-understanding-olm.html) -- ["Understanding authentication"](https://docs.openshift.com/container-platform/4.9/authentication/understanding-authentication.html) -- ["Role-based access control (RBAC)"](https://docs.openshift.com/container-platform/4.9/authentication/using-rbac.html), +- ["Operator Lifecycle Manager (OLM) concepts and resources"](https://docs.openshift.com/container-platform/4.16/operators/understanding/olm/olm-understanding-olm.html) +- ["Understanding authentication"](https://docs.openshift.com/container-platform/4.16/authentication/understanding-authentication.html) +- ["Role-based access control (RBAC)"](https://docs.openshift.com/container-platform/4.16/authentication/using-rbac.html), covering rules, roles and bindings for authorization, as well as cluster RBAC vs local RBAC through projects -- ["Default project service accounts and roles"](https://docs.openshift.com/container-platform/4.9/authentication/using-service-accounts-in-applications.html#default-service-accounts-and-roles_using-service-accounts) +- ["Default project service accounts and roles"](https://docs.openshift.com/container-platform/4.16/authentication/using-service-accounts-in-applications.html#service-accounts-default_using-service-accounts) - ["With Kubernetes Operators comes great responsibility" blog article](https://www.redhat.com/en/blog/kubernetes-operators-comes-great-responsibility) ### Cluster Service Version (CSV) @@ -140,8 +140,8 @@ for the operator, namely: `AllNamespaces` (cluster-wide), `SingleNamespace` !!! Seealso "There's more ..." You can find out more about CSVs and install modes by reading - ["Operator group membership"](https://docs.openshift.com/container-platform/4.9/operators/understanding/olm/olm-understanding-operatorgroups.html#olm-operatorgroups-membership_olm-understanding-operatorgroups) - and ["Defining cluster service versions (CSVs)"](https://docs.openshift.com/container-platform/4.9/operators/operator_sdk/osdk-generating-csvs.html) + ["Operator group membership"](https://docs.openshift.com/container-platform/4.16/operators/understanding/olm/olm-understanding-operatorgroups.html#olm-operatorgroups-membership_olm-understanding-operatorgroups) + and ["Defining cluster service versions (CSVs)"](https://docs.openshift.com/container-platform/4.16/operators/operator_sdk/osdk-generating-csvs.html) from the OpenShift documentation. ### Limitations for multi-tenant management @@ -158,7 +158,7 @@ different namespaces, with one important limitation: they all need to share the same API version of the operator. For more information, please refer to -["Operator groups"](https://docs.openshift.com/container-platform/4.9/operators/understanding/olm/olm-understanding-operatorgroups.html) +["Operator groups"](https://docs.openshift.com/container-platform/4.16/operators/understanding/olm/olm-understanding-operatorgroups.html) in OpenShift documentation. ## Channels @@ -275,7 +275,7 @@ Operator in a given namespace, and to make it available to that project only. This doesn't mean that every user in the namespace can use the EDB Postgres for Kubernetes Operator, deploy a `Cluster` object or even see the `Cluster` objects that are running in the namespace. 
Similarly to the cluster-wide installation mode, - There are some special roles that users must have in the namespace in order to + there are some special roles that users must have in the namespace in order to interact with EDB Postgres for Kubernetes' managed custom resources - primarily the `Cluster` one. Please refer to the ["Users and Permissions" section below](#users-and-permissions) for details. @@ -318,7 +318,7 @@ namespaces. !!! Warning Multiple namespace installation is currently supported by OpenShift. - However, [definition of multiple target namespaces for an operator may be removed in future versions of OpenShift](https://docs.openshift.com/container-platform/4.9/operators/understanding/olm/olm-understanding-operatorgroups.html#olm-operatorgroups-target-namespace_olm-understanding-operatorgroups). + However, [definition of multiple target namespaces for an operator may be removed in future versions of OpenShift](https://docs.openshift.com/container-platform/4.16/operators/understanding/olm/olm-understanding-operatorgroups.html#olm-operatorgroups-target-namespace_olm-understanding-operatorgroups). This section primarily covers the installation of the operator in multiple projects with a simple example, by creating an `OperatorGroup` and a @@ -363,7 +363,7 @@ projects with a simple example, by creating an `OperatorGroup` and a !!! Important Alternatively, you can list namespaces using a label selector, as explained in - ["Target namespace selection"](https://docs.openshift.com/container-platform/4.9/operators/understanding/olm/olm-understanding-operatorgroups.html#olm-operatorgroups-target-namespace_olm-understanding-operatorgroups). + ["Target namespace selection"](https://docs.openshift.com/container-platform/4.16/operators/understanding/olm/olm-understanding-operatorgroups.html#olm-operatorgroups-target-namespace_olm-understanding-operatorgroups). 4. Create a `Subscription` object in the `my-operators` namespace to subscribe to the `fast` channel of the `cloud-native-postgresql` operator that is @@ -440,7 +440,7 @@ suits your needs in terms of operating system and architecture: !!! Seealso "OpenShift CLI" For more detailed and updated information, please refer to the official - [OpenShift CLI documentation](https://docs.openshift.com/container-platform/4.9/cli_reference/openshift_cli/getting-started-cli.html) + [OpenShift CLI documentation](https://docs.openshift.com/container-platform/4.16/cli_reference/openshift_cli/getting-started-cli.html) directly maintained by Red Hat. ## Upgrading the operator @@ -505,7 +505,9 @@ which returns something similar to: ```console backups.postgresql.k8s.enterprisedb.io 20YY-MM-DDTHH:MM:SSZ +clusterimagecatalogs.postgresql.k8s.enterprisedb.io 20YY-MM-DDTHH:MM:SSZ clusters.postgresql.k8s.enterprisedb.io 20YY-MM-DDTHH:MM:SSZ +imagecatalogs.postgresql.k8s.enterprisedb.io 20YY-MM-DDTHH:MM:SSZ poolers.postgresql.k8s.enterprisedb.io 20YY-MM-DDTHH:MM:SSZ scheduledbackups.postgresql.k8s.enterprisedb.io 20YY-MM-DDTHH:MM:SSZ ``` @@ -539,7 +541,7 @@ postgresql-operator-manager 2 ... The `default` service account is automatically created by Kubernetes and present in every namespace. The `builder` and `deployer` service accounts are -automatically created by OpenShift (see ["Default project service accounts and roles"](https://docs.openshift.com/container-platform/4.9/authentication/using-service-accounts-in-applications.html#default-service-accounts-and-roles_using-service-accounts)). 
+automatically created by OpenShift (see ["Default project service accounts and roles"](https://docs.openshift.com/container-platform/4.16/authentication/using-service-accounts-in-applications.html#default-service-accounts-and-roles_using-service-accounts)). The `postgresql-operator-manager` service account is the one used by the Cloud Native PostgreSQL operator to work as part of the Kubernetes/OpenShift control @@ -591,10 +593,18 @@ backups.postgresql.k8s.enterprisedb.io-v1-crdview YYYY-MM-DDTHH:M backups.postgresql.k8s.enterprisedb.io-v1-edit YYYY-MM-DDTHH:MM:SSZ backups.postgresql.k8s.enterprisedb.io-v1-view YYYY-MM-DDTHH:MM:SSZ cloud-native-postgresql.VERSION-HASH YYYY-MM-DDTHH:MM:SSZ +clusterimagecatalogs.postgresql.k8s.enterprisedb.io-v1-admin YYYY-MM-DDTHH:MM:SSZ +clusterimagecatalogs.postgresql.k8s.enterprisedb.io-v1-crdview YYYY-MM-DDTHH:MM:SSZ +clusterimagecatalogs.postgresql.k8s.enterprisedb.io-v1-edit YYYY-MM-DDTHH:MM:SSZ +clusterimagecatalogs.postgresql.k8s.enterprisedb.io-v1-view YYYY-MM-DDTHH:MM:SSZ clusters.postgresql.k8s.enterprisedb.io-v1-admin YYYY-MM-DDTHH:MM:SSZ clusters.postgresql.k8s.enterprisedb.io-v1-crdview YYYY-MM-DDTHH:MM:SSZ clusters.postgresql.k8s.enterprisedb.io-v1-edit YYYY-MM-DDTHH:MM:SSZ clusters.postgresql.k8s.enterprisedb.io-v1-view YYYY-MM-DDTHH:MM:SSZ +imagecatalogs.postgresql.k8s.enterprisedb.io-v1-admin YYYY-MM-DDTHH:MM:SSZ +imagecatalogs.postgresql.k8s.enterprisedb.io-v1-crdview YYYY-MM-DDTHH:MM:SSZ +imagecatalogs.postgresql.k8s.enterprisedb.io-v1-edit YYYY-MM-DDTHH:MM:SSZ +imagecatalogs.postgresql.k8s.enterprisedb.io-v1-view YYYY-MM-DDTHH:MM:SSZ poolers.postgresql.k8s.enterprisedb.io-v1-admin YYYY-MM-DDTHH:MM:SSZ poolers.postgresql.k8s.enterprisedb.io-v1-crdview YYYY-MM-DDTHH:MM:SSZ poolers.postgresql.k8s.enterprisedb.io-v1-edit YYYY-MM-DDTHH:MM:SSZ @@ -822,8 +832,8 @@ Please pay close attention to the following table and notes: | EDB Postgres for Kubernetes Version | OpenShift Versions | Supported SCC | | ----------------------------------- | ------------------ | ------------------------- | -| 1.23.x | 4.12-4.14 | restricted, restricted-v2 | -| 1.22.x | 4.12-4.14 | restricted, restricted-v2 | +| 1.23.x | 4.12-4.16 | restricted, restricted-v2 | +| 1.22.x | 4.12-4.16 | restricted, restricted-v2 | | 1.18.x | 4.10-4.13 | restricted, restricted-v2 | !!! Important @@ -907,14 +917,14 @@ included with OpenShift. In this section, we show you how to get started with basic observability, leveraging the default OpenShift installation. -Please refer to the [OpenShift monitoring stack overview](https://docs.openshift.com/container-platform/4.11/monitoring/monitoring-overview.html) +Please refer to the [OpenShift monitoring stack overview](https://docs.openshift.com/container-platform/4.16/observability/monitoring/monitoring-overview.html) for further background. Depending on your OpenShift configuration, you may need to do a bit of setup before you can monitor your EDB Postgres for Kubernetes clusters. You will need to have your OpenShift configured to -[enable monitoring for user-defined projects](https://docs.openshift.com/container-platform/4.11/monitoring/enabling-monitoring-for-user-defined-projects.html). +[enable monitoring for user-defined projects](https://docs.openshift.com/container-platform/4.16/observability/monitoring/enabling-monitoring-for-user-defined-projects.html). 
You should check, perhaps with your OpenShift administrator, if your installation has the `cluster-monitoring-config` configMap, and if so, @@ -995,7 +1005,7 @@ The `monitoring-rules-edit` or at least `monitoring-rules-view` roles should be assigned for the user wishing to apply and monitor the rules. This involves creating a RoleBinding with that permission, for a namespace. -Again, refer to the [relevant OpenShift document page](https://docs.openshift.com/container-platform/4.11/monitoring/enabling-monitoring-for-user-defined-projects.html) +Again, refer to the [relevant OpenShift document page](https://docs.openshift.com/container-platform/4.16/observability/monitoring/enabling-monitoring-for-user-defined-projects.html) for further detail. Specifically, the *Granting user permissions by using the web console* section should be of interest. diff --git a/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx b/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx index b9e00de5c7c..e73ef65fcc1 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx @@ -465,10 +465,7 @@ extending across multiple data centers and facilitating hybrid and multi-cloud setups. (While anticipating Kubernetes federation native capabilities, manual switchover across data centers remains necessary.) -Additionally, the flexibility extends to creating delayed replica clusters -intentionally lagging behind the primary cluster. This intentional lag aims to -minimize the Recovery Time Objective (RTO) in the event of unintended errors, -such as incorrect `DELETE` or `UPDATE` SQL operations. + ### Tablespace support diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1.mdx b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1.mdx index 42a83ada2e2..f6c6f85aa9b 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1.mdx @@ -4452,8 +4452,8 @@ standby, if available.

BackupMethod -

The backup method to be used, possible options are barmanObjectStore -and volumeSnapshot. Defaults to: barmanObjectStore.

+

The backup method to be used, possible options are barmanObjectStore, +volumeSnapshot or plugin. Defaults to: barmanObjectStore.

pluginConfiguration
diff --git a/product_docs/docs/postgres_for_kubernetes/1/postgresql_conf.mdx b/product_docs/docs/postgres_for_kubernetes/1/postgresql_conf.mdx index 35c66161607..937df61fdba 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/postgresql_conf.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/postgresql_conf.mdx @@ -476,11 +476,10 @@ exclusive method for altering the configuration of a PostgreSQL cluster. This approach guarantees coherence across the entire high-availability cluster and aligns with best practices for Infrastructure-as-Code. -In EDB Postgres for Kubernetes version 1.22 and onwards, the default configuration disables -the use of `ALTER SYSTEM` on new Postgres clusters. This decision is rooted in -the recognition of potential risks associated with this command. To enable the -use of `ALTER SYSTEM`, you can explicitly set `.spec.postgresql.enableAlterSystem` -to `true`. +In EDB Postgres for Kubernetes the default configuration disables the use of `ALTER SYSTEM` +on new Postgres clusters. This decision is rooted in the recognition of +potential risks associated with this command. To enable the use of `ALTER SYSTEM`, +you can explicitly set `.spec.postgresql.enableAlterSystem` to `true`. !!! Warning Proceed with caution when utilizing `ALTER SYSTEM`. This command operates @@ -488,9 +487,14 @@ to `true`. EDB Postgres for Kubernetes assumes responsibility for certain fixed parameters and complete control over others, emphasizing the need for careful consideration. -When `.spec.postgresql.enableAlterSystem` is configured as `false`, any attempt -to execute `ALTER SYSTEM` will result in an error. The error message might -resemble the following: +Starting from PostgreSQL 17, the `.spec.postgresql.enableAlterSystem` setting +directly controls the [`allow_alter_system` GUC in PostgreSQL](https://www.postgresql.org/docs/17/runtime-config-compatible.html#GUC-ALLOW-ALTER-SYSTEM) +— a feature directly contributed by EDB Postgres for Kubernetes to PostgreSQL. + +Prior to PostgreSQL 17, when `.spec.postgresql.enableAlterSystem` is set to +`false`, the `postgresql.auto.conf` file is made read-only. Consequently, any +attempt to execute the `ALTER SYSTEM` command will result in an error. The +error message might look like this: ```output ERROR: could not open file "postgresql.auto.conf": Permission denied @@ -581,6 +585,7 @@ operator. The operator prevents the user from setting them using a webhook. Users are not allowed to set the following configuration parameters in the `postgresql` section: +- `allow_alter_system` - `allow_system_table_mods` - `archive_cleanup_command` - `archive_command` diff --git a/product_docs/docs/postgres_for_kubernetes/1/private_edb_registry.mdx b/product_docs/docs/postgres_for_kubernetes/1/private_edb_registry.mdx index 98dd951ff9d..a02c6734875 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/private_edb_registry.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/private_edb_registry.mdx @@ -51,7 +51,7 @@ as `Repos 2.0 token`. Next to the token you'll find a button to copy the token, and an eye icon in case you want to view the content of the token as clear text. -The token shall be used as the *Password* when you try to login to EDB +The token shall be used as the *Password* when you try to access the EDB container registry. 
### Example with `docker login` @@ -75,8 +75,8 @@ Login Succeeded EDB Postgres for Kubernetes supports various PostgreSQL distributions that have images available from the same private registries: -- EDB Postgres Advanced -- EDB Postgres Extended +- EDB Postgres Advanced (EPAS) +- EDB Postgres Extended (PGE) !!! Note PostgreSQL images are not available in the private registries, but are @@ -89,10 +89,55 @@ page of the EDB Postgres for Kubernetes documentation. In the table below you can find the image name prefix for each Postgres distribution: -| Postgres distribution | Image name | Repositories | -| --------------------- | ----------------------- | ---------------- | -| EDB Postgres Extended | `edb-postgres-extended` | `k8s_standard` | -| EDB Postgres Advanced | `edb-postgres-advanced` | `k8s_enterprise` | +| Postgres distribution | Image name | Repositories | +| ---------------------------- | ----------------------- | -------------------------------- | +| EDB Postgres Extended (PGE) | `edb-postgres-extended` | `k8s_standard`, `k8s_enterprise` | +| EDB Postgres Advanced (EPAS) | `edb-postgres-advanced` | `k8s_enterprise` | + +## How to deploy clusters with EPAS or PGE operands + +If you have already installed the EDB Postgres for Kubernetes operator from the +private registry, you must have already set up an image pull secret. If you +haven't, the next section may be of interest to you. + +If you have an existing installation of the operator, in order to pull images +for EPAS or PGE from the private registry, you will need to create a +[`kubernetes.io/dockerconfigjson` pull secret](https://kubernetes.io/docs/concepts/configuration/secret/#secret-types). + +You can +[create a pull secret from credentials](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line). + +```sh +kubectl create secret docker-registry registry-pullsecret \ + -n --docker-server=docker.enterprisedb.com \ + --docker-username= \ + --docker-password= +``` + +As mentioned above, the `docker-username` is the name of your registry, i.e. +`k8s_standard` or `k8s_enterprise`. The `docker-password` is the token retrieved +from the [EDB portal](#how-to-retrieve-the-token). + +Once your pull secret is created, remember to set the `imagePullSecrets` field +in the cluster manifest in addition to the `imageName`. +The manifest below will create a cluster running PG Extended from the +`k8s_enterprise` repository. + +```yaml +apiVersion: postgresql.k8s.enterprisedb.io/v1 +kind: Cluster +metadata: + name: postgresql-extended-cluster +spec: + instances: 3 + imageName: docker.enterprisedb.com/k8s_enterprise/edb-postgres-extended:16.2 + imagePullSecrets: + - name: registry-pullsecret + + storage: + storageClass: standard + size: 1Gi +``` ## How to install the operator using the EDB private registry diff --git a/product_docs/docs/postgres_for_kubernetes/1/quickstart.mdx b/product_docs/docs/postgres_for_kubernetes/1/quickstart.mdx index ea9308ce07f..369e0cfbef5 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/quickstart.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/quickstart.mdx @@ -1,8 +1,6 @@ --- title: 'Quickstart' originalFilePath: 'src/quickstart.md' -redirects: - - ../interactive_demo/ --- This section guides you through testing a PostgreSQL cluster on your local machine by @@ -133,6 +131,13 @@ spec: size: 1Gi ``` +!!! 
Note "Installing other operands" + EDB Postgres for Kubernetes supports not just PostgreSQL, but EDB Postgres + Extended (PGE) and EDB Postgres Advanced (EPAS). + The images for those operands are kept in private registries. Please refer + to the [private registry](private_edb_registry.md) document for instructions + on deploying clusters using PGE or EPAS as operands. + !!! Note "There's more" For more detailed information about the available options, please refer to the ["API Reference" section](pg4k.v1.md). diff --git a/product_docs/docs/postgres_for_kubernetes/1/recovery.mdx b/product_docs/docs/postgres_for_kubernetes/1/recovery.mdx index 9d87c2c2d4a..39b6598642f 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/recovery.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/recovery.mdx @@ -97,6 +97,11 @@ spec: maxParallel: 8 ``` +The previous example assumes that the application database and its owning user +are named `app` by default. If the PostgreSQL cluster being restored uses +different names, you must specify these names before exiting the recovery phase, +as documented in ["Configure the application database"](#configure-the-application-database). + !!! Important By default, the `recovery` method strictly uses the `name` of the cluster in the `externalClusters` section as the name of the main folder @@ -167,6 +172,11 @@ spec: apiGroup: snapshot.storage.k8s.io ``` +The previous example assumes that the application database and its owning user +are named `app` by default. If the PostgreSQL cluster being restored uses +different names, you must specify these names before exiting the recovery phase, +as documented in ["Configure the application database"](#configure-the-application-database). + !!! Warning If bootstrapping a replica-mode cluster from snapshots, to leverage snapshots for the standby instances and not just the primary, @@ -203,26 +213,29 @@ spec: This bootstrap method allows you to specify just a reference to the backup that needs to be restored. -The previous example implies the application database and its owning user is -the default one, `app`. If the PostgreSQL cluster being restored was using -different names, you can specify them as documented in [Configure the -application database](#configure-the-application-database). +The previous example assumes that the application database and its owning user +are named `app` by default. If the PostgreSQL cluster being restored uses +different names, you must specify these names before exiting the recovery phase, +as documented in ["Configure the application database"](#configure-the-application-database). -## Additional considerations +## Additional Considerations -Whether you recover from a recovery object store, a volume snapshot, or an -existing `Backup` resource, the following considerations apply: +Whether you recover from an object store, a volume snapshot, or an existing +`Backup` resource, no changes to the database, including the catalog, are +permitted until the `Cluster` is fully promoted to primary and accepts write +operations. This restriction includes any role overrides, which are deferred +until the `Cluster` transitions to primary. +As a result, the following considerations apply: -- The application database name and the application database user are preserved - from the backup that's being restored. The operator doesn't currently attempt - to back up the underlying secrets, as this is part of the usual maintenance - activity of the Kubernetes cluster. 
-- To preserve the original postgres user password, you need to properly - configure `enableSuperuserAccess` and supply a `superuserSecret`. -- By default, the recovery continues up to the latest - available WAL on the default target timeline (`latest`). - You can optionally specify a `recoveryTarget` to perform a point-in-time - recovery (see [Point in time recovery (PITR)](#point-in-time-recovery-pitr)). +- The application database name and user are copied from the backup being + restored. The operator does not currently back up the underlying secrets, as + this is part of the usual maintenance activity of the Kubernetes cluster. +- To preserve the original postgres user password, configure + `enableSuperuserAccess` and supply a `superuserSecret`. + +By default, recovery continues up to the latest available WAL on the default +target timeline (`latest`). You can optionally specify a `recoveryTarget` to +perform a point-in-time recovery (see [Point in Time Recovery (PITR)](#point-in-time-recovery-pitr)). !!! Important Consider using the `barmanObjectStore.wal.maxParallel` option to speed @@ -391,8 +404,7 @@ targetImmediate The operator can retrieve the closest backup when you specify either `targetTime` or `targetLSN`. However, this isn't possible for the remaining targets: `targetName`, `targetXID`, and `targetImmediate`. In such cases, it's - important to specify `backupID`, unless the last available backup in the - catalog is acceptable. + mandatory to specify `backupID`. This example uses a `targetName`-based recovery target: @@ -468,8 +480,15 @@ generate a secret with a randomly secure password for use. See [Bootstrap an empty cluster](bootstrap.md#bootstrap-an-empty-cluster-initdb) for more information about secrets. -This example configures the application database `app` with owner `app` and -supplied secret `app-secret`. +!!! Important + While the `Cluster` is in recovery mode, no changes to the database, + including the catalog, are permitted. This restriction includes any role + overrides, which are deferred until the `Cluster` transitions to primary. + During this phase, users remain as defined in the source cluster. + +The following example configures the `app` database with the owner `app` and +the password stored in the provided secret `app-secret`, following the +bootstrap from a live cluster. ```yaml apiVersion: postgresql.k8s.enterprisedb.io/v1 @@ -485,20 +504,16 @@ spec: [...] ``` -With this configuration, the following happens after recovery is complete: - -1. If database `app` doesn't exist, a new database `app` is created. -2. If user `app` doesn't exist, a new user `app` is created. -3. if user `app` isn't the owner of the database, user `app` is granted - as owner of database `app`. -4. If the value of `username` matches the value of `owner` in the secret, the - password of application database is changed to the value of `password` in the - secret. +With the above configuration, the following will happen only **after recovery is +completed**: -!!! Important - For a replica cluster with replica mode enabled, the operator doesn't - create any database or user in the PostgreSQL instance. These are - recovered from the original cluster. +1. If the `app` database does not exist, it will be created. +2. If the `app` user does not exist, it will be created. +3. If the `app` user is not the owner of the `app` database, ownership will be + granted to the `app` user. +4. 
If the `username` value matches the `owner` value in the secret, the + password for the application user (the `app` user in this case) will be + updated to the `password` value in the secret. ## How recovery works under the hood @@ -592,11 +607,10 @@ could be overwritten by the new cluster. !!! Warning The operator includes a safety check to ensure a cluster doesn't overwrite - -a storage bucket that contained information. A cluster that would overwrite -existing storage remains in the state `Setting up primary` with pods in an -error state. The pod logs show: `ERROR: WAL archive check failed for server -recoveredCluster: Expected empty archive`. + a storage bucket that contained information. A cluster that would overwrite + existing storage remains in the state `Setting up primary` with pods in an + error state. The pod logs show: `ERROR: WAL archive check failed for server + recoveredCluster: Expected empty archive`. !!! Important If you set the `k8s.enterprisedb.io/skipEmptyWalArchiveCheck` annotation to `enabled` diff --git a/product_docs/docs/postgres_for_kubernetes/1/replica_cluster.mdx b/product_docs/docs/postgres_for_kubernetes/1/replica_cluster.mdx index 14be10f32c4..e8c5b753880 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/replica_cluster.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/replica_cluster.mdx @@ -116,6 +116,13 @@ Note the `bootstrap` and `replica` sections pointing to the source cluster. source: cluster-example ``` +The previous configuration assumes that the application database and its owning +user are set to the default, `app`. If the PostgreSQL cluster being restored +uses different names, you must specify them as documented in [Configure the application database](bootstrap.md#configure-the-application-database). +You should also consider copying over the application user secret from +the original cluster and keep it synchronized with the source. +See ["About PostgreSQL Roles"](#about-postgresql-roles) for more details. + In the `externalClusters` section, remember to use the right namespace for the host in the `connectionParameters` sub-section. The `-replication` and `-ca` secrets should have been copied over if necessary, @@ -162,6 +169,13 @@ Note the `bootstrap` and `replica` sections pointing to the source cluster. source: cluster-example ``` +The previous configuration assumes that the application database and its owning +user are set to the default, `app`. If the PostgreSQL cluster being restored +uses different names, you must specify them as documented in [Configure the application database](recovery.md#configure-the-application-database). +You should also consider copying over the application user secret from +the original cluster and keep it synchronized with the source. +See ["About PostgreSQL Roles"](#about-postgresql-roles) for more details. + In the `externalClusters` section, take care to use the right namespace in the `endpointURL` and the `connectionParameters.host`. And do ensure that the necessary secrets have been copied if necessary, and that @@ -205,6 +219,14 @@ store to fetch the WAL files. You can check the [sample YAML](../samples/cluster-example-replica-from-volume-snapshot.yaml) for it in the `samples/` subdirectory. +The example assumes that the application database and its owning +user are set to the default, `app`. If the PostgreSQL cluster being restored +uses different names, you must specify them as documented in [Configure the +application database](recovery.md#configure-the-application-database). 
+You should also consider copying over the application user secret from +the original cluster and keep it synchronized with the source. +See ["About PostgreSQL Roles"](#about-postgresql-roles) for more details. + ## Demoting a Primary to a Replica Cluster EDB Postgres for Kubernetes provides the functionality to demote a primary cluster to a @@ -261,56 +283,3 @@ kubectl cnp -n status cluster-eu-central and the source cluster become two independent clusters definitively. Ensure to follow the demotion procedure correctly to avoid unintended consequences. -## Delayed replicas - -In addition to standard replica clusters, our system supports the creation of -**delayed replicas** through the utilization of PostgreSQL's -[`recovery_min_apply_delay`](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-RECOVERY-MIN-APPLY-DELAY) -option. - -Delayed replicas intentionally lag behind the primary database by a specified -amount of time. This delay is configurable using the `recovery_min_apply_delay` -option in PostgreSQL. The primary objective of introducing delayed replicas is -to mitigate the impact of unintentional executions of SQL statements on the -primary database. This is particularly useful in scenarios where an incorrect -or missing `WHERE` clause is used in operations such as `UPDATE` or `DELETE`. - -To introduce a delay in a replica cluster, adjust the -`recovery_min_apply_delay` option. This parameter determines the time by which -replicas lag behind the primary. For example: - -```yaml - # ... - postgresql: - parameters: - # Enforce a delay of 8 hours - recovery_min_apply_delay = '8h' - # ... -``` - -Monitor and adjust the delay as needed based on your recovery time objectives -and the potential impact of unintended primary database operations. - -The main use cases of delayed replicas can be summarized into: - -1. mitigating human errors: reduce the risk of data corruption or loss - resulting from unintentional SQL operations on the primary database - -2. recovery time optimization: facilitate quicker recovery from unintended - changes by having a delayed replica that allows you to identify and rectify - issues before changes are applied to other replicas. - -3. enhanced data protection: safeguard critical data by introducing a time - buffer that provides an opportunity to intervene and prevent the propagation of - undesirable changes. - -By integrating delayed replicas into your replication strategy, you can enhance -the resilience and data protection capabilities of your PostgreSQL environment. -Adjust the delay duration based on your specific needs and the criticality of -your data. - -!!! Important - Always measure your goals. Depending on your environment, it might be more - efficient to rely on volume snapshot-based recovery for faster outcomes. - Evaluate and choose the approach that best aligns with your unique requirements - and infrastructure. diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/k9s/plugins.yml b/product_docs/docs/postgres_for_kubernetes/1/samples/k9s/plugins.yml index 6a247daebf2..76974204e1b 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/samples/k9s/plugins.yml +++ b/product_docs/docs/postgres_for_kubernetes/1/samples/k9s/plugins.yml @@ -1,4 +1,4 @@ -# Move/add to $XDG_CONFIG_HOME/k9s/plugin.yml +# Move/add to $XDG_CONFIG_HOME/k9s/plugins.yaml # Requires the cnp kubectl plugin. 
See https://cloudnative-pg.io/documentation/current/kubectl-plugin/ # # Cluster actions: @@ -26,7 +26,7 @@ plugins: background: false args: - -c - - "kubectl cnp backup $NAME -n $NAMESPACE --context $CONTEXT |& less -R" + - "kubectl cnp backup $NAME -n $NAMESPACE --context \"$CONTEXT\" |& less -R" postgresql-operator-hibernate-status: shortCut: h description: Hibernate status @@ -36,7 +36,7 @@ plugins: background: false args: - -c - - "kubectl cnp hibernate status $NAME -n $NAMESPACE --context $CONTEXT |& less -R" + - "kubectl cnp hibernate status $NAME -n $NAMESPACE --context \"$CONTEXT\" |& less -R" postgresql-operator-hibernate: shortCut: Shift-H description: Hibernate @@ -47,7 +47,7 @@ plugins: background: false args: - -c - - "kubectl cnp hibernate on $NAME -n $NAMESPACE --context $CONTEXT |& less -R" + - "kubectl cnp hibernate on $NAME -n $NAMESPACE --context \"$CONTEXT\" |& less -R" postgresql-operator-hibernate-off: shortCut: Shift-H description: Wake up hibernated cluster in this namespace @@ -58,7 +58,7 @@ plugins: background: false args: - -c - - "kubectl cnp hibernate off $NAME -n $NAME --context $CONTEXT |& less -R" + - "kubectl cnp hibernate off $NAME -n $NAME --context \"$CONTEXT\" |& less -R" postgresql-operator-logs: shortCut: l description: Logs @@ -89,7 +89,7 @@ plugins: background: false args: - -c - - "kubectl cnp reload $NAME -n $NAMESPACE --context $CONTEXT |& less -R" + - "kubectl cnp reload $NAME -n $NAMESPACE --context \"$CONTEXT\" |& less -R" postgresql-operator-restart: shortCut: Shift-R description: Restart @@ -100,7 +100,7 @@ plugins: background: false args: - -c - - "kubectl cnp restart $NAME -n $NAMESPACE --context $CONTEXT |& less -R" + - "kubectl cnp restart $NAME -n $NAMESPACE --context \"$CONTEXT\" |& less -R" postgresql-operator-status: shortCut: s description: Status @@ -110,7 +110,7 @@ plugins: background: false args: - -c - - "kubectl cnp status $NAME -n $NAMESPACE --context $CONTEXT |& less -R" + - "kubectl cnp status $NAME -n $NAMESPACE --context \"$CONTEXT\" |& less -R" postgresql-operator-status-verbose: shortCut: Shift-S description: Status (verbose) @@ -120,4 +120,4 @@ plugins: background: false args: - -c - - "kubectl cnp status $NAME -n $NAMESPACE --context $CONTEXT --verbose |& less -R" + - "kubectl cnp status $NAME -n $NAMESPACE --context \"$CONTEXT\" --verbose |& less -R" diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/prometheusrule.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/prometheusrule.yaml index ed877e922b1..34aae13b846 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/prometheusrule.yaml +++ b/product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/prometheusrule.yaml @@ -24,7 +24,7 @@ spec: for: 1m labels: severity: warning - - alert: PGDatabase + - alert: PGDatabaseXidAge annotations: description: Over 150,000,000 transactions from frozen xid on pod {{ $labels.pod }} summary: Number of transactions from the frozen XID to the current one diff --git a/product_docs/docs/postgres_for_kubernetes/1/security.mdx b/product_docs/docs/postgres_for_kubernetes/1/security.mdx index 9338ef61f1f..d37cf838d3e 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/security.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/security.mdx @@ -100,17 +100,11 @@ the cluster (PostgreSQL included). ### Role Based Access Control (RBAC) -The operator interacts with the Kubernetes API server using a dedicated service -account named `postgresql-operator-manager`. 
This service account is typically installed in -the operator namespace, commonly `postgresql-operator-system`. However, the namespace may vary -based on the deployment method (see the subsection below). - -In the same namespace, there is a binding between the `postgresql-operator-manager` service -account and a role. The specific name and type of this role (either `Role` or -`ClusterRole`) also depend on the deployment method. This role defines the -necessary permissions required by the operator to function correctly. To learn -more about these roles, you can use the `kubectl describe clusterrole` or -`kubectl describe role` commands, depending on the deployment method. +The operator interacts with the Kubernetes API server with a dedicated service +account called `postgresql-operator-manager`. In Kubernetes this is installed +by default in the `postgresql-operator-system` namespace, with a cluster role +binding between this service account and the `postgresql-operator-manager` +cluster role which defines the set of rules/resources/verbs granted to the operator. For OpenShift specificities on this matter, please consult the ["Red Hat OpenShift" section](openshift.md#predefined-rbac-objects), in particular ["Pre-defined RBAC objects" section](openshift.md#predefined-rbac-objects). @@ -124,7 +118,7 @@ For OpenShift specificities on this matter, please consult the Below we provide some examples and, most importantly, the reasons why EDB Postgres for Kubernetes requires full or partial management of standard Kubernetes -namespaced or non-namespaced resources. +namespaced resources. `configmaps` : The operator needs to create and manage default config maps for @@ -177,46 +171,14 @@ namespaced or non-namespaced resources. validate them before starting the restore process. `nodes` -: The operator needs to get the labels for Affinity and AntiAffinity so it can - decide in which nodes a pod can be scheduled. This is useful, for example, to - prevent the replicas from being scheduled in the same node - especially - important if nodes are in different availability zones. This - permission is also used to determine whether a node is scheduled, preventing - the creation of pods on unscheduled nodes, or triggering a switchover if - the primary lives in an unscheduled node. - -#### Deployments and `ClusterRole` Resources - -As mentioned above, each deployment method may have variations in the namespace -location of the service account, as well as the names and types of role -bindings and respective roles. - -##### Via Kubernetes Manifest - -When installing EDB Postgres for Kubernetes using the Kubernetes manifest, permissions are -set to `ClusterRoleBinding` by default. You can inspect the permissions -required by the operator by running: - -```sh -kubectl describe clusterrole postgresql-operator-manager -``` - -##### Via OLM - -From a security perspective, the Operator Lifecycle Manager (OLM) provides a -more flexible deployment method. It allows you to configure the operator to -watch either all namespaces or specific namespaces, enabling more granular -permission management. - -!!!Info - OLM allows you to deploy the operator in its own namespace and configure it - to watch specific namespaces used for EDB Postgres for Kubernetes clusters. This setup helps - to contain permissions and restrict access more effectively. - -#### Why Are ClusterRole Permissions Needed? - -The operator currently requires `ClusterRole` permissions just to read `nodes` -objects. 
All other permissions can be namespace-scoped (i.e., `Role`) or +: The operator needs to get the labels for Affinity and AntiAffinity, so it can + decide in which nodes a pod can be scheduled preventing the replicas to be + in the same node, specially if nodes are in different availability zones. This + permission is also used to determine if a node is schedule or not, avoiding + the creation of pods that cannot be created at all. + +The operator currently requires `ClusterRole` permissions to read `nodes` and +`ClusterImageCatalog` objects. All other permissions can be namespace-scoped (i.e., `Role`) or cluster-wide (i.e., `ClusterRole`). Even with these permissions, if someone gains access to the `ServiceAccount`, diff --git a/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx b/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx index d443cb6e241..c92dddbbfaa 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx @@ -344,7 +344,14 @@ kubectl logs -n - | \ jq 'select(.logger=="postgres") | .record.message' ``` -The following example also adds the timestamp in a user-friendly format: +The following example also adds the timestamp: + +```shell +kubectl logs -n - | \ + jq -r 'select(.logger=="postgres") | [.ts, .record.message] | @csv' +``` + +If the timestamp is displayed in Unix Epoch time, you can convert it to a user-friendly format: ```shell kubectl logs -n - | \ diff --git a/product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx b/product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx index 26385fee0ef..b75441ee1b2 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx @@ -16,7 +16,7 @@ the ["Backup on object stores" section](backup_barmanobjectstore.md) to set up the WAL archive. !!! Info - Please refer to [`BarmanObjectStoreConfiguration`](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-BarmanObjectStoreConfiguration) + Please refer to [`BarmanObjectStoreConfiguration`](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-barmanobjectstoreconfiguration) in the API reference for a full list of options. If required, you can choose to compress WAL files as soon as they From 5a810b6afbcda466e7e9af3f6c18befa4f77f282 Mon Sep 17 00:00:00 2001 From: Josh Heyer Date: Thu, 1 Aug 2024 14:36:15 +0000 Subject: [PATCH 2/3] Fix broken links and other regressions --- .../1/before_you_start.mdx | 2 +- .../1/cluster_conf.mdx | 2 +- .../1/failure_modes.mdx | 6 +- .../docs/postgres_for_kubernetes/1/index.mdx | 2 + .../1/kubectl-plugin.mdx | 547 ++++++++++++++++-- .../postgres_for_kubernetes/1/logging.mdx | 4 +- .../1/replica_cluster.mdx | 3 - .../1/wal_archiving.mdx | 2 +- .../cnp/rewrite-mdextra-anchors.mjs | 2 - .../processors/cnp/update-links.mjs | 4 +- 10 files changed, 496 insertions(+), 78 deletions(-) diff --git a/product_docs/docs/postgres_for_kubernetes/1/before_you_start.mdx b/product_docs/docs/postgres_for_kubernetes/1/before_you_start.mdx index 1bdd6a97161..ce8994e945e 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/before_you_start.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/before_you_start.mdx @@ -76,7 +76,7 @@ specific to Kubernetes and PostgreSQL. : `kubectl` is the command-line tool used to manage a Kubernetes cluster. EDB Postgres for Kubernetes requires a Kubernetes version supported by the community. 
Please refer to the -["Supported releases"](/resources/platform-compatibility#pgk8s) page for details. +["Supported releases"](https://www.enterprisedb.com/resources/platform-compatibility#pgk8s) page for details. ## PostgreSQL terminology diff --git a/product_docs/docs/postgres_for_kubernetes/1/cluster_conf.mdx b/product_docs/docs/postgres_for_kubernetes/1/cluster_conf.mdx index 0a515fb9465..8b550eb893d 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/cluster_conf.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/cluster_conf.mdx @@ -50,7 +50,7 @@ EDB Postgres for Kubernetes relies on [ephemeral volumes](https://kubernetes.io/ for part of the internal activities. Ephemeral volumes exist for the sole duration of a pod's life, without persisting across pod restarts. -# Volume Claim Template for Temporary Storage +### Volume Claim Template for Temporary Storage The operator uses by default an `emptyDir` volume, which can be customized by using the `.spec.ephemeralVolumesSizeLimit field`. This can be overridden by specifying a volume claim template in the `.spec.ephemeralVolumeSource` field. diff --git a/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx b/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx index 24771b9e34e..a1aab1641cf 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx @@ -8,7 +8,7 @@ PostgreSQL can face on a Kubernetes cluster during its lifetime. !!! Important In case the failure scenario you are experiencing is not covered by this - section, please immediately seek for [professional support](https://cloudnative-pg.io/support/). + section, please immediately contact EDB for support and assistance. !!! Seealso "Postgres instance manager" Please refer to the ["Postgres instance manager" section](instance_manager.md) @@ -175,8 +175,8 @@ In the case of undocumented failure, it might be necessary to intervene to solve the problem manually. !!! Important - In such cases, please do not perform any manual operation without - [professional support](https://cloudnative-pg.io/support/). + In such cases, please do not perform any manual operation without the + support and assistance of EDB engineering team. From version 1.11.0 of the operator, you can use the `k8s.enterprisedb.io/reconciliationLoop` annotation to temporarily disable the diff --git a/product_docs/docs/postgres_for_kubernetes/1/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/index.mdx index 0d4067c1a94..9662afbea31 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/index.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/index.mdx @@ -79,6 +79,8 @@ and OpenShift. It is designed, developed, and supported by EDB and covers the full lifecycle of a highly available Postgres database clusters with a primary/standby architecture, using native streaming replication. +EDB Postgres for Kubernetes was made generally available on February 4, 2021. Earlier versions were made available to selected customers prior to the GA release. + !!! Note The operator has been renamed from Cloud Native PostgreSQL. Existing users diff --git a/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx b/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx index e1982245b0a..1cecf1cf86d 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx @@ -34,52 +34,67 @@ them in your systems. 
#### Debian packages -For example, let's install the 1.18.1 release of the plugin, for an Intel based +For example, let's install the 1.22.2 release of the plugin, for an Intel based 64 bit server. First, we download the right `.deb` file. ```sh -$ wget https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.18.1/kubectl-cnp_1.18.1_linux_x86_64.deb +wget https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.22.2/kubectl-cnp_1.22.2_linux_x86_64.deb ``` Then, install from the local file using `dpkg`: ```sh -$ dpkg -i kubectl-cnp_1.18.1_linux_x86_64.deb +dpkg -i kubectl-cnp_1.22.2_linux_x86_64.deb +__OUTPUT__ (Reading database ... 16102 files and directories currently installed.) -Preparing to unpack kubectl-cnp_1.18.1_linux_x86_64.deb ... -Unpacking cnp (1.18.1) over (1.18.1) ... -Setting up cnp (1.18.1) ... +Preparing to unpack kubectl-cnp_1.22.2_linux_x86_64.deb ... +Unpacking cnp (1.22.2) over (1.22.2) ... +Setting up cnp (1.22.2) ... ``` #### RPM packages -As in the example for `.deb` packages, let's install the 1.18.1 release for an +As in the example for `.deb` packages, let's install the 1.22.2 release for an Intel 64 bit machine. Note the `--output` flag to provide a file name. -```sh -curl -L https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.18.1/kubectl-cnp_1.18.1_linux_x86_64.rpm --output cnp-plugin.rpm +``` sh +curl -L https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.22.2/kubectl-cnp_1.22.2_linux_x86_64.rpm \ + --output kube-plugin.rpm ``` Then install with `yum`, and you're ready to use: ```sh -$ yum --disablerepo=* localinstall cnp-plugin.rpm -yum --disablerepo=* localinstall cnp-plugin.rpm -Failed to set locale, defaulting to C.UTF-8 +yum --disablerepo=* localinstall kube-plugin.rpm +__OUTPUT__ Dependencies resolved. -==================================================================================================== - Package Architecture Version Repository Size -==================================================================================================== +======================================================================================================================== + Package Architecture Version Repository Size +======================================================================================================================== Installing: - cnpg x86_64 1.18.1-1 @commandline 14 M + kubectl-cnp x86_64 1.22.2-1 @commandline 17 M Transaction Summary -==================================================================================================== +======================================================================================================================== Install 1 Package -Total size: 14 M -Installed size: 43 M +Total size: 17 M +Installed size: 62 M Is this ok [y/N]: y +Downloading Packages: +Running transaction check +Transaction check succeeded. +Running transaction test +Transaction test succeeded. +Running transaction + Preparing : 1/1 + Installing : kubectl-cnp-1.22.2-1.x86_64 1/1 + Verifying : kubectl-cnp-1.22.2-1.x86_64 1/1 + +Installed: + kubectl-cnp-1.22.2-1.x86_64 + +Complete! ``` ### Supported Architectures @@ -102,6 +117,29 @@ operating system and architectures: - arm 5/6/7 - arm64 +### Configuring auto-completion + +To configure [auto-completion](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_completion/) for the plugin, a helper shell script needs to be +installed into your current PATH. 
Assuming the latter contains `/usr/local/bin`, +this can be done with the following commands: + +```shell +cat > kubectl_complete-cnp <..` format (e.g. `1.22.2`). The default empty value installs the version of the operator that matches the version of the plugin. - `--watch-namespace`: comma separated string containing the namespaces to watch (by default all namespaces) @@ -140,7 +175,7 @@ will install the operator, is as follows: ```shell kubectl cnp install generate \ -n king \ - --version 1.17 \ + --version 1.22.2 \ --replicas 3 \ --watch-namespace "albert, bb, freddie" \ > operator.yaml @@ -149,9 +184,9 @@ kubectl cnp install generate \ The flags in the above command have the following meaning: - `-n king` install the CNP operator into the `king` namespace -- `--version 1.17` install the latest patch version for minor version 1.17 +- `--version 1.22.2` install operator version 1.22.2 - `--replicas 3` install the operator with 3 replicas -- `--watch-namespaces "albert, bb, freddie"` have the operator watch for +- `--watch-namespace "albert, bb, freddie"` have the operator watch for changes in the `albert`, `bb` and `freddie` namespaces only ### Status @@ -187,7 +222,7 @@ Cluster in healthy state Name: sandbox Namespace: default System ID: 7039966298120953877 -PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3 +PostgreSQL Image: quay.io/enterprisedb/postgresql:16.2 Primary instance: sandbox-2 Instances: 3 Ready instances: 3 @@ -232,7 +267,7 @@ Cluster in healthy state Name: sandbox Namespace: default System ID: 7039966298120953877 -PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3 +PostgreSQL Image: quay.io/enterprisedb/postgresql:16.2 Primary instance: sandbox-2 Instances: 3 Ready instances: 3 @@ -722,6 +757,89 @@ items: "apiVersion": "postgresql.k8s.enterprisedb.io/v1", ``` +### Logs + +The `kubectl cnp logs` command allows to follow the logs of a collection +of pods related to EDB Postgres for Kubernetes in a single go. + +It has at the moment one available sub-command: `cluster`. + +#### Cluster logs + +The `cluster` sub-command gathers all the pod logs for a cluster in a single +stream or file. +This means that you can get all the pod logs in a single terminal window, with a +single invocation of the command. + +As in all the cnp plugin sub-commands, you can get instructions and help with +the `-h` flag: + +`kubectl cnp logs cluster -h` + +The `logs` command will display logs in JSON-lines format, unless the +`--timestamps` flag is used, in which case, a human readable timestamp will be +prepended to each line. In this case, lines will no longer be valid JSON, +and tools such as `jq` may not work as desired. + +If the `logs cluster` sub-command is given the `-f` flag (aka `--follow`), it +will follow the cluster pod logs, and will also watch for any new pods created +in the cluster after the command has been invoked. +Any new pods found, including pods that have been restarted or re-created, +will also have their pods followed. +The logs will be displayed in the terminal's standard-out. +This command will only exit when the cluster has no more pods left, or when it +is interrupted by the user. + +If `logs` is called without the `-f` option, it will read the logs from all +cluster pods until the time of invocation and display them in the terminal's +standard-out, then exit. +The `-o` or `--output` flag can be provided, to specify the name +of the file where the logs should be saved, instead of displaying over +standard-out. 
+The `--tail` flag can be used to specify how many log lines will be retrieved +from each pod in the cluster. By default, the `logs cluster` sub-command will +display all the logs from each pod in the cluster. If combined with the "follow" +flag `-f`, the number of logs specified by `--tail` will be retrieved until the +current time, and and from then the new logs will be followed. + +NOTE: unlike other `cnp` plugin commands, the `-f` is used to denote "follow" +rather than specify a file. This keeps with the convention of `kubectl logs`, +which takes `-f` to mean the logs should be followed. + +Usage: + +```shell +kubectl cnp logs cluster [flags] +``` + +Using the `-f` option to follow: + +```shell +kubectl cnp report cluster cluster-example -f +``` + +Using `--tail` option to display 3 lines from each pod and the `-f` option +to follow: + +```shell +kubectl cnp report cluster cluster-example -f --tail 3 +``` + +``` json +{"level":"info","ts":"2023-06-30T13:37:33Z","logger":"postgres","msg":"2023-06-30 13:37:33.142 UTC [26] LOG: ending log output to stderr","source":"/controller/log/postgres","logging_pod":"cluster-example-3"} +{"level":"info","ts":"2023-06-30T13:37:33Z","logger":"postgres","msg":"2023-06-30 13:37:33.142 UTC [26] HINT: Future log output will go to log destination \"csvlog\".","source":"/controller/log/postgres","logging_pod":"cluster-example-3"} +… +… +``` + +With the `-o` option omitted, and with `--output` specified: + +``` sh +kubectl cnp logs cluster cluster-example --output my-cluster.log + +Successfully written logs to "my-cluster.log" +``` + ### Destroy The `kubectl cnp destroy` command helps remove an instance and all the @@ -826,11 +944,16 @@ kubectl cnp fio -n Refer to the [Benchmarking fio section](benchmarking.md#fio) for more details. -### Requesting a new base backup +### Requesting a new physical backup The `kubectl cnp backup` command requests a new physical base backup for an existing Postgres cluster by creating a new `Backup` resource. +!!! Info + From release 1.21, the `backup` command accepts a new flag, `-m` + to specify the backup method. + To request a backup using volume snapshots, set `-m volumeSnapshot` + The following example requests an on-demand backup for a given cluster: ```shell @@ -844,10 +967,17 @@ kubectl cnp backup cluster-example backup/cluster-example-20230121002300 created ``` -By default, new created backup will use the backup target policy defined -in cluster to choose which instance to run on. You can also use `--backup-target` -option to override this policy. please refer to [Backup and Recovery](backup_recovery.md) -for more information about backup target. +By default, a newly created backup will use the backup target policy defined +in the cluster to choose which instance to run on. +However, you can override this policy with the `--backup-target` option. + +In the case of volume snapshot backups, you can also use the `--online` option +to request an online/hot backup or an offline/cold one: additionally, you can +also tune online backups by explicitly setting the `--immediate-checkpoint` and +`--wait-for-archive` options. + +The ["Backup" section](./backup.md) contains more information about +the configuration settings. ### Launching psql @@ -862,7 +992,7 @@ it from the actual pod. This means that you will be using the `postgres` user. ```shell kubectl cnp psql cluster-example -psql (15.3) +psql (16.2 (Debian 16.2-1.pgdg110+1)) Type "help" for help. 
postgres=# @@ -873,7 +1003,7 @@ select to work against a replica by using the `--replica` option: ```shell kubectl cnp psql --replica cluster-example -psql (15.3) +psql (16.2 (Debian 16.2-1.pgdg110+1)) Type "help" for help. @@ -901,44 +1031,335 @@ kubectl cnp psql cluster-example -- -U postgres ### Snapshotting a Postgres cluster -The `kubectl cnp snapshot` creates consistent snapshots of a Postgres -`Cluster` by: +!!! Warning + The `kubectl cnp snapshot` command has been removed. + Please use the [`backup` command](#requesting-a-new-physical-backup) to request + backups using volume snapshots. -1. choosing a replica Pod to work on -2. fencing the replica -3. taking the snapshot -4. unfencing the replica +### Using pgAdmin4 for evaluation/demonstration purposes only -!!! Warning - A cluster already having a fenced instance cannot be snapshotted. +[pgAdmin](https://www.pgadmin.org/) stands as the most popular and feature-rich +open-source administration and development platform for PostgreSQL. +For more information on the project, please refer to the official +[documentation](https://www.pgadmin.org/docs/). -At the moment, this command can be used only for clusters having at least one -replica: that replica will be shut down by the fencing procedure to ensure the -snapshot to be consistent (cold backup). As the development of -declarative support for Kubernetes' `VolumeSnapshot` API continues, -this limitation will be removed, allowing you to take online backups -as business continuity requires. +Given that the pgAdmin Development Team maintains official Docker container +images, you can install pgAdmin in your environment as a standard +Kubernetes deployment. !!! Important - Even if the procedure will shut down a replica, the primary - Pod will not be involved. + Deployment of pgAdmin in Kubernetes production environments is beyond the + scope of this document and, more broadly, of the EDB Postgres for Kubernetes project. -The `kubectl cnp snapshot` command requires the cluster name: +However, **for the purposes of demonstration and evaluation**, EDB Postgres for Kubernetes +offers a suitable solution. The `cnp` plugin implements the `pgadmin4` +command, providing a straightforward method to connect to a given database +`Cluster` and navigate its content in a local environment such as `kind`. -```shell -kubectl cnp snapshot cluster-example +For example, you can install a demo deployment of pgAdmin4 for the +`cluster-example` cluster as follows: -waiting for cluster-example-3 to be fenced -waiting for VolumeSnapshot cluster-example-3-1682539624 to be ready to use -unfencing pod cluster-example-3 +```sh +kubectl cnp pgadmin4 cluster-example ``` -The `VolumeSnapshot` resource will be created with an empty -`VolumeSnapshotClass` reference. That resource is intended by be used by the -`VolumeSnapshotClass` configured as default. +This command will produce: -A specific `VolumeSnapshotClass` can be requested via the `-c` option: +```output +ConfigMap/cluster-example-pgadmin4 created +Deployment/cluster-example-pgadmin4 created +Service/cluster-example-pgadmin4 created +Secret/cluster-example-pgadmin4 created -```shell -kubectl cnp snapshot cluster-example -c longhorn +[...] +``` + +After deploying pgAdmin, forward the port using kubectl and connect +through your browser by following the on-screen instructions. 
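The port forwarding typically looks like the sketch below. The Service name comes from the output above, while the local and target ports are assumptions — check the port actually exposed by the Service with `kubectl get svc cluster-example-pgadmin4` before running it.

```sh
# Forward a local port to the pgAdmin Service created by the plugin,
# then open http://localhost:8080 in your browser
kubectl port-forward svc/cluster-example-pgadmin4 8080:80
```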
+ +![Screenshot of desktop installation of pgAdmin](images/pgadmin4.png) + +As usual, you can use the `--dry-run` option to generate the YAML file: + +```sh +kubectl cnp pgadmin4 --dry-run cluster-example +``` + +pgAdmin4 can be installed in either desktop or server mode, with the default +being server. + +In `server` mode, authentication is required using a randomly generated password, +and users must manually specify the database to connect to. + +On the other hand, `desktop` mode initiates a pgAdmin web interface without +requiring authentication. It automatically connects to the `app` database as the +`app` user, making it ideal for quick demos, such as on a local deployment using +`kind`: + +```sh +kubectl cnp pgadmin4 --mode desktop cluster-example ``` + +After concluding your demo, ensure the termination of the pgAdmin deployment by +executing: + +```sh +kubectl cnp pgadmin4 --dry-run cluster-example | kubectl delete -f - +``` + +!!! Warning + Never deploy pgAdmin in production using the plugin. + +### Logical Replication Publications + +The `cnp publication` command group is designed to streamline the creation and +removal of [PostgreSQL logical replication publications](https://www.postgresql.org/docs/current/logical-replication-publication.html). +Be aware that these commands are primarily intended for assisting in the +creation of logical replication publications, particularly on remote PostgreSQL +databases. + +!!! Warning + It is crucial to have a solid understanding of both the capabilities and + limitations of PostgreSQL's native logical replication system before using + these commands. + In particular, be mindful of the [logical replication restrictions](https://www.postgresql.org/docs/current/logical-replication-restrictions.html). + +#### Creating a new publication + +To create a logical replication publication, use the `cnp publication create` +command. The basic structure of this command is as follows: + +```sh +kubectl cnp publication create \ + --publication \ + [--external-cluster ] + [options] +``` + +There are two primary use cases: + +- With `--external-cluster`: Use this option to create a publication on an + external cluster (i.e. defined in the `externalClusters` stanza). The commands + will be issued from the ``, but the publication will be for the + data in ``. + +- Without `--external-cluster`: Use this option to create a publication in the + `` PostgreSQL `Cluster` (by default, the `app` database). + +!!! Warning + When connecting to an external cluster, ensure that the specified user has + sufficient permissions to execute the `CREATE PUBLICATION` command. + +You have several options, similar to the [`CREATE PUBLICATION`](https://www.postgresql.org/docs/current/sql-createpublication.html) +command, to define the group of tables to replicate. Notable options include: + +- If you specify the `--all-tables` option, you create a publication `FOR ALL TABLES`. +- Alternatively, you can specify multiple occurrences of: + - `--table`: Add a specific table (with an expression) to the publication. + - `--schema`: Include all tables in the specified database schema (available + from PostgreSQL 15). + +The `--dry-run` option enables you to preview the SQL commands that the plugin +will execute. 
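For instance, combining the options above, a hypothetical invocation to preview a publication covering all tables — without creating anything — could look like this:

```sh
# Print the CREATE PUBLICATION statement instead of executing it
kubectl cnp publication create source-cluster \
  --publication=app --all-tables --dry-run
```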
+ +For additional information and detailed instructions, type the following +command: + +```sh +kubectl cnp publication create --help +``` + +##### Example + +Given a `source-cluster` and a `destination-cluster`, we would like to create a +publication for the data on `source-cluster`. +The `destination-cluster` has an entry in the `externalClusters` stanza pointing +to `source-cluster`. + +We can run: + +``` sh +kubectl cnp publication create destination-cluster \ + --external-cluster=source-cluster --all-tables +``` + +which will create a publication for all tables on `source-cluster`, running +the SQL commands on the `destination-cluster`. + +Or instead, we can run: + +``` sh +kubectl cnp publication create source-cluster \ + --publication=app --all-tables +``` + +which will create a publication named `app` for all the tables in the +`source-cluster`, running the SQL commands on the source cluster. + +!!! Info + There are two sample files that have been provided for illustration and inspiration: + [logical-source](../samples/cluster-example-logical-source.yaml) and + [logical-destination](../samples/cluster-example-logical-destination.yaml). + +#### Dropping a publication + +The `cnp publication drop` command seamlessly complements the `create` command +by offering similar key options, including the publication name, cluster name, +and an optional external cluster. You can drop a `PUBLICATION` with the +following command structure: + +```sh +kubectl cnp publication drop \ + --publication \ + [--external-cluster ] + [options] +``` + +To access further details and precise instructions, use the following command: + +```sh +kubectl cnp publication drop --help +``` + +### Logical Replication Subscriptions + +The `cnp subscription` command group is a dedicated set of commands designed +to simplify the creation and removal of +[PostgreSQL logical replication subscriptions](https://www.postgresql.org/docs/current/logical-replication-subscription.html). +These commands are specifically crafted to aid in the establishment of logical +replication subscriptions, especially when dealing with remote PostgreSQL +databases. + +!!! Warning + Before using these commands, it is essential to have a comprehensive + understanding of both the capabilities and limitations of PostgreSQL's + native logical replication system. + In particular, be mindful of the [logical replication restrictions](https://www.postgresql.org/docs/current/logical-replication-restrictions.html). + +In addition to subscription management, we provide a helpful command for +synchronizing all sequences from the source cluster. While its applicability +may vary, this command can be particularly useful in scenarios involving major +upgrades or data import from remote servers. + +#### Creating a new subscription + +To create a logical replication subscription, use the `cnp subscription create` +command. The basic structure of this command is as follows: + +```sh +kubectl cnp subscription create \ + --subscription \ + --publication \ + --external-cluster \ + [options] +``` + +This command configures a subscription directed towards the specified +publication in the designated external cluster, as defined in the +`externalClusters` stanza of the ``. 
+ +For additional information and detailed instructions, type the following +command: + +```sh +kubectl cnp subscription create --help +``` + +##### Example + +As in the section on publications, we have a `source-cluster` and a +`destination-cluster`, and we have already created a publication called +`app`. + +The following command: + +``` sh +kubectl cnp subscription create destination-cluster \ + --external-cluster=source-cluster \ + --publication=app --subscription=app +``` + +will create a subscription for `app` on the destination cluster. + +!!! Warning + Prioritize testing subscriptions in a non-production environment to ensure + their effectiveness and identify any potential issues before implementing them + in a production setting. + +!!! Info + There are two sample files that have been provided for illustration and inspiration: + [logical-source](../samples/cluster-example-logical-source.yaml) and + [logical-destination](../samples/cluster-example-logical-destination.yaml). + +#### Dropping a subscription + +The `cnp subscription drop` command seamlessly complements the `create` command. +You can drop a `SUBSCRIPTION` with the following command structure: + +```sh +kubectl cnp subcription drop \ + --subscription \ + [options] +``` + +To access further details and precise instructions, use the following command: + +```sh +kubectl cnp subscription drop --help +``` + +#### Synchronizing sequences + +One notable constraint of PostgreSQL logical replication, implemented through +publications and subscriptions, is the lack of sequence synchronization. This +becomes particularly relevant when utilizing logical replication for live +database migration, especially to a higher version of PostgreSQL. A crucial +step in this process involves updating sequences before transitioning +applications to the new database (*cutover*). + +To address this limitation, the `cnp subscription sync-sequences` command +offers a solution. This command establishes a connection with the source +database, retrieves all relevant sequences, and subsequently updates local +sequences with matching identities (based on database schema and sequence +name). + +You can use the command as shown below: + +```sh +kubectl cnp subscription sync-sequences \ + --subscription \ + +``` + +For comprehensive details and specific instructions, utilize the following +command: + +```sh +kubectl cnp subscription sync-sequences --help +``` + +##### Example + +As in the previous sections for publication and subscription, we have +a `source-cluster` and a `destination-cluster`. The publication and the +subscription, both called `app`, are already present. + +The following command will synchronize the sequences involved in the +`app` subscription, from the source cluster into the destination cluster. + +``` sh +kubectl cnp subscription sync-sequences destination-cluster \ + --subscription=app +``` + +!!! Warning + Prioritize testing subscriptions in a non-production environment to + guarantee their effectiveness and detect any potential issues before deploying + them in a production setting. + +## Integration with K9s + +The `cnp` plugin can be easily integrated in [K9s](https://k9scli.io/), a +popular terminal-based UI to interact with Kubernetes clusters. + +See [`k9s/plugins.yml`](../samples/k9s/plugins.yml) for details. 
diff --git a/product_docs/docs/postgres_for_kubernetes/1/logging.mdx b/product_docs/docs/postgres_for_kubernetes/1/logging.mdx index 6b72fc1e955..6689e368503 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/logging.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/logging.mdx @@ -181,7 +181,7 @@ for more details about each field in a record. ## EDB Audit logs Clusters that are running on EDB Postgres Advanced Server (EPAS) -can enable [EDB Audit](https://www.enterprisedb.com/docs/epas/latest/epas_guide/03_database_administration/05_edb_audit_logging/) as follows: +can enable [EDB Audit](/epas/latest/epas_security_guide/05_edb_audit_logging/) as follows: ```yaml apiVersion: postgresql.k8s.enterprisedb.io/v1 @@ -264,7 +264,7 @@ See the example below: } ``` -See EDB [Audit file](https://www.enterprisedb.com/docs/epas/latest/epas_guide/03_database_administration/05_edb_audit_logging/) +See EDB [Audit file](/epas/latest/epas_security_guide/05_edb_audit_logging/) for more details about the records' fields. ## Other logs diff --git a/product_docs/docs/postgres_for_kubernetes/1/replica_cluster.mdx b/product_docs/docs/postgres_for_kubernetes/1/replica_cluster.mdx index e8c5b753880..c55a355ced3 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/replica_cluster.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/replica_cluster.mdx @@ -121,7 +121,6 @@ user are set to the default, `app`. If the PostgreSQL cluster being restored uses different names, you must specify them as documented in [Configure the application database](bootstrap.md#configure-the-application-database). You should also consider copying over the application user secret from the original cluster and keep it synchronized with the source. -See ["About PostgreSQL Roles"](#about-postgresql-roles) for more details. In the `externalClusters` section, remember to use the right namespace for the host in the `connectionParameters` sub-section. @@ -174,7 +173,6 @@ user are set to the default, `app`. If the PostgreSQL cluster being restored uses different names, you must specify them as documented in [Configure the application database](recovery.md#configure-the-application-database). You should also consider copying over the application user secret from the original cluster and keep it synchronized with the source. -See ["About PostgreSQL Roles"](#about-postgresql-roles) for more details. In the `externalClusters` section, take care to use the right namespace in the `endpointURL` and the `connectionParameters.host`. @@ -225,7 +223,6 @@ uses different names, you must specify them as documented in [Configure the application database](recovery.md#configure-the-application-database). You should also consider copying over the application user secret from the original cluster and keep it synchronized with the source. -See ["About PostgreSQL Roles"](#about-postgresql-roles) for more details. ## Demoting a Primary to a Replica Cluster diff --git a/product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx b/product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx index b75441ee1b2..26385fee0ef 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx @@ -16,7 +16,7 @@ the ["Backup on object stores" section](backup_barmanobjectstore.md) to set up the WAL archive. !!! 
Info - Please refer to [`BarmanObjectStoreConfiguration`](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-barmanobjectstoreconfiguration) + Please refer to [`BarmanObjectStoreConfiguration`](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-BarmanObjectStoreConfiguration) in the API reference for a full list of options. If required, you can choose to compress WAL files as soon as they diff --git a/scripts/fileProcessor/processors/cnp/rewrite-mdextra-anchors.mjs b/scripts/fileProcessor/processors/cnp/rewrite-mdextra-anchors.mjs index c88a709a18c..6e123319b16 100644 --- a/scripts/fileProcessor/processors/cnp/rewrite-mdextra-anchors.mjs +++ b/scripts/fileProcessor/processors/cnp/rewrite-mdextra-anchors.mjs @@ -45,8 +45,6 @@ export const process = async (filename, content) => { function headingRewriter() { const anchorRE = /{#([^}]+)}/; return (tree) => { - // link rewriter: - // - update links to supported_releases.md to point to /resources/platform-compatibility#pgk8s visit(tree, "heading", (node, index, parent) => { let text = mdast2string(node); let anchor = text.match(anchorRE); diff --git a/scripts/fileProcessor/processors/cnp/update-links.mjs b/scripts/fileProcessor/processors/cnp/update-links.mjs index a78a15c5c6f..69a48e41728 100644 --- a/scripts/fileProcessor/processors/cnp/update-links.mjs +++ b/scripts/fileProcessor/processors/cnp/update-links.mjs @@ -40,7 +40,7 @@ function linkRewriter() { return (tree) => { let fileMetadata = {}; // link rewriter: - // - update links to supported_releases.md to point to /resources/platform-compatibility#pgk8s + // - update links to supported_releases.md to point to https://www.enterprisedb.com/resources/platform-compatibility#pgk8s // - update links to release_notes to rel_notes // - update links to appendixes/* to /* // - update links *from* appendixes/* to /* @@ -58,7 +58,7 @@ function linkRewriter() { if (node.url.startsWith("appendixes")) node.url = node.url.replace("appendixes/", ""); else if (node.url === "supported_releases.md") - node.url = "/resources/platform-compatibility#pgk8s"; + node.url = "https://www.enterprisedb.com/resources/platform-compatibility#pgk8s"; else if (node.url === "release_notes.md") node.url = "rel_notes"; else if (node.url === "release_notes.md") From 1861088a76d8b8a35f5b172e42b5cee0b3aed6d4 Mon Sep 17 00:00:00 2001 From: Josh Heyer Date: Thu, 1 Aug 2024 14:39:11 +0000 Subject: [PATCH 3/3] Add release notes (all upstream) --- .../1/rel_notes/1_22_5_rel_notes.mdx | 12 ++++++++++++ .../1/rel_notes/1_23_3_rel_notes.mdx | 12 ++++++++++++ .../postgres_for_kubernetes/1/rel_notes/index.mdx | 4 ++++ 3 files changed, 28 insertions(+) create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_5_rel_notes.mdx create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_3_rel_notes.mdx diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_5_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_5_rel_notes.mdx new file mode 100644 index 00000000000..6aa1e2ded81 --- /dev/null +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_5_rel_notes.mdx @@ -0,0 +1,12 @@ +--- +title: "EDB Postgres for Kubernetes 1.22.5 release notes" +navTitle: "Version 1.22.5" +--- + +Released: 01 Aug 2024 + +This release of EDB Postgres for Kubernetes includes the following: + +| Type | Description | +| -------------- | 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Upstream merge | Merged with community CloudNativePG 1.22.5. See the community [Release Notes](https://cloudnative-pg.io/documentation/1.22/release_notes/v1.22/). | diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_3_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_3_rel_notes.mdx new file mode 100644 index 00000000000..7bb3542b67e --- /dev/null +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_3_rel_notes.mdx @@ -0,0 +1,12 @@ +--- +title: "EDB Postgres for Kubernetes 1.23.3 release notes" +navTitle: "Version 1.23.3" +--- + +Released: 01 Aug 2024 + +This release of EDB Postgres for Kubernetes includes the following: + +| Type | Description | +| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Upstream merge | Merged with community CloudNativePG 1.23.3. See the community [Release Notes](https://cloudnative-pg.io/documentation/1.23/release_notes/v1.23/). | diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx index daa6a1d7a21..61da24ef703 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx @@ -4,9 +4,11 @@ navTitle: "Release notes" redirects: - ../release_notes navigation: +- 1_23_3_rel_notes - 1_23_2_rel_notes - 1_23_1_rel_notes - 1_23_0_rel_notes +- 1_22_5_rel_notes - 1_22_4_rel_notes - 1_22_3_rel_notes - 1_22_2_rel_notes @@ -100,9 +102,11 @@ The EDB Postgres for Kubernetes documentation describes the major version of EDB | Version | Release date | Upstream merges | | -------------------------- | ------------ | ------------------------------------------------------------------------------------------- | +| [1.23.3](1_23_3_rel_notes) | 01 Aug 2024 | Upstream [1.23.3](https://cloudnative-pg.io/documentation/1.23/release_notes/v1.23/) | | [1.23.2](1_23_2_rel_notes) | 13 Jun 2024 | Upstream [1.23.2](https://cloudnative-pg.io/documentation/1.23/release_notes/v1.23/) | | [1.23.1](1_23_1_rel_notes) | 29 Apr 2024 | Upstream [1.23.1](https://cloudnative-pg.io/documentation/1.23/release_notes/v1.23/) | | [1.23.0](1_23_0_rel_notes) | 24 Apr 2024 | Upstream [1.23.0](https://cloudnative-pg.io/documentation/1.23/release_notes/v1.23/) | +| [1.22.5](1_22_5_rel_notes) | 01 Aug 2024 | Upstream [1.22.5](https://cloudnative-pg.io/documentation/1.22/release_notes/v1.22/) | | [1.22.4](1_22_4_rel_notes) | 13 Jun 2024 | Upstream [1.22.4](https://cloudnative-pg.io/documentation/1.22/release_notes/v1.22/) | | [1.22.3](1_22_3_rel_notes) | 24 Apr 2024 | Upstream [1.22.3](https://cloudnative-pg.io/documentation/1.22/release_notes/v1.22/) | | [1.22.2](1_22_2_rel_notes) | 22 Mar 2024 | Upstream [1.22.2](https://cloudnative-pg.io/documentation/1.22/release_notes/v1.22/) |