PG4K import for 1.24.next
josh-heyer committed Nov 20, 2024
1 parent 4713c57 commit 15e8427
Showing 25 changed files with 6,309 additions and 553 deletions.
algorithms via `barman-cloud-backup` (for backups) and
- snappy

The compression settings for backups and WALs are independent. See the
[DataBackupConfiguration](https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#DataBackupConfiguration) and
[WALBackupConfiguration](https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#WalBackupConfiguration) sections in
the barman-cloud API reference.

It is important to note that archival time, restore time, and size change
between the algorithms, so the compression algorithm should be chosen
according to your requirements.
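To illustrate, here is a minimal sketch of a `Cluster` declaring independent
compression settings for base backups and WAL archiving; the destination path
and algorithm choices are illustrative, and object store credentials are
omitted:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 1Gi
  backup:
    barmanObjectStore:
      # Illustrative destination; credentials (e.g., s3Credentials) omitted
      destinationPath: s3://my-bucket/backups/
      data:
        compression: bzip2    # compression for base backups
      wal:
        compression: snappy   # compression for WAL archiving
```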
78 changes: 48 additions & 30 deletions product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
to the ["Recovery" section](recovery.md).

### Bootstrap from a live cluster (`pg_basebackup`)

The `pg_basebackup` bootstrap mode allows you to create a new cluster
(*target*) as an exact physical copy of an existing and **binary-compatible**
PostgreSQL instance (*source*) managed by EDB Postgres for Kubernetes, using a valid
*streaming replication* connection. The source instance can be either a primary
or a standby PostgreSQL server. It's crucial to thoroughly review the
requirements section below, as the pros and cons of PostgreSQL physical
replication fully apply.

The primary use cases for this method include:

- Reporting and business intelligence clusters that need to be regenerated
periodically (daily, weekly)
- Test databases containing live data that require periodic regeneration
(daily, weekly, monthly) and anonymization
- Rapid spin-up of a standalone replica cluster
- Physical migrations of EDB Postgres for Kubernetes clusters to different namespaces or
Kubernetes clusters

!!! Important

Avoid using this method, based on physical replication, to migrate an
existing PostgreSQL cluster outside of Kubernetes into EDB Postgres for Kubernetes unless you
are completely certain that all requirements are met and the operation has been
thoroughly tested. The EDB Postgres for Kubernetes community does not endorse this approach
for such use cases and recommends using logical import instead. It is
exceedingly rare that all requirements for physical replication are met in a
way that seamlessly works with EDB Postgres for Kubernetes.

!!! Warning
In its current implementation, this method clones the source PostgreSQL
instance, thereby creating a *snapshot*. Once the cloning process has
finished, the new cluster is immediately started.
Refer to ["Current limitations"](#current-limitations) for more details.

Similar to the `recovery` bootstrap method, once the cloning operation is
complete, the operator takes full ownership of the target cluster, starting
from the first instance. This includes overriding certain configuration
parameters as required by EDB Postgres for Kubernetes, resetting the superuser
password, creating the `streaming_replica` user, managing replicas, and more.
The resulting cluster operates independently from the source instance.

!!! Important
Configuring the network connection between the target and source instances
lies outside the scope of EDB Postgres for Kubernetes documentation, as it
depends heavily on the specific context and environment.

The streaming replication client on the target instance, managed transparently
by `pg_basebackup`, can authenticate on the source instance using one of the
following methods:

1. [Username/password](#usernamepassword-authentication)
2. [TLS client certificate](#tls-certificate-authentication)

The latter is recommended if you are connecting to a source managed by
EDB Postgres for Kubernetes or one configured for TLS authentication. The
first option, however, is the most common form of authentication to a
PostgreSQL server in general, and may be the easiest approach if the source
instance is in a traditional environment outside Kubernetes.
Both authentication methods are detailed below.
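For example, the following is a minimal sketch of a target `Cluster`
bootstrapped with `pg_basebackup` using TLS client certificate authentication;
the source cluster name, service host, and secret names are illustrative and
must match your environment:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-target
spec:
  instances: 3
  storage:
    size: 1Gi
  bootstrap:
    pg_basebackup:
      source: cluster-source
  externalClusters:
    - name: cluster-source
      connectionParameters:
        # Illustrative host; point at the source's read-write service
        host: cluster-source-rw.default.svc
        user: streaming_replica
      sslKey:
        name: cluster-source-replication   # illustrative secret names
        key: tls.key
      sslCert:
        name: cluster-source-replication
        key: tls.crt
      sslRootCert:
        name: cluster-source-ca
        key: ca.crt
```

With username/password authentication, the external cluster entry would
typically reference a password secret for a user with replication privileges
instead of the TLS key material.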

#### Requirements

instance using a second connection (see the `--wal-method=stream` option for
Once the backup is completed, the new instance will be started on a new timeline
and diverge from the source.
For this reason, it is advised to stop all write operations to the source database
before migrating to the target database.

!!! Important
Before you attempt a migration, you must test both the procedure
---
title: 'Declarative Database Management'
originalFilePath: 'src/declarative_database_management.md'
---

Declarative database management enables users to control the lifecycle of
databases via a new Custom Resource Definition (CRD) called `Database`.

A `Database` object is managed by the instance manager of the cluster's
primary instance. This feature is not supported in replica clusters,
as replica clusters lack a primary instance to manage the `Database` object.

### Example: Simple Database Declaration

Below is an example of a basic `Database` configuration:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Database
metadata:
  name: db-one
spec:
  name: one
  owner: app
  cluster:
    name: cluster-example
```

Once the reconciliation cycle completes successfully, the `Database`
status will show an `applied` field set to `true` and an empty `message` field.
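
For instance, a successfully reconciled object might report a status of this
shape (a sketch based only on the fields described above):

```yaml
status:
  applied: true   # the declared database was reconciled successfully
  message: ""     # empty on success; populated when reconciliation fails
```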

### Database Deletion and Reclaim Policies

A finalizer named `k8s.enterprisedb.io/deleteDatabase` is automatically added
to each `Database` object to control its deletion process.

By default, the `databaseReclaimPolicy` is set to `retain`, which means
that if the `Database` object is deleted, the actual PostgreSQL database
is retained for manual management by an administrator.

Alternatively, if the `databaseReclaimPolicy` is set to `delete`,
the PostgreSQL database will be automatically deleted when the `Database`
object is removed.

### Example: Database with Delete Reclaim Policy

The following example illustrates a `Database` object with a `delete`
reclaim policy:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Database
metadata:
  name: db-one-with-delete-reclaim-policy
spec:
  databaseReclaimPolicy: delete
  name: two
  owner: app
  cluster:
    name: cluster-example
```

In this case, when the `Database` object is deleted, the corresponding PostgreSQL database will also be removed automatically.
1 change: 0 additions & 1 deletion product_docs/docs/postgres_for_kubernetes/1/evaluation.mdx
Use your EDB account to evaluate Postgres for Kubernetes. If you don't have an a
By default, EDB Postgres for Kubernetes installs the latest available version of Community PostgreSQL.

PostgreSQL container images are available at [quay.io/enterprisedb/postgresql](https://quay.io/repository/enterprisedb/postgresql).

originalFilePath: 'src/installation_upgrade.md'
### Obtaining an EDB subscription token

!!! Important

You must obtain an EDB subscription token to install EDB Postgres for Kubernetes. Without a token, you will not be able to access the EDB private software repositories.

Installing EDB Postgres for Kubernetes requires an EDB Repos 2.0 token to gain access to the EDB private software repositories.

Your account profile page displays the token to use next to **Repos 2.0 Token**

Your token entitles you to access one of two repositories: standard or enterprise.

- `standard` - Includes the operator and the EDB Postgres Extended operand images.
- `enterprise` - Includes the operator and the EDB Postgres Advanced and EDB Postgres Extended operand images.

Set the relevant value, determined by your subscription, as an environment variable `EDB_SUBSCRIPTION_PLAN`.

```shell
EDB_SUBSCRIPTION_PLAN=<standard|enterprise>
EDB_SUBSCRIPTION_TOKEN=<your-token>
```

!!! Warning

The token is sensitive information. Please ensure that you don't expose it to unauthorized users.

You can now proceed with the installation.

The operator can be installed using the provided [Helm chart](https://github.com

### Directly using the operator manifest

The operator can be installed like any other resource in Kubernetes,
through a YAML manifest applied via `kubectl`.

#### Install the EDB pull secret

Before installing EDB Postgres for Kubernetes, you need to create a pull secret for EDB software in the `postgresql-operator-system` namespace.


The pull secret needs to be saved in the namespace where the operator will reside. Create the `postgresql-operator-system` namespace using this command:

```shell
kubectl create namespace postgresql-operator-system
```

There are two different manifests available depending on your subscription plan:

- Standard: The [latest standard operator manifest](https://get.enterprisedb.io/pg4k/pg4k-standard-1.24.1.yaml).
- Enterprise: The [latest enterprise operator manifest](https://get.enterprisedb.io/pg4k/pg4k-enterprise-1.24.1.yaml).

You can install the latest operator manifest for this minor release as
follows:

```sh
kubectl apply --server-side -f \
  https://get.enterprisedb.io/pg4k/pg4k-$EDB_SUBSCRIPTION_PLAN-1.24.1.yaml
```

You can verify that with:

for a more comprehensive example.
one of the allowed ones, or open the webhooks' port (`9443`) on the
firewall.


## Details about the deployment

In Kubernetes, the operator is by default installed in the `postgresql-operator-system`

by applying the manifest of the newer version for plain Kubernetes
installations, or using the native package manager of the used distribution
(please follow the instructions in the above sections).

The second step is automatically triggered after updating the controller. By
default, this initiates a rolling update of every deployed PostgreSQL cluster,
upgrading one instance at a time to use the new instance manager. The rolling
update concludes with a switchover, which is governed by the
`primaryUpdateStrategy` option. The default value, `unsupervised`, completes
the switchover automatically. If set to `supervised`, the user must manually
promote the new primary instance using the `cnp` plugin for `kubectl`.
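
As a sketch of the configuration involved, the strategy is declared per
cluster in its spec; the cluster name and sizing below are illustrative:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 1Gi
  # Require manual promotion (via the cnp plugin) to complete rolling updates
  primaryUpdateStrategy: supervised
```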

!!! Seealso "Rolling updates"
This process is discussed in-depth on the [Rolling Updates](rolling_update.md) page.

the instance manager. This approach does not require a restart of the
PostgreSQL instance, thereby avoiding a switchover within the cluster. This
feature, which is disabled by default, is described in detail below.

### Spread Upgrades

By default, all PostgreSQL clusters are rolled out simultaneously, which may
lead to a spike in resource usage, especially when managing multiple clusters.
EDB Postgres for Kubernetes provides two configuration options at the [operator level](operator_conf.md)
that allow you to introduce delays between cluster roll-outs or even between
instances within the same cluster, helping to distribute resource usage over
time:

- `CLUSTERS_ROLLOUT_DELAY`: Defines the number of seconds to wait between
roll-outs of different PostgreSQL clusters (default: `0`).
- `INSTANCES_ROLLOUT_DELAY`: Defines the number of seconds to wait between
roll-outs of individual instances within the same PostgreSQL cluster (default:
`0`).
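
For illustration, a sketch of these settings follows, assuming the operator
reads its configuration from the default ConfigMap of a plain Kubernetes
installation (verify the name and namespace against the operator
configuration page); the delay values are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Assumed defaults for a plain installation; confirm for your deployment
  name: postgresql-operator-controller-manager-config
  namespace: postgresql-operator-system
data:
  CLUSTERS_ROLLOUT_DELAY: "60"    # seconds between roll-outs of different clusters
  INSTANCES_ROLLOUT_DELAY: "30"   # seconds between instances of the same cluster
```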

### In-place updates of the instance manager

By default, EDB Postgres for Kubernetes issues a rolling update of the cluster