
PostgreSQL Operator Commands

pgo Commands

Prior to using pgo, users will need to specify the postgres-operator URL as follows:

kubectl get service postgres-operator
NAME                CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
postgres-operator   10.104.47.110   <none>        8443/TCP   7m
export CO_APISERVER_URL=https://10.104.47.110:8443
pgo version

pgo version

To see which versions of the pgo client and postgres-operator you are running, use the following:

pgo version

pgo create cluster

To create a cluster, use the following:

pgo create cluster mycluster

A more complex example is to create a series of clusters such as:

pgo create cluster xraydb --series=3 --labels=project=xray --policies=xrayapp,rlspolicy

In the example above, we provision 3 clusters with a number appended to the base cluster name, apply a user defined label to each cluster, and apply user defined policies to each cluster after it is created.
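You can then inspect one of the generated clusters; the xraydb1 name assumes the series index is appended directly to the base name, matching the cluster names used in the user examples later in this document:

pgo show cluster xraydb1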

You can then view a cluster you created:

pgo show cluster mycluster

Also, if you would like to see JSON formatted output, add the -o json flag:

pgo show cluster mycluster -o json

The output will give you the current status of the database pod and the IP address of the database service. If you have the psql client installed on your test system, you can connect to the database using the service IP address:

psql -h 10.105.121.12 -U postgres postgres

More details on user management are available below; however, note that default user credentials are defined in the file $COROOT/deploy/create-secrets.sh and are created when the Operator is deployed. The following user accounts and passwords are created by default for connecting to the PostgreSQL clusters:

  • username: postgres, password: password

  • username: primaryuser, password: password

  • username: testuser, password: password
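For example, using the service IP from the earlier psql example, you can connect with the default testuser account:

psql -h 10.105.121.12 -U testuser postgres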

You can view all clusters using the special keyword all:

pgo show cluster all

You can filter the results based on the Postgres Version:

pgo show cluster all --version=9.6.2

You can also add metrics collection to a cluster by using the --metrics command flag as follows:

pgo create cluster testcluster --metrics

This command flag causes a crunchy-collect container to be added to the database cluster pod and enables metrics collection on that database pod. For this to work, you will need to configure the Crunchy metrics example as found in the Crunchy Container Suite.

New clusters typically pick up the container image version to use based on the pgo configuration file’s CCP_IMAGE_TAG setting. You can override this value using the --ccp-image-tag command line flag:

pgo create cluster mycluster --ccp-image-tag=centos7-9.6.5-1.6.0

You can also add a pgpool deployment into a cluster by using the --pgpool command flag as follows:

pgo create cluster testcluster --pgpool

This will cause a crunchy-pgpool container to be started and initially configured for the cluster using the testuser credential. See below for more details on running a pgpool deployment as part of your cluster.

pgo backup

You can start a backup job for a cluster as follows:

pgo backup mycluster

You can view the backup:

pgo show backup mycluster

View the PVC folder and the backups contained therein:

pgo show pvc mycluster-backup-pvc
pgo show pvc mycluster-backup-pvc --pvc-root=mycluster-backups

The output from this command is useful: you can copy a backup snapshot path from it and use that path to restore a database, essentially cloning a database from an existing backup archive.

For example, to restore a database from a backup archive:

pgo create cluster restoredb --backup-path=mycluster-backups/2017-03-27-13-56-49 --backup-pvc=mycluster-backup-pvc --secret-from=mycluster

This will create a new database called restoredb based on the backup found in mycluster-backups/2017-03-27-13-56-49 and the secrets of the mycluster cluster.

Selectors can be used to perform backups as well, for example:

pgo backup  --selector=project=xray

In this example, any cluster that matches the selector will cause a backup job to be created.
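Selector values follow the Kubernetes label-selector style, so, assuming pgo passes the selector through unchanged, multiple terms can be combined:

pgo backup --selector=project=xray,env=research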

When you request a backup, pgo will prompt you to confirm, because this action deletes any existing backup job for the cluster. The backup files are left intact, but the existing Kubernetes Job is removed before a new Job with the same name is created.

pgo delete backup

To delete a backup enter the following:

pgo delete backup mycluster

pgo delete cluster

You can remove a cluster by running:

pgo delete cluster restoredb

Note that this command will not remove the PVC associated with this cluster.

Selectors also apply to the delete command as follows:

pgo delete cluster  --selector=project=xray

This command will cause any cluster matching the selector to be removed.

You can remove a cluster and its data files by running:

pgo delete cluster restoredb --delete-data

You can remove a cluster, its data files, and all backups by running:

pgo delete cluster restoredb --delete-data --delete-backups

When you specify a destructive delete like the above, you will be prompted to confirm. If you don’t want to be prompted, add the --no-prompt command line flag.
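For example, to perform the destructive delete above non-interactively:

pgo delete cluster restoredb --delete-data --delete-backups --no-prompt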

pgo scale

When you create a cluster, the output shows that a variety of Kubernetes objects were created, including:

  • a Deployment holding the primary PostgreSQL database

  • a Deployment holding the replica PostgreSQL database

  • a service for the primary database

  • a service for the replica databases

Since Postgres is a single-primary database by design, the primary Deployment is set to a replica count of 1; it cannot scale beyond 1.

With Postgres, you can create any number of replicas, each of which connects to the primary, forming a streaming-replication Postgres cluster. The Postgres replicas are read-only, whereas the primary is read-write. To create a Postgres replica, enter a command such as:

pgo scale mycluster

The pgo scale command is additive: each time you execute it, another replica is created and added to the Postgres cluster.

There are two service connections available to the PostgreSQL cluster: one to the primary database, which allows read-write SQL processing, and one to the set of read-only replica databases. The replica service performs round-robin load balancing across the replica databases.

You can connect to the primary database and verify that it is replicating to the replica databases as follows:

psql -h 10.107.180.159 -U postgres postgres -c 'table pg_stat_replication'
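The same check can be written as an explicit query against the pg_stat_replication view (standard PostgreSQL, not pgo-specific); one row should appear per connected replica:

psql -h 10.107.180.159 -U postgres postgres -c 'SELECT client_addr, state, sync_state FROM pg_stat_replication;'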


The scale command also accepts a --node-label flag, which can be used to influence which Kubernetes node the replica will be scheduled on.

pgo scale mycluster --node-label=speed=fast

If you don’t specify a --node-label flag, a node affinity rule of NotIn is applied to prefer scheduling the replica on a node the primary is not running on.

You can also dictate what container resource and storage configurations will be used for a replica by passing in extra command flags:

pgo scale mycluster --storage-config=storage1 --resources-config=small

pgo upgrade

You can perform a minor Postgres version upgrade of either a database or cluster as follows:

pgo upgrade mycluster

When you run this command, it will cause the operator to delete the existing containers of the database or cluster and recreate them using the currently defined Postgres container image specified in your pgo configuration file.

The database data files remain untouched; only the container image is updated. This upgrades the Postgres server version only.

You can perform a major Postgres version upgrade of either a database or cluster as follows:

pgo upgrade mycluster --upgrade-type=major

When you run this command, it will cause the operator to delete the existing containers of the database or cluster and recreate them using the currently defined Postgres container image specified in your pgo configuration file.

The database data files are converted to the new major Postgres version as specified by the current Postgres image version in your pgo configuration file.

In this scenario, the upgrade is performed by the Postgres pg_upgrade utility which is containerized in the crunchydata/crunchy-upgrade container. The operator will create a Job which runs the upgrade container, using the existing Postgres database files as input, and output the updated database files to a new PVC.

Once the upgrade job is completed, the operator will create the original database or cluster container mounted with the new PVC which contains the upgraded database files.

As the upgrade is processed, the status of the pgupgrade CRD is updated to give the user some insight into how the upgrade is proceeding. Upgrades like this can take a long time if your database is large. The operator creates a watch on the upgrade job to know when and how to proceed.
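To inspect that status yourself, you can query the CRD with kubectl; the pgupgrades resource name and the use of the cluster name as the object name are assumptions, mirroring the pgtask example later in this document:

kubectl get pgupgrades mycluster -o yaml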

Likewise, you can pin the upgrade to a specific container image version using the --ccp-image-tag command line flag:

pgo upgrade mycluster --ccp-image-tag=centos7-9.6.8-1.8.1
pgo upgrade mycluster --upgrade-type=major --ccp-image-tag=centos7-9.6.8-1.8.1

pgo delete upgrade

To remove an upgrade CRD, issue the following:

pgo delete upgrade

pgo show pvc

You can view the files on a PVC as follows:

pgo show pvc mycluster-pvc

In this example, the PVC is mycluster-pvc. This command is useful in some cases to examine what files are on a given PVC.

In the case where you want to list a specific path on a PVC you can specify the path option as follows:

pgo show pvc mycluster-pvc --pvc-root=mycluster-backups

You can also list all PVCs that are created by the operator using:

pgo show pvc all

pgo show cluster

You can view the passwords used by the cluster as follows:

pgo show cluster mycluster --show-secrets=true

Passwords are generated if not specified in your pgo configuration.

pgo test

You can test the database connections to a cluster:

pgo test mycluster

This command will test each service defined for the cluster using the postgres, primary, and normal user accounts defined for the cluster. The cluster credentials are accessed and used to test the database connections. The equivalent psql command is printed out as connections are tried, along with the connection status.

Like other commands, you can use the selector to test a series of clusters:

pgo test --selector=env=research
pgo test all

You can get output using the --output flag:

pgo test all -o json

pgo create policy

To create a policy use the following syntax:

pgo create policy policy1 --in-file=/tmp/policy1.sql
pgo create policy policy1 --url=https://someurl/policy1.sql

When you execute this command, it creates a policy named policy1 using the input file /tmp/policy1.sql. On the server, it also creates a PgPolicy CRD named policy1 that you can examine as follows:

kubectl get pgpolicies policy1 -o json
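The input file itself is plain SQL; a policy is simply a SQL script executed against the cluster. A minimal illustrative /tmp/policy1.sql (the table name here is hypothetical) might be:

-- create an audit table if it does not already exist (illustrative only)
CREATE TABLE IF NOT EXISTS policy1_audit (
    id      serial PRIMARY KEY,
    created timestamptz NOT NULL DEFAULT now()
);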

Policies are automatically applied to any cluster you create if you define a CLUSTER.POLICIES value in your pgo.yaml configuration. Policy SQL is executed as the postgres user.
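As a sketch, the relevant fragment of pgo.yaml would resemble the following; the exact nesting is an assumption, so check the pgo.yaml shipped with your deployment:

CLUSTER:
  POLICIES: xrayapp,rlspolicy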

To view policies:

pgo show policy all

pgo delete policy

To delete a policy use the following form:

pgo delete policy policy1

pgo apply

To apply an existing policy to a set of clusters, issue a command like this:

pgo apply policy1 --selector=name=mycluster

When you execute this command, it looks up clusters that have a label value of name=mycluster, applies the policy1 label to each of those clusters, and executes the policy SQL against each using the postgres user account.

Warning
Policies are executed as the superuser in PostgreSQL; therefore, take caution when using them.

If you want to view the clusters that have a specific policy applied to them, you can use the --selector flag as follows to filter on a policy name (e.g. policy1):

pgo show cluster --selector=policy1=pgpolicy

pgo user

To create a new Postgres user in the mycluster cluster, execute:

pgo create user sally --selector=name=mycluster

To delete a Postgres user in the mycluster cluster, execute:

pgo user --delete-user=sally --selector=name=mycluster

To delete that user in all clusters:

pgo user --delete-user=sally

To change the password for a user in the mycluster cluster:

pgo user --change-password=sally --selector=name=mycluster

The password is generated and applied to the user sally.

To see user passwords that have expired past a certain number of days in the mycluster cluster:

pgo user --expired=7 --selector=name=mycluster

To assign users to a cluster:

pgo create user user1 --valid-days=30 --managed --db=userdb --selector=name=xraydb1

In this example, a user named user1 is created with a password valid-until date set to expire in 30 days. That user is granted access to the userdb database, and an associated secret is created to hold the generated password. The user is created on every cluster that matches the selector value.
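Because a secret is created to hold the generated password, you can inspect it with kubectl once you know its name; the <cluster>-<user>-secret naming shown here is an assumption:

kubectl get secret xraydb1-user1-secret -o yaml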

To change a user password:

pgo user --change-password=user1 --valid-days=10 --selector=name=xraydb1

In this example, the password for user1 is changed to a generated value and the valid-until expiration date is set to 10 days from now; the change takes effect on all clusters that match the selector. If you specify --valid-days=-1, the password will never expire (i.e. infinity).

To drop a user:

pgo user --delete-user=user3   --selector=project=xray

To see which passwords are set to expire in a given number of days:

pgo user --expired=10  --selector=project=xray

In this example, any clusters that match the selector are queried to see if any users are set to expire in 10 days.

To update expired passwords in a cluster:

pgo user --update-passwords --selector=name=mycluster

pgo label

You can apply a user defined label to a cluster as follows:

pgo label --label=env=research  --selector=project=xray

In this example, we apply a label of env=research to any clusters that have an existing label of project=xray applied.

pgo load

Loading CSV files is currently supported. You can test this by creating a SQL policy that creates a database table to be loaded with the CSV data. For example:

pgo create policy xrayapp --in-file=$COROOT/examples/policy/xrayapp.sql

Then you can load a sample CSV file into a database as follows:

pgo load --load-config=$COROOT/examples/sample-load-config.json  --selector=name=mycluster

The loading is based on a load definition found in the sample-load-config.json file. In that file, the data to be loaded is specified. When the pgo load command is executed, Jobs will be created to perform the loading for each cluster that matches the selector filter.

If you include the --policies flag, any specified policies will be applied prior to the data being loaded. For example:

pgo load --policies="rlspolicy,xrayapp" --load-config=$COROOT/examples/sample-load-config.json --selector=name=mycluster

Likewise, you can load a sample JSON file into a database as follows:

pgo load --policies=jsonload --load-config=$COROOT/examples/sample-json-load-config.json  --selector=name=mycluster

The load configuration file has the following attributes:

Table 1. Load Configuration File Definitions

  • COImagePrefix: the pgo-load image prefix to use for the load job

  • COImageTag: the pgo-load image tag to use for the load job

  • DbDatabase: the database schema to use for loading the data

  • DbUser: the database user to use for loading the data

  • DbPort: the database port of the database to load

  • TableToLoad: the PostgreSQL table to load

  • FilePath: the name of the file to be loaded

  • FileType: either csv or json; determines the type of data to be loaded

  • PVCName: the name of the PVC that holds the data file to be loaded

  • SecurityContext: either fsGroup or SupplementalGroup values
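Putting those attributes together, a sketch of a load configuration such as sample-load-config.json might look like the following; every value is illustrative, so compare against the shipped example file:

{
  "COImagePrefix": "crunchydata",
  "COImageTag": "centos7-9.6.8-1.8.1",
  "DbDatabase": "userdb",
  "DbUser": "testuser",
  "DbPort": "5432",
  "TableToLoad": "xrayapp",
  "FilePath": "sample.csv",
  "FileType": "csv",
  "PVCName": "mycluster-pvc",
  "SecurityContext": ""
}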

pgo failover

Starting with Release 2.6, there is a manual failover command which can be used to promote a replica to a primary role in a PostgreSQL cluster.

This process includes the following actions:

  • pick a target replica to become the new primary

  • delete the current primary deployment to avoid user requests from going to multiple primary databases (split brain)

  • promote the targeted replica using pg_ctl promote, which causes PostgreSQL to go into read-write mode

  • re-label the targeted replica with the primary labels; this matches the primary service selector and causes new requests to the primary to be routed to the new primary (the targeted replica)

The command works like this:

pgo failover mycluster --query

That command will show you a list of replica targets to which you can fail over. Select one of them for the following command:

pgo failover mycluster --target=mycluster-abxq

There is a CRD called pgtask that holds the failover request and its status. You can view the status as follows:

kubectl get pgtasks mycluster-failover -o yaml

Once completed, you will see that a new replica has been started to replace the promoted replica. This happens automatically due to the re-labeling; the Deployment recreates its pod as a result. The failover itself typically takes only a few seconds; however, creating the replacement replica can take longer, depending on how much data is being replicated.