A very flexible and customizable operator written in Go, developed using the Operator Framework to package, install, configure and manage a PostgreSQL database.
The following prerequisites are only required if you would like to install it standalone (without OLM).
- Install Golang v1.13+
- Access to a Kubernetes v1.16.0+ cluster
ℹ️ If you do not have access to a Kubernetes v1.16.0+ cluster, using Minikube is recommended for local tests and development purposes. You may also want to check the blog post How to getting started with Kubernetes?
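For local tests, a cluster with a matching Kubernetes version can be started with Minikube, for example (the version value below is illustrative):

$ minikube start --kubernetes-version=v1.16.0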
The following prerequisite is only required if you would like to contribute to the project.
ℹ️ The following steps will allow you to use this project standalone (without OLM).
The following command creates a local directory and clones this project into it.
$ git clone [email protected]:dev4devs-com/postgresql-operator.git $GOPATH/src/github.com/dev4devs-com/postgresql-operator
Use the following command to install the Operator and Database.
ℹ️ To install, you need to be logged in as a user with cluster privileges, such as the system:admin user.
$ make install
To verify that the installation completed successfully, check the Database Status field of the Database CR in the cluster. When everything has been installed successfully, the expected value is OK.
$ kubectl describe Database -n postgresql-operator | grep Status
...
Status:
Database Status: OK
...
Now it is time to check that all the resources are available as expected. Running kubectl get all -o wide -n postgresql-operator should produce a result similar to the one below.
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
pod/database-5485c7f8d4-6khcw 1/1 Running 1 14m 172.17.0.9 localhost <none>
pod/postgresql-operator-6b6869c474-n75bc 1/1 Running 0 14m 172.17.0.6 localhost <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/database ClusterIP 172.30.87.113 <none> 5432/TCP 14m cr=database,owner=postgresqloperator
service/postgresql-operator-metrics ClusterIP 172.30.111.231 <none> 8383/TCP,8686/TCP 14m name=postgresql-operator
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/database 1 1 1 1 14m database centos/postgresql-96-centos7 cr=database,owner=postgresqloperator
deployment.apps/postgresql-operator 1 1 1 1 14m postgresql-operator quay.io/dev4devs-com/postgresql-operator:master name=postgresql-operator
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/database-5485c7f8d4 1 1 1 14m database centos/postgresql-96-centos7 cr=database,owner=postgresqloperator,pod-template-hash=1041739480
replicaset.apps/postgresql-operator-6b6869c474 1 1 1 14m postgresql-operator quay.io/dev4devs-com/postgresql-operator:master name=postgresql-operator,pod-template-hash=2624257030
This operator can also be installed via OperatorHub.io. Note that the application image is hosted at https://quay.io/repository/dev4devs-com/postgresql-operator.
Tip: If for some reason you have problems interacting with quay.io, go to "https://quay.io/user/<YOUR USER>?tab=settings → Generate Encrypted Password", type your user password and follow the instructions for the option that best suits you.
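For instance, the generated encrypted password can then be used to log in to quay.io from the command line (user and password below are placeholders):

$ docker login quay.io -u <YOUR_USER> -p <ENCRYPTED_PASSWORD>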
Via the specs in the Database CR (deploy/crds/postgresql.dev4devs.com_databases_cr.yaml) you are able to customize the setup of this operator. Note that with the spec configMapName you can provide the name of a ConfigMap which holds the keys and values that the Database should use for its required environment variables.

If you provide only the name of the ConfigMap in configMapName, the operator will look for values stored under the same keys as the environment variables required by the database image for its version (databaseName, databasePassword, databaseUser). However, you can also customize the keys by using the optional specs configMapDatabaseName, configMapDatabasePassword and configMapDatabaseUser. In this way, the operator is able to look up values stored in a ConfigMap under keys which differ from the ones used to create the environment variables in the database deployment, as shown in the sketch below.
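As a minimal sketch, assuming you want custom keys, you could create a ConfigMap like the following (the ConfigMap name and key names are illustrative) and reference it from the CR:

# ConfigMap name and key names below are illustrative
$ kubectl create configmap my-db-config \
    --from-literal=my-db-name=example \
    --from-literal=my-db-password=postgres \
    --from-literal=my-db-user=postgres \
    -n postgresql-operator

In the Database CR you would then set configMapName: my-db-config, configMapDatabaseName: my-db-name, configMapDatabasePassword: my-db-password and configMapDatabaseUser: my-db-user.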
By using the command make install as-is, the default namespace postgresql-operator (defined in the Makefile) will be created and the operator installed in it. You are able to install the operator in another namespace if you wish; however, you need to set up its roles (RBAC) in order to apply them to the namespace where the operator will be installed. The namespace name needs to be changed in the Cluster Role Binding (/deploy/role_binding.yaml) file. Note that you also need to change the namespace in the Makefile in order to use the command make install with a different namespace.
# Replace this with the namespace where the operator will be deployed (optional).
namespace: postgresql-operator
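For example, assuming the subject namespace in /deploy/role_binding.yaml matches the snippet above, it can be updated in place with GNU sed (the target namespace my-namespace is illustrative):

$ sed -i 's/namespace: postgresql-operator/namespace: my-namespace/' deploy/role_binding.yaml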
The backup service is implemented using integr8ly/backup-container-image. It backs up the database so it can be restored later in case of failure. Follow the steps below to enable it.
- Set up an AWS S3 bucket in order to store the backups outside of the cluster. Add your AWS details to the Backup CR (deploy/crds/postgresql.dev4devs.com_v1alpha1_backup_cr.yaml) as follows, or provide the name of a secret which already holds this data in the cluster.

  # ---------------------------------
  # Stored Host - AWS
  # ---------------------------------
  awsS3BucketName: "example-awsS3BucketName"
  awsAccessKeyId: "example-awsAccessKeyId"
  awsSecretAccessKey: "example-awsSecretAccessKey"

  ❗ You can provide the name of a secret which has already been created in the cluster.
  ❗ You need to create the bucket yourself.

- Run the command make install-backup in the same namespace where the Database is installed in order to apply the CronJob which performs this process.
ℹ️ To install, you need to be logged in as a user with cluster privileges, such as the system:admin user.
To verify that the backup has been created successfully, run the following command in the namespace where the operator is installed.
$ kubectl get cronjob.batch/backup -n postgresql-operator
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
backup 0 * * * * False 0 13s 12m
To check the executed jobs, run the command kubectl get jobs -n postgresql-operator in the namespace where the operator is installed, as in the following example.
$ kubectl get jobs -n postgresql-operator
NAME DESIRED SUCCESSFUL AGE
backup-1561588320 1 0 6m
backup-1561588380 1 0 5m
backup-1561588440 1 0 4m
backup-1561588500 1 0 3m
ℹ️ In the above example the schedule was set to run the job every minute (*/1 * * * *).
To check the logs for troubleshooting, run the command kubectl logs $podName -f -n postgresql-operator in the namespace where the operator is installed, as in the following example.
$ kubectl logs job.batch/backup-1561589040 -f -n postgresql-operator
dumping postgresql
dumping postgres
==> Component data dump completed
/tmp/intly/archives/postgresql.postgresql-22_46_06.pg_dump.gz
WARNING: postgresql.postgresql-22_46_06.pg_dump.gz: Owner username not known. Storing UID=1001 instead.
upload: '/tmp/intly/archives/postgresql.postgresql-22_46_06.pg_dump.gz' -> 's3://camilabkp/backups/postgresql/postgres/2019/06/26/postgresql.postgresql-22_46_06.pg_dump.gz' [1 of 1]
1213 of 1213 100% in 1s 955.54 B/s done
ERROR: S3 error: 403 (RequestTimeTooSkewed): The difference between the request time and the current time is too large.
Follow the steps below to restore a database from a backup created by the backup service.
- Install the Database by following the steps in [Installing].
- Restore the database with the dump which was stored in the AWS S3 bucket.

ℹ️ To restore, run gunzip -c filename.gz | psql dbname, as in the sketch below.
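As a minimal sketch (bucket path, file name, pod name and database name below are illustrative), the dump can be fetched from the S3 bucket and restored inside the database pod:

# Fetch the dump from the S3 bucket (path and file name are illustrative)
$ s3cmd get s3://example-awsS3BucketName/backups/postgresql/postgres/2019/06/26/postgresql.postgresql-22_46_06.pg_dump.gz
# Copy the dump into the database pod
$ kubectl cp postgresql.postgresql-22_46_06.pg_dump.gz <database-pod>:/tmp/ -n postgresql-operator
# Restore the dump into the database
$ kubectl exec <database-pod> -n postgresql-operator -- bash -c "gunzip -c /tmp/postgresql.postgresql-22_46_06.pg_dump.gz | psql <database-name>"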
This operator is cluster-scoped. For further information, see the Operator Scope section in the Operator Framework documentation. Also, check its roles in the Deploy directory.
ℹ️ The operator and database will be installed in the namespace postgresql-operator, which will be created by this project.
| CustomResourceDefinition | Description |
| --- | --- |
| Database | Packages, manages, installs and configures the Database on the cluster. |
| Backup | Packages, manages, installs and configures the CronJob to do the backup using the image backup-container-image. |
| Resource | Description |
| --- | --- |
| Deployment | Defines the Deployment resource of the Database (e.g. container and resource definitions). |
| PersistentVolumeClaim | Defines the PersistentVolumeClaim resource used by the Database. |
| Service | Defines the Service resource of the Database. |
| Resource | Description |
| --- | --- |
| CronJob | Defines the CronJob resource in order to do the Backup. |
| Secret | Defines the database and AWS secret resources created. |
| Status | Description |
| --- | --- |
| databaseStatus | The expected value is OK, which means that all required objects were created. |
| deploymentStatus | Deployment status from the k8s API (appsv1.DeploymentStatus). |
| serviceStatus | Service status from the k8s API (v1core.ServiceStatus). |
| PersistentVolumeClaimStatus | PersistentVolumeClaim status from the k8s API (v1core.PersistentVolumeClaimStatus). |
| Status | Description |
| --- | --- |
| backupStatus | Should show OK when everything is created successfully. |
| cronJobName | Name of the CronJob resource created by the operator. |
| cronJobStatus | CronJob status from the k8s API (v1beta1.CronJobStatus). |
| dbSecretName | Name of the database secret resource created to allow the integr8ly/backup-container-image to connect to the database. |
| awsSecretName | Name of the AWS S3 bucket secret resource used to allow the integr8ly/backup-container-image to connect to AWS and send the backup. |
| awsCredentialsSecretNamespace | Namespace where the backup image will look for the AWS secret used. |
| encryptKeySecretName | Name of the EncryptKey secret used. |
| encryptKeySecretNamespace | Namespace where the backup image will look for the EncryptKey secret used. |
| hasEncryptionKey | Expected to be true when an EncryptKey secret was configured. |
| isDatabasePodFound | Expected to be true, which shows that the database pod was found. |
| isDatabaseServiceFound | Expected to be true, which shows that the database service was found. |
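Analogously to the Database check shown earlier, these status fields can be inspected with kubectl, for example:

$ kubectl describe Backup -n postgresql-operator | grep Status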
Run the following command to set up this project locally.
$ make setup
ℹ️ This project uses Go modules to manage its dependencies.
The following command installs the operator in the cluster and runs your local changes without the need to publish a dev tag. In this way, you can verify your code in the development environment.
$ make code-run-local
❗ The local changes are applied when the command operator-sdk up local --namespace=postgresql-operator is executed. It is not a hot deploy; to pick up the latest changes you need to re-run the command.
The following commands allow you to connect to the Database. You can run them through the OpenShift UI in the Database pod's terminal.
# Log in to Postgres
psql -U postgres
# Connect to the default database
\c <database-name>
# To list the tables
\dt
Follow the steps below to debug the project in some IDEs.
ℹ️ The code needs to be compiled/built first.
$ make setup-debug
$ cd cmd/manager/
$ dlv debug --headless --listen=:2345 --api-version=2
Then, debug the project from the IDE using the default setup of the Go Remote option.
$ make setup-debug
$ dlv --listen=:2345 --headless=true --api-version=2 exec ./build/_output/bin/postgresql-operator-local --
Debug the project using the following Visual Studio Code launch config.
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "test",
"type": "go",
"request": "launch",
"mode": "remote",
"remotePath": "${workspaceFolder}/cmd/manager/main.go",
"port": 2345,
"host": 1.0.0,
"program": "${workspaceFolder}",
"env": {},
"args": []
}
]
}
| Command | Description |
| --- | --- |
| make install | Creates the namespace and installs the operator and database. |
|  | Uninstalls the operator and DB. Deletes the namespace. |
| make install-backup | Installs the backup service in the operator's namespace. |
|  | Uninstalls the backup service from the operator's namespace. |
| make code-run-local | Runs the operator locally for development purposes. |
| make setup-debug | Sets up the environment for debugging purposes. |
|  | Examines source code and reports suspicious constructs using vet. |
|  | Formats code using gofmt. |
|  | Automatically generates/updates the files using the operator-sdk, based on the CR status and spec definitions. |
|  | Runs the dev commands to check, fix and generate/update the files. |
|  | Used by the CI to build the operator image from the master branch. |
|  | Used by the CI to push the master image. |
|  | Used by the CI to build the operator image from a tagged commit and add the <operator-version> and latest tags. |
|  | Used by the CI to push the tagged release image. |
|  | Runs the test suite. |
|  | Runs the coverage check. |
|  | Compiles the image for tests. |
|  | Runs the e2e tests locally (requires a cluster installed locally). |
ℹ️ The Makefile is implemented with tasks which you should use when working with the project.
Images are automatically built and pushed to our image repository in the following cases:
- For every change merged to master, a new image with the master tag is published.
- For every change merged that has a git tag, new images with the <operator-version> and latest tags are published.
If the image does not get built and pushed automatically, the job may be re-run manually via the CI dashboard.
Follow these steps:
- Create a new version tag following semver, for example 0.1.0.
- Bump the version in the version.go file.
- Update the CHANGELOG.MD with the new release.
- Create a git tag with the version value, for example:
  $ git tag -a 0.1.0 -m "version 0.1.0"
- Push the new tag to the upstream repository; this will trigger an automated release by the CI, for example:
  $ git push upstream 0.1.0
ℹ️ The image with the tag will be created and pushed to the postgresql-operator image hosting repository by the CI.
❗ Do not use letters in the tag, such as v. It will not work.
Follow the steps below.
- Generate the OLM files by running the following command, for example:
  operator-sdk olm-catalog gen-csv --csv-version 0.1.0 --update-crds
- Test the changes locally as described in Community Operators.
ℹ️ See the examples in /deploy/olm-catalog/olm-test which can be used to test it.
ℹ️ You can use the command operator-sdk scorecard to check it locally. Update the config file with the latest changes.
This operator was developed using the Kubernetes APIs in order to be compatible with OpenShift and Kubernetes.
All contributions are hugely appreciated. Please see our Contribution Guide for guidelines on how to open issues and pull requests. Please check out our Code of Conduct too.