- 1. End User Quick Start using ER-Demo Operator
- 2. ER-Demo Ansible Playbook Development
- 3. ER-Demo Operator Image Upgrade
- 4. Operator Development and Test
Please see the ER-Demo Install Guide.
This ER-Demo operator is based on an Ansible playbook that is included in this git project. For development purposes, the playbook can optionally be executed directly from this project (rather than having the ER-Demo operator execute it). Changes to the playbook can also be made here, and subsequent releases of this operator will incorporate them.
The following sections provide details on how to execute the included Ansible playbook to provision an ER-Demo environment:
To install the Emergency Response application, you will need the following tools on your local machine:
-   Unix flavor OS with a BASH shell, e.g. Fedora, RHEL, CentOS, Ubuntu, OSX
-   git
-   ansible: installation of the Emergency Response application is tested using the ansible-playbook utility from the ansible package of Fedora 31. Others in the community have also succeeded in installing the app using ansible on OSX.
The Emergency Response application makes use of a third-party SaaS API called MapBox. MapBox APIs provide the Emergency Response application with an optimized route for a responder to travel given pick-up and drop-off locations. To invoke its APIs, MapBox requires an access token. For normal use of the Emergency Response application, the free-tier account provides ample rate limits.
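As an illustration of the kind of request the app makes, and as a quick way to smoke-test a token once you have exported it as `map_token` (see the steps below), you can call the MapBox Directions v5 endpoint directly. The coordinates below are arbitrary example longitude,latitude pairs; this check is merely a suggestion and not part of the install:

```
# Request a driving route between two arbitrary points; a valid token
# returns JSON containing a "routes" array.
curl -s "https://api.mapbox.com/directions/v5/mapbox/driving/-97.75,30.27;-97.73,30.28?access_token=${map_token}"
```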
-   Using the oc utility on your local machine, ensure that you are authenticated to your OCP 4 environment as a cluster-admin. (A quick sanity-check snippet follows this list.)

-   OPTIONAL: Check out the latest tag of this project:

    ```
    git checkout 2.11
    ```

-   Set a shell environment variable with your MapBox token:

    ```
    # MapBox API token, see https://docs.mapbox.com/help/how-mapbox-works/access-tokens/
    export map_token=pk.egfdgewrthfdiamJyaWRERKLJWRIONE23czdXBrcW5mbmg0amkifQ.iBEb0APX1Vmo-2934rj
    ```
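Before kicking off the install, a quick sanity check of the prerequisites above can save a failed run. This snippet is a suggestion, not part of the playbook:

```
# Confirm the oc login and cluster-admin rights, and that the token variable is set.
oc whoami
oc auth can-i '*' '*' --all-namespaces   # should print "yes" for a cluster-admin
echo "${map_token:?map_token is not set}"
```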
You will now execute the Ansible playbook to install the Emergency Response application. Select one of the following two approaches: Pre-built Images or CI/CD.
Pre-built Images: This approach is best suited for those who simply want to use the Emergency Response app (e.g. for a customer demo). This installation approach does not use Jenkins pipelines; instead, the OpenShift deployments for each component of the Emergency Response application are started from pre-built Linux container images pulled from the corresponding public Quay image repositories. With this approach, the typical duration to provision the Emergency Response app is about 20 minutes. This is the default installation approach.
-   Kick off the Emergency Response app provisioning using pre-built container images from quay:

    ```
    $ ansible-playbook playbooks/install.yml -e map_token=$map_token
    ```
-   After about 20 minutes, you should see Ansible log messages similar to the following:

    ```
    PLAY RECAP ********************************************************************************
    localhost : ok=432 changed=240 unreachable=0 failed=0 skipped=253 rescued=0 ignored=0
    ```
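At this point the application components should be up. As a quick verification (a suggestion, assuming the default provisioning, which places the app in the user1-er-demo namespace as seen in the troubleshooting example later in this guide):

```
# All pods should eventually reach Running (or Completed for one-shot jobs).
oc get pods -n user1-er-demo
oc get routes -n user1-er-demo
```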
CI/CD: This approach is best suited for code contributors to the Emergency Response app. Individual Jenkins pipelines are provided for each component of the app; each pipeline builds from source, runs tests, creates a Linux container image, and deploys that image to OpenShift. The typical duration to build the Emergency Response application from source using these pipelines is about an hour. This approach is also valuable when the focus of a demo for a customer and/or partner is to introduce CI/CD best practices for a microservice-architected application deployed to OpenShift.
-   Kick off the Emergency Response app provisioning:

    ```
    $ ansible-playbook playbooks/install.yml \
        -e map_token=$map_token \
        -e deploy_from=source
    ```
-   After about an hour, the provisioning should be complete.
-   You can review any of the CI/CD pipelines that ran as part of this provisioning process:

    -   List all build pipelines:

        ```
        $ oc get build -n user1-er-tools | grep pipeline
        ...
        user1-incident-service-pipeline-1          JenkinsPipeline   Complete   2 hours ago
        user1-mission-service-pipeline-1           JenkinsPipeline   Complete   2 hours ago
        user1-responder-service-pipeline-1         JenkinsPipeline   Complete   2 hours ago
        user1-assignment-rules-model-pipeline-1    JenkinsPipeline   Complete   2 hours ago
        ```

    -   From that list, pick any of the builds to get the URL of the corresponding Jenkins pipeline:

        ```
        $ oc logs user2-incident-service-pipeline-1 -n user1-er-tools
        info: logs available at https://jenkins-user2-er-tools.apps.cluster-denver-8ab6.denver-8ab6.example.opentlc.com/blue/organizations/jenkins/user2-er-tools%2Fuser2-er-tools-user2-mission-service-pipeline/detail/user2-er-tools-user2-mission-service-pipeline/1/
        ```
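Alternatively, rather than fishing the console URL out of build logs, the Jenkins route can be queried directly. The route name `jenkins` is an assumption here; adjust it to whatever `oc get routes` shows in your tools namespace:

```
# Hypothetical shortcut: print the Jenkins console URL from its OpenShift route.
echo "https://$(oc get route jenkins -n user1-er-tools -o jsonpath='{.spec.host}')"
```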
Considering that the automated provisioning of ER-Demo consists of many hundreds of discrete steps, errors can occur from time to time.
The good news is that you have full cluster-admin access to your OpenShift cluster to troubleshoot; this is where your OpenShift skills will be of great assistance.
You can usually get a ballpark idea of where the provisioning failed by studying the Ansible output, e.g.:
```
TASK [../roles/openshift_process_service : deploy postgresql] ****************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": {"cmd": "oc create -f /tmp/process-service-4LN1V4/postgresql.yml -n user1-er-demo", "results": {}, "returncode": 1, "stderr": "Error from server (AlreadyExists): persistentvolumeclaims \"process-service-postgresql\" already exists\n", "stdout": "service/process-service-postgresql created\ndeploymentconfig.apps.openshift.io/process-service-postgresql created\n"}}

NO MORE HOSTS LEFT ***********************************************************************************************************************************************************************************************

PLAY RECAP *******************************************************************************************************************************************************************************************************
localhost : ok=16 changed=8 unreachable=0 failed=1 skipped=1 rescued=0 ignored=0
```
The above log points to a problem provisioning the PostgreSQL database for the process-service. For further troubleshooting assistance, feel free to reach out to the community at: emer-demo-team at redhat dot com .
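For the specific failure above, one possible remediation is sketched below; confirm that the leftover PVC is safe to delete before removing it:

```
# Remove the PVC left over from a previous run, then re-run only the affected
# per-service playbook (the per-service playbooks are described next).
oc delete pvc process-service-postgresql -n user1-er-demo
ansible-playbook playbooks/process_service.yml
```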
Also of good news: the Ansible itself is idempotent, so it is safe to re-run it more than once without introducing unintended consequences.
As a shortcut, you may not have to re-run the entire install playbook; playbooks are included for each of the services that make up the ER-Demo.
```
$ ls playbooks/
assignment_rules_model.yml      emergency_console.yml          incident_service.yml   kafka_lag_exporter.yml  monitoring.yml      responder_client_app.yml
assignment_rules.yml            erd_monitoring.yml             install.yml            kafka_topics.yml        nexus.yml           responder_service.yml
datagrid.yml                    find_service.yml               jenkins.yml            knative.yml             pgadmin4.yml        responder_simulator_service.yml
datawarehouse.yml               group_vars                     kafdrop.yml            library                 postgresql.yml      sso_realm.yml
disaster_service.yml            incident_priority_service.yml  kafka_cluster.yml      mission_service.yml     process_service.yml sso.yml
disaster_simulator_service.yml  incident_process.yml           kafka_connect.yml      module_utils            process_viewer.yml  strimzi_operator.yml
```
For example, to re-install just the process-service, the following could be executed:
```
$ ansible-playbook playbooks/process_service.yml -e ACTION=uninstall
$ ansible-playbook playbooks/process_service.yml
```
To uninstall:

```
$ ansible-playbook \
    playbooks/install.yml \
    -e ACTION=uninstall \
    -e uninstall_cluster_resources=true \
    -e uninstall_delete_project=true
```
The ER-Demo Ansible provisioning allows for more than one ER-Demo installation per OpenShift cluster. This becomes useful, for example, when using the ER-Demo as the basis of customer and/or partner workshops where each student is assigned their own demo environment.
It's often the case that a fresh OpenShift cluster can accommodate more than one ER-Demo environment; mileage will vary depending on the amount of available hardware resources. For example, the OCP4 for ER-Demo cluster available through RHPDS can typically run about 5 concurrent ER-Demo environments.
The Ansible provisioning segregates ER-Demo environments on the same OpenShift cluster by OpenShift user. This is done by passing the project_admin variable to each ansible-playbook command, as follows (a scripted multi-user sketch appears after these steps):
-   Set an environment variable that reflects the userId of your non-cluster-admin user, e.g.:

    ```
    OCP_USERNAME=user2
    ```
-   From the install/ansible directory, kick off the Emergency Response app provisioning:

    ```
    $ ansible-playbook playbooks/install.yml \
        -e map_token=$map_token \
        -e project_admin=$OCP_USERNAME
    ```
-   To uninstall:

    ```
    $ ansible-playbook playbooks/install.yml \
        -e ACTION=uninstall \
        -e project_admin=$OCP_USERNAME \
        -e uninstall_cluster_resources=true \
        -e uninstall_delete_project=true
    ```
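For a workshop, the per-user installs can be scripted. The sketch below assumes students user2 through user4 and the default pre-built-images approach:

```
# Provision one ER-Demo environment per student, sequentially.
for i in 2 3 4; do
  ansible-playbook playbooks/install.yml \
    -e map_token=$map_token \
    -e project_admin=user$i
done
```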
This section describes the procedure for updating the ER-Demo Operator container images using an updated ER-Demo ansible playbook.
There are 3 ER-Demo related container images that are updated during this process:
- quay.io/emergencyresponsedemo/erdemo-operator-bundle
- quay.io/emergencyresponsedemo/erdemo-operator-catalog
- quay.io/emergencyresponsedemo/erdemo-operator
-   Install the Ansible-based operator-sdk and check its version at the command line:

    ```
    $ operator-sdk version
    operator-sdk version: "v1.11.0", commit: "28dcd12a776d8a8ff597e1d8527b08792e7312fd", kubernetes version: "1.20.2", go version: "go1.16.7", GOOS: "linux", GOARCH: "amd64"
    ```
-   Install the opm utility and check its version at the command line:

    ```
    $ opm version
    Version: version.Version{OpmVersion:"v1.15.4-16-g06e950de", GitCommit:"06e950de5ebca66e493f6cd2414e73c8978090d3", BuildDate:"2021-07-30T00:38:34Z", GoOs:"linux", GoArch:"amd64"}
    ```
-   Install the podman and buildah utilities and check their versions:

    ```
    $ podman version
    Version:      3.2.3
    API Version:  3.2.3

    $ buildah version
    Version:         1.22.0
    Go Version:      go1.16.6
    Image Spec:      1.0.1-dev
    Runtime Spec:    1.0.2-dev
    CNI Spec:        0.4.0
    libcni Version:  v0.8.1
    image Version:   5.15.0
    ```
-   Clone this git project and change directories into it:

    ```
    $ git clone https://github.com/Emergency-Response-Demo/erdemo-operator.git \
        && cd erdemo-operator
    ```
-   Set environment variables called `VERSION` and `IMG`:

    ```
    $ export VERSION=2.12.0    # Version of the desired ER-Demo release; must follow the convention Major.Minor.Patch
    $ export IMG=quay.io/emergencyresponsedemo/erdemo-operator:$VERSION
    ```
-   Create the `erdemo-operator-bundle` container image:

    -   Make the bundle:

        ```
        $ make bundle
        ```

    -   Comment out the `replaces` attribute in `bundle/manifests/erdemo-operator.clusterserviceversion.yaml` (at about line 208, if it exists); a sed sketch for this edit appears after this step list:

        ```
        # replaces: erdemo-operator.v2.10.2
        ```

    -   Execute:

        ```
        $ make bundle-build
        $ podman tag localhost/erdemo-operator-bundle:$VERSION quay.io/emergencyresponsedemo/erdemo-operator-bundle:$VERSION
        $ podman push quay.io/emergencyresponsedemo/erdemo-operator-bundle:$VERSION
        ```
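The `replaces` edit in the step above can be scripted. A hypothetical sed one-liner, assuming the attribute is indented with two spaces in the CSV file (verify the result before building):

```
$ sed -i 's/^  replaces:/  # replaces:/' bundle/manifests/erdemo-operator.clusterserviceversion.yaml
```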
-   Create the `erdemo-operator-catalog` container image:

    ```
    $ opm index add -p podman --bundles quay.io/emergencyresponsedemo/erdemo-operator-bundle:$VERSION --tag quay.io/emergencyresponsedemo/erdemo-operator-catalog:$VERSION
    $ podman push quay.io/emergencyresponsedemo/erdemo-operator-catalog:$VERSION
    ```
-   Create the `erdemo-operator` container image:

    ```
    $ buildah bud -f Dockerfile -t quay.io/emergencyresponsedemo/erdemo-operator:$VERSION .
    $ podman push quay.io/emergencyresponsedemo/erdemo-operator:$VERSION
    ```
-   Move the `latest` tag of each of the three ER-Demo operator images in quay.io to this latest $VERSION, i.e. quay.io/emergencyresponsedemo/erdemo-operator:latest should point to quay.io/emergencyresponsedemo/erdemo-operator:$VERSION. A podman sketch follows.
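One way to repoint the latest tags with podman, assuming push access to the quay.io/emergencyresponsedemo organization:

```
# Re-tag and push "latest" for each of the three operator images.
for img in erdemo-operator-bundle erdemo-operator-catalog erdemo-operator; do
  podman pull quay.io/emergencyresponsedemo/$img:$VERSION
  podman tag quay.io/emergencyresponsedemo/$img:$VERSION quay.io/emergencyresponsedemo/$img:latest
  podman push quay.io/emergencyresponsedemo/$img:latest
done
```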
To install:

-   Clone this repository and `cd` into it:

    ```
    git clone https://github.com/Emergency-Response-Demo/erdemo-operator
    cd erdemo-operator
    ```

-   Ensure you're logged in with `oc` as a `cluster-admin`.

-   Run `hack/operate.sh` to install the `CustomResourceDefinition` and accompanying assets, and to create a `Deployment` for the operator. It will be created in the `erdemo-operator-system` namespace.

-   Create an `ErDemo` Custom Resource:

    ```
    oc apply -n erdemo-operator-system -f config/samples/apps_v1alpha1_erdemo.yaml
    ```

-   Watch the progress in the logs of the `erdemo-operator-controller-manager` pod.
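To follow the reconciliation from the command line (the `manager` container name is the operator-sdk default and is an assumption here):

```
# Stream the operator logs while the ErDemo Custom Resource is reconciled.
oc logs -f deployment/erdemo-operator-controller-manager -c manager -n erdemo-operator-system
```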