Table of Contents
- Development
- Services
- OGDS synchronization
- Updating translations
- Theme Development
- Updating the history file
- Updating API docs
- Versions
- Scripts
- Creating policies
- Tests
- Testserver
- Testing Inbound Mail
- Deployment
- Nightly Jobs
- API Error handling
To get a basic development installation, clone the repository, symlink the development buildout config file, create a virtualenv, install and run buildout:
$ git clone git@github.com:4teamwork/opengever.core.git
$ cd opengever.core
$ ln -s development.cfg buildout.cfg
$ virtualenv-2.7 .
$ bin/pip install zc.buildout==2.13.3
$ bin/buildout
Run the required services with Docker Compose:
$ docker compose up -d
Now you can run opengever.core in the foreground:
$ bin/instance fg
Python 2.7 and Docker are required for local development.
Building psycopg2, the PostgreSQL database adapter for Python, requires the PostgreSQL client library and development files. The Python ldap module requires the OpenLDAP 2.x client libraries and development files. Building Pillow requires at least the libjpeg and zlib libraries and development files.
LDAP and AD plugins get configured as usual, using an ldap_plugin.xml file in the profile of the respective policy package - with one exception: credentials for the LDAP service (bind DN and bind password) should not be checked in to the ldap_plugin.xml file. Instead they can be stored machine-wide in a file ~/.opengever/ldap/{hostname}.json, where {hostname} refers to the hostname of the LDAP server.
When an OpenGever client is then created using opengever.setup, the credentials are read from that file and configured for the LDAPUserFolder as well as the active LDAP connection.
E.g. to store the credentials for the 4teamwork LDAP server, create the file ~/.opengever/ldap/ldap.4teamwork.ch.json with these contents:

{
  "ldap": {
    "user": "<bind_dn>",
    "password": "<bind_pw>"
  }
}
<bind_dn> and <bind_pw> refer to the username and password for the respective user in our development LDAP tree.
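For illustration, the lookup boils down to reading the hostname-derived JSON file (a minimal sketch, not the actual opengever.setup code; the path and key layout follow the example above):

import json
import os


def load_ldap_credentials(hostname):
    """Load bind DN and bind password for the given LDAP hostname."""
    path = os.path.expanduser('~/.opengever/ldap/%s.json' % hostname)
    with open(path) as credentials_file:
        ldap = json.load(credentials_file)['ldap']
    return ldap['user'], ldap['password']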
For development, a local LDAP server that doesn't require a credentials file is used by default.
Solr is provided as a Docker image and started along with the other services using docker-compose.
For Solr we need to build multi-platform images, which we do using buildx. The first time, you will need to create a builder:
docker buildx create --name mybuilder --bootstrap --use
After that you can create a new image using
docker buildx build --platform linux/amd64,linux/arm64 -f ./docker/solr/Dockerfile -t 4teamwork/ogsolr:latest --push .
We also want to give that version a tag other than latest. For this you can use regctl (the binary can be downloaded from GitHub, see https://github.com/regclient/regclient/blob/main/docs/regctl.md):
regctl image copy 4teamwork/ogsolr:latest 4teamwork/ogsolr:8.11-2
The custom Solr update chain allows propagating document updates to another Solr. This can be enabled for specific portal types.
A StatelessScriptUpdateProcessor with the name sync.chain provides a JavaScript script that syncs the documents.
To activate the sync.chain, create a configoverlay.json file in the conf directory of the Solr core, or, if you are using Buildout, provide an overlay config using the overlayconfig option of ftw.recipe.solr. See https://github.com/4teamwork/ftw.recipe.solr#supported-options for more information.
In order for the StatelessScriptUpdateProcessor to work, add the following overlayconfig under the solr section in the buildout.cfg.
configoverlay =
    {
      "initParams": {
        "/update/**,/query,/select,/spell": {
          "name": "/update/**,/query,/select,/spell",
          "path": "/update/**,/query,/select,/spell",
          "defaults": {
            "update.chain": "sync.chain",
            "df": "SearchableText"
          }
        }
      }
    }
When the sync.chain UpdateRequestProcessorChain is activated, the remoteCoreURL and portalTypes options have to be set in the buildout.cfg. The portalTypes option is a comma-separated list of portal_types to sync.
This is done using the jvm-opts option:
[solr]
jvm-opts =
    -Xms512m
    -Xmx512m
    -Xss256k
    -DremoteCoreURL=http://localhost:8984/solr/ris
    -DportalTypes=opengever.document.document,opengever.dossier.businesscasedossier
Note the other options next to -DremoteCoreURL. These are options from https://github.com/4teamwork/ftw.recipe.solr#supported-options. All the defaults from the jvm-opts section have to be repeated here, because setting the option would otherwise override the recipe's defaults.
Because automated testing is hard, the tests have to be done manually. This section documents the steps required to do the test setup involving two Solr instances. The manual test will determine whether the relevant documents are propagated to a remote Solr.
- Install the RIS Solr from https://github.com/4teamwork/ris-solr#lokale-entwicklung
- Change the RIS Solr port to 8984 in the buildout.cfg:

[solr]
port = 8984

- Configure the GEVER Solr as documented under Activating Solr update chain
- Start GEVER, GEVER Solr and RIS Solr
- Go to http://localhost:8984/ and select the ris Solr core
- Make a query with q=*:* and no active filters
- As a result there should be no search results
- Go to http://localhost:8080/fd/ordnungssystem/fuehrung/kommunikation/allgemeines/dossier-1 and change the dossier title from Jahresdossier 2015 to Jahresdossier 2017
- Go back to the RIS Solr and make a query with q=Title:Jahresdossier 2017 and no active filters
- As a result the dossier with the title Jahresdossier 2017 should appear
- Go to http://localhost:8080/fd/ordnungssystem/fuehrung/kommunikation/allgemeines/dossier-1/document-1 and change the document title from Jahresdokument to Jahresdokument RIS
- Go back to the RIS Solr and make a query with q=Title:Jahresdokument RIS and no active filters
- As a result the document with the title Jahresdokument RIS should appear
- Go to http://localhost:8080/fd/ordnungssystem/fuehrung/gemeinderecht/dossier-16/task-1 and change the task title from Testaufgabe to Testaufgabe RIS
- Go back to the RIS Solr and make a query with q=Title:Testaufgabe RIS and no active filters
- As a result there should be no search results, since tasks are not among the synced portal types
- Go to http://localhost:8080/fd/ordnungssystem/fuehrung/kommunikation/allgemeines and create a new dossier with the title Testdossier RIS and select david.erni as responsible
- Go back to the RIS Solr and make a query with q=Title:Testdossier RIS and no active filters
- As a result the dossier with the title Testdossier RIS should appear
If you need a multi-admin environment, make sure the basic development dependencies above are satisfied and run the following steps:
Please note that the default database name for the multi-admin environment is opengever-multi-admin.
$ git clone git@github.com:4teamwork/opengever.core.git
$ cd opengever.core
$ ln -s development-multi-admin.cfg buildout.cfg
$ python bootstrap.py
$ bin/buildout
$ bin/start_all
Go to http://localhost:8080/manage_main and click on Install OneGov GEVER.
For the first admin-unit choose the following settings:
| Property | Value |
| --- | --- |
| Deployment profile | Finanzdirektion (FD) (DEV) |
| LDAP configuration profile | OneGovGEVER-Demo LDAP |
| Import users from LDAP into OGDS | True |
| Development mode | False |
| Purge SQL | True |
For the second admin-unit choose the following settings:
| Property | Value |
| --- | --- |
| Deployment profile | Ratskanzlei (RK) (DEV) |
| LDAP configuration profile | OneGovGEVER-Demo LDAP |
| Import users from LDAP into OGDS | False |
| Development mode | False |
| Purge SQL | False |
After installing both admin-units, you have to set a shared session secret to share login sessions between admin-units. To do this, perform the following steps for both admin-units:
- Go to {admin-unit}/acl_users/session/manage_secret
- Set a Shared secret
Then make sure you can log in without CAS by re-enabling LDAP as an authentication plugin:
- Go to {admin-unit}/acl_users/ldap/manage_activateInterfacesForm
- Make sure Authentication is enabled
It is also wise to change the CAS server URL. If you want to be able to use the gever-ui, you should set it to an empty string, otherwise the frontend will try to log in with CAS:
- Go to {admin-unit}/acl_users/cas_auth/manage_config
- Set CAS Server URL to an empty string
Lastly you have to change the admin-unit URLs in the database to localhost.
- Table: admin_units
- Properties: site_url and public_url
PostgreSQL example:

UPDATE admin_units
SET site_url = replace(site_url, 'https://dev.onegovgever.ch', 'http://localhost:8080'),
    public_url = replace(public_url, 'https://dev.onegovgever.ch', 'http://localhost:8080');
In preparation for dockerizing opengever.core, parts of the application are extracted into dockerized services.
Currently the following services are available as Docker images and are used for local development by default:
- msgconvert
- pdflatex
- Sablon
- Solr
- ogds (PostgreSQL server)
- ogds-sync
- ldap (OpenLDAP server)
To run these services, Docker is required. See Get Docker for how to install Docker on your local machine.
A Docker Compose file is provided in this repo to easily run the services.
To customize configuration settings of the Docker services, create a .env file and set the environment variables accordingly. A sample is provided in .env.sample.
To start the services simply run:
docker-compose up -d
opengever.core will use the services if the service URL is configured through environment variables. The development.cfg buildout configuration defines these variables by default:
MSGCONVERT_URL=http://localhost:8090/
SABLON_URL=http://localhost:8091/
PDFLATEX_URL=http://localhost:8092/
To disable the use of a service, simply remove the corresponding environment variable or set it to an empty value.
In addition to the services described above, a local Ianus service is also provided, but not started by default. In order to start it, run the ianus profile from the compose file:
docker-compose --profile ianus up
(Make sure you're also signed in to ghcr.io, otherwise Docker won't be able to pull the Ianus image.)
Next, create an admin user for the Ianus portal:
docker-compose exec -ti ianus-backend python manage.py createsuperuser
Your local portal's frontend should now be accessible at http://localhost:3000/portal
In order for GEVER to use the portal with CAS authentication, you also need to install and configure the CAS auth plugin.
Go to http://localhost:8080/fd/portal_setup/manage_main, switch to the Import tab, and import the profile with the title "opengever.setup: casauth ianus portal".
Then go to http://localhost:8080/fd/acl_users/cas_auth/manage_config to configure the plugin. Make sure that the CAS Server URL points to the local Ianus portal's CAS endpoint. That should be http://localhost:3000/portal/cas if you started the portal using the compose file.
In order for the portal to authenticate users for GEVER via CAS, we also need to register a Service in the portal. For local development, it's easiest to create a simple catch-all service:
- Go to http://localhost:3000/portal/admin/ and log in with the admin account.
- Create a new CAS Service:
  - Name: all
  - Pattern: .*
Now sign out of both the portal and GEVER.
When you now visit GEVER at http://localhost:8080/fd it should allow you to sign in via CAS using the portal.
Finally, you may want to create an App in Ianus (e.g. for development on the app switcher or InterGEVER):
- In the Ianus admin interface, add a new Application:
  - URL: http://localhost:8080/fd
  - Type: GEVER
  - Group: cn=og_demo-ftw_users,ou=dev,ou=groups,dc=dev,dc=onegovgever,dc=ch
For quick lookups of user information and metadata (that isn't relevant for security), we keep a mirrored list of users, groups, and group memberships in SQL tables in the OGDS.
Among other things, this list of users is used to determine which users are valid assignees for various objects: if a user was removed from the LDAP, they are still supposed to be a valid assignee for existing objects, but should not be suggested for selection on newly created objects.
Therefore, users that are already contained in the SQL tables but have disappeared from LDAP are not removed from SQL, but instead flagged as inactive upon synchronization.
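For illustration, the deactivation amounts to flagging rather than deleting (a sketch only, assuming a SQLAlchemy user model with userid and active columns; the real logic lives in the OGDS sync code):

def deactivate_vanished_users(session, user_model, ldap_userids):
    """Flag users missing from LDAP as inactive instead of deleting them."""
    for user in session.query(user_model):
        if user.userid not in ldap_userids:
            # Keep the row so existing assignments remain valid,
            # but stop suggesting the user for new objects.
            user.active = False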
There are several different ways to perform the OGDS synchronization:
- It can be triggered manually from the @@ogds-controlpanel (or by directly visiting the @@sync_users or @@sync_groups views)
- It will automatically be done when setting up a new AdminUnit
- It can be done from the shell by running the bin/instance sync_ogds zopectl command (the respective instance must not be running)
- For deployments, a cron job that calls bin/instance0 sync_ogds should be created that syncs OGDS as needed
Since the OGDS is shared between AdminUnits in the same cluster, the synchronization will only have to be performed on one Zope instance per cluster.
Updating translations can be done with the bin/i18n-build script. It will scan the entire opengever.core package for translation files that need updating, rebuild the respective .pot files and sync the .po files.
Usually you work on a specific package and want to rebuild only that package:
bin/i18n-build opengever.dossier
For building all packages, use the --all option:
bin/i18n-build --all
You will need the sass command for compiling SCSS to CSS. Start the bin/sass-watcher script and it will pick up changes based on filesystem events and compile the style files automatically for you.
There is a Gemfile to help keep SASS versions consistent across development environments. Please refer to http://bundler.io/ for more details.
The history file is generated automatically from files in the changes directory using towncrier when making a release with zest.releaser. For this you must have the zestreleaser.towncrier plugin installed.
To preview the generated history file you can run:
towncrier build --draft --version <version-number>
To add a changelog entry, create a file in the changes directory using the issue/ticket number as filename and add one of .feature, .bugfix, or .other as extension to signify the change type (e.g. 6968.feature).
The file should just contain the text describing your change, followed by your GitHub username in brackets. Example:
Fix critical bug. [Susanne]
In order to build the Sphinx API docs locally, use the provided bin/docs-build-public script:

bin/docs-build-public
This will build the docs (using the html target by default). If you'd like to build a different output format, supply it as the first argument to the script (e.g. bin/docs-build-public latexpdf).
If you made changes to any schema interfaces that need to make their way into the docs, you need to run the bin/instance dump_schemas script before running the docs-build-public script:
bin/instance dump_schemas
This will update the respective schema dumps in docs/schema-dumps/ that are then used by the docs-build-public script to render restructured text schema docs.
Versions are pinned in the file versions.cfg in the opengever.core package.
In order to add a new dependency or to update one or many dependencies, follow these steps:
- Append new and changed version pinnings at the end of the [versions] section in the versions.cfg in your local opengever.core checkout.
- Run bin/cleanup-versions-cfg, review and confirm the changes. This script removes duplicates and sorts the dependencies.
- Commit the changes to your branch and submit it along with other changes as a pull request.
For production deployments, the versions.cfg of a tag can be included with a raw GitHub URL in buildout like this:
[buildout]
extends =
https://raw.githubusercontent.com/4teamwork/opengever.core/2017.4.0/versions.cfg
Scripts are located in /scripts.
Repository configuration:
convert_csv_repository_to_xlsx.py (https://github.com/4teamwork/opengever.core/blob/master/scripts/convert_csv_repository_to_xlsx.py): Converts repository configuration from the old format (repository.csv) to the new format (xlsx).
You have to install openpyxl to run this script!
bin/zopepy scripts/convert_csv_repository_to_xlsx.py <path to repository csv file> <path for new xlsx file>
A script to semi-automatically create policies is provided as bin/create-policy. The script runs in interactive mode and generates policies based on the questions asked. Policies are stored in the source directory src.
Policy templates are available from the opengever.policytemplates package. At the time of writing there is only one policy template, for simple SaaS policies.
Once a new policy has been generated the following things need to be added manually:
- an initial repository (as excel file)
- initial template files, if required
- initial sablon templates, if required
- Some more complex configuration options like retention periods and multiple inboxes/template folders
The fixture objects can be accessed on test classes subclassing IntegrationTestCase with attribute access (self.dossier).
- self.administrator: nicole.kohler
- self.archivist: archivist
- self.committee_responsible: committee_responsible
- self.dossier_manager: dossier_manager
- self.dossier_responsible: robert.ziegler
- self.foreign_contributor: james.bond
- self.limited_admin: limited_admin
- self.manager: admin
- self.meeting_user: meeting_user
- self.member_admin: member_admin
- self.propertysheets_manager: propertysheets_manager
- self.reader_user: lucklicher.laser
- self.records_manager: records_manager
- self.regular_user: regular_user
- self.secretariat_user: jurgen.konig
- self.service_user: service_user
- self.test_user: test_user_1_
- self.webaction_manager: webaction_manager
- self.workspace_admin: fridolin.hugentobler
- self.workspace_guest: hans.peter
- self.workspace_member: beatrice.schrodinger
- self.workspace_owner: gunther.frohlich
- self.committee_container
- self.committee
- self.cancelled_meeting
- self.decided_meeting
- self.decided_proposal
- self.meeting
- self.period
- self.submitted_proposal
- self.committee_participant_1
- self.committee_participant_2
- self.committee_president
- self.empty_committee
- self.inactive_committee_participant
- self.inbox_container
- self.inbox
- self.inbox_document
- self.inbox_forwarding
- self.inbox_forwarding_document
- self.inbox_rk
- self.private_root
- self.private_folder
- self.private_dossier
- self.private_document
- self.private_mail
- self.repository_root
- self.branch_repofolder
- self.leaf_repofolder
- self.cancelled_meeting_dossier
- self.closed_meeting_dossier
- self.decided_meeting_dossier
- self.disposition
- self.disposition_with_sip
- self.dossier
- self.document
- self.draft_proposal
- self.inbox_task
- self.info_task
- self.mail_eml
- self.mail_msg
- self.private_task
- self.proposal
- self.proposaldocument
- self.removed_document
- self.ris_proposal
- self.sequential_task
- self.seq_subtask_1
- self.seq_subtask_2
- self.seq_subtask_3
- self.shadow_document
- self.subdossier
- self.empty_document
- self.subdocument
- self.subsubdossier
- self.subsubdocument
- self.subdossier2
- self.task
- self.subtask
- self.taskdocument
- self.empty_dossier
- self.expired_dossier
- self.expired_document
- self.expired_task
- self.inactive_dossier
- self.inactive_document
- self.inactive_task
- self.meeting_dossier
- self.meeting_document
- self.meeting_task
- self.meeting_subtask
- self.offered_dossier_for_sip
- self.offered_dossier_to_archive
- self.offered_document_1
- self.offered_document_2
- self.offered_dossier_to_destroy
- self.protected_dossier
- self.protected_document
- self.protected_dossier_with_task
- self.protected_document_with_task
- self.task_in_protected_dossier
- self.resolvable_dossier
- self.resolvable_subdossier
- self.resolvable_document
- self.empty_repofolder
- self.inactive_repofolder
- self.templates
- self.ad_hoc_agenda_item_template
- self.asset_template
- self.docprops_template
- self.dossiertemplate
- self.dossiertemplatedocument
- self.subdossiertemplate
- self.subdossiertemplatedocument
- self.empty_template
- self.meeting_template
- self.paragraph_template
- self.normal_template
- self.proposal_template
- self.recurring_agenda_item_template
- self.sablon_template
- self.subtemplates
- self.subtemplate
- self.tasktemplatefolder
- self.tasktemplate
- self.workspace_root
- self.workspace
- self.todo
- self.todolist_general
- self.todolist_introduction
- self.assigned_todo
- self.completed_todo
- self.workspace_document
- self.workspace_folder
- self.workspace_folder_document
- self.workspace_mail
- self.workspace_meeting
- self.workspace_meeting_agenda_item
- self.committee_id: 1
- self.empty_committee_id: 2
Use bin/mtest for running all tests in multiple processes. Alternatively, bin/test runs the tests in sequence.
The multi-process script distributes the packages (e.g. opengever.task, opengever.base, etc.) into multiple processes, trying to balance the number of test suites, so that the test run is sped up.
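Conceptually, this distribution resembles a greedy balancing of suite counts across processes (a sketch of the idea only, not the actual bin/mtest implementation):

import heapq


def distribute(packages, processors=4):
    """Assign (package, suite_count) pairs to the least-loaded process."""
    heap = [(0, index, []) for index in range(processors)]
    heapq.heapify(heap)
    # Place the largest packages first, always on the least-loaded process.
    for name, suite_count in sorted(packages, key=lambda item: -item[1]):
        load, index, bucket = heapq.heappop(heap)
        bucket.append(name)
        heapq.heappush(heap, (load + suite_count, index, bucket))
    return [bucket for _, _, bucket in heap]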
The bin/mtest script can be configured with environment variables:
- MTEST_PROCESSORS - The number of processes used in parallel. It should be no greater than the number of available CPU cores. Defaults to 4.
We are shifting the tests from the older functional testing layer to the newer integration testing layer.
Integration testing:
- Should be used for new tests!
- Comes with a preinstalled testing fixture.
- Transactions are disabled for isolation purposes: transaction.commit is not allowed in tests.
- Uses ftw.testbrowser's TraversalDriver.
Functional testing:
- Should not be used for new tests, when possible.
- Is factory-based, using ftw.builder.
- Uses transactions.
- Limited / slow database isolation: a fresh setup is necessary for each test.
from ftw.testbrowser import browsing
from ftw.testbrowser.pages import statusmessages
from opengever.testing import IntegrationTestCase


class TestExampleView(IntegrationTestCase):

    @browsing
    def test_example_view(self, browser):
        self.login(self.dossier_responsible, browser)
        browser.open(self.dossier, view='example_view')
        statusmessages.assert_no_error_messages()
These best practices apply to the new integration testing layer.
Committing the transaction will break isolation. The testing layer will prevent you from interacting with the transaction.
The testing fixtures create content objects, users, groups and client configurations (admin units, org units) which are available for all tests. They can and should be modified to the needs of the test.
Creating objects with ftw.builder or with ftw.testbrowser is expensive, because it takes a moment to index the object. Therefore we want to avoid creating unnecessary objects within the tests, so that the tests are faster overall.
Tests whose job is to test object creation (e.g. through the browser) obviously need to actually create an object; all other tests should try to reuse objects from the fixture and modify them as needed.
The fixture provides a set of standard users which should be used in tests. Do not use plone.app.testing's test user with global roles, as it does not properly reflect how the security model of GEVER works. In order to test features which can only be executed by the system or by a Manager user, plone.app.testing's site owner may be used.
Integration tests start with no user logged in. The first thing each test should do is log in the user with the fewest privileges required for the task under test.
The login command should not be moved to the setUp method; it should be clearly visible at the beginning of each test, so that a reader has the necessary context without scrolling to the top of the file.
When authenticated preparations are required in the setUp method, use self.login as a context manager in order to clean up the authentication on exit, so that the tests still start anonymously.
from opengever.testing import IntegrationTestCase
from ftw.testbrowser import browsing


class TestExampleView(IntegrationTestCase):

    def setUp(self):
        super(TestExampleView, self).setUp()
        with self.login(self.administrator):
            self.dossier.prepare_for_test()

    def test_server_side(self):
        self.login(self.dossier_responsible)
        self.assertTrue(self.dossier.can_do_important_things())

    @browsing
    def test_client_side_with_browser(self, browser):
        self.login(self.regular_user, browser)
        browser.open(self.dossier)
        browser.click_on('Do important things')
The statement self.assertIn('The label', browser.contents) will print the complete HTML document as the failure message. This is distracting and not useful at all.
Instead, select specific nodes and do assertions on those nodes, e.g.:
from opengever.testing import IntegrationTestCase
from ftw.testbrowser import browsing


class TestExampleView(IntegrationTestCase):

    @browsing
    def test_label(self, browser):
        self.assertEquals('The label',
                          browser.css('label.foo').first.text)
This way the browser can help with a nice error message when the node was not found:
NoElementFound: Empty result set: browser.css("label.foo") did not match any nodes.
When the view does not return a complete HTML document but, for example, only a status (OK), or it is some kind of API endpoint, browser.contents may be asserted.
Do not tear down changes which are taken care of by some kind of isolation:
- Do not tear down ZODB changes: the ZODB is isolated by plone.app.testing.
- Do not tear down SQL changes: we take care of that in the SQL testing layer with savepoints / rollbacks.
- Do not tear down component registry changes (e.g. new adapters, utilities, event handlers), as this is taken care of by the COMPONENT_REGISTRY_ISOLATION layer.
- Do tear down modifications in environment variables (os.environ), as shown in the sketch below.
- Do tear down modifications stored in module globals (e.g. transmogrifier sections).
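A minimal sketch of the environment variable case (the variable name is made up for illustration):

import os

from opengever.testing import IntegrationTestCase


class TestWithModifiedEnvironment(IntegrationTestCase):

    def setUp(self):
        super(TestWithModifiedEnvironment, self).setUp()
        self.original = os.environ.get('EXAMPLE_FLAG')
        os.environ['EXAMPLE_FLAG'] = 'true'

    def tearDown(self):
        # Restore the environment so later tests start from a clean state.
        if self.original is None:
            os.environ.pop('EXAMPLE_FLAG', None)
        else:
            os.environ['EXAMPLE_FLAG'] = self.original
        super(TestWithModifiedEnvironment, self).tearDown()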
When your test expects a specific state in order to work properly, this state should be ensured by using guard assertions.
def test_closing_dossier(self):
    self.assertTrue(self.dossier.is_open(),
                    'Precondition: assumed dossier to be open')

    self.dossier.close()

    self.assertFalse(self.dossier.is_open())
If self.dossier is changed to no longer be open by default, the failure should tell us that a precondition was no longer met, rather than implying that the close() method is broken.
The statement also acts as a "given" statement: a reader can easily figure out what the precondition is, because it is visually separated.
Alternatively a precondition can be ensured by setting the state of the object:
def test_title_is_journalized_on_action(self):
    self.dossier.title = u'The dossier'

    action(self.dossier)

    self.assertEquals(u'The dossier',
                      last_journal_entry(self.dossier).title)
Feature flags can be activated test-case-wide by setting a tuple of all required flags:
class TestDossierTemplate(IntegrationTestCase):

    features = ('dossiertemplate',)
When a feature should not be activated test-case-wide it can be activated within a single test:
class TestTemplates(IntegrationTestCase):

    def test_adding_dossier_template(self):
        self.activate_feature('meeting')
See the list of feature flags.
When developing opengever.core, a developer often runs a single test module, for instance with bin/test -m opengever.dossier.tests.test_activate. This will set up a complete fixture each time.
In order to speed up the feedback loop when developing,
we try to cache the database after setting up the fixture.
This will speed up the test runs, but it also makes the result inaccurate:
if the cachekeys do not detect a relevant change, we may not realize
that something breaks.
Because the results are not accurate, this feature is considered experimental and is therefore disabled by default.
You can enable the feature by setting an environment variable:
GEVER_CACHE_TEST_DB=true bin/test -m opengever.dossier.tests.test_activate
For convenience, there is also a binary which enables the cache for a single run:
bin/test-cached -m opengever.dossier.tests.test_activate
You can manually remove / rebuild the caches:
./bin/remove-test-cache
This feature is disabled on the CI server.
When the environment variable GEVER_CACHE_VERBOSE is set to true, a list of modified files will be printed whenever a cachekey is invalidated. This can be useful to debug problems with the fixture cache:
GEVER_CACHE_VERBOSE=true bin/test-cached -m opengever.dossier.tests.test_activate
This project uses the ftw.builder package, based on the Builder pattern, to create test data. The opengever-specific builders are located in opengever.testing.
To use the Builder API you need to import the Builder function:
from ftw.builder import Builder
from ftw.builder import create
Then you can use the Builder function in your test cases:
dossier = create(Builder("dossier"))
task = create(Builder("task").within(dossier))
document = create(Builder("document")
                  .within(dossier)
                  .attach_file_containing("test_data"))
Note that when using the OPENGEVER_FUNCTIONAL_TESTING layer, the Builder will automatically do a transaction.commit() when create() is called.
opengever.core has some unit tests (without a testing layer) and some mock test cases (usually with the COMPONENT_UNIT_TESTING testing layer).
When writing unit tests (with no layer), the developer must take into account that there is no isolation at all. The developer must make sure that neither the test nor any component used in the test leaks, or isolation must be ensured manually. The developer should also take into account that components under test (or their dependencies) may be changed in the future.
By leaking we mean any kind of change made outside of the test scope. This includes registering components (adapters, utilities), changing globals (setSite, registering transmogrifier blueprints, environment variables) or any other action that can influence other components later.
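For a bare unit test that registers global components, manual isolation could look like this (a sketch; zope.component.testing's tearDown clears global component registrations):

import unittest

from zope.component import getUtility
from zope.component import provideUtility
from zope.component.testing import tearDown as cleanup_components
from zope.interface import Interface


class IGreeter(Interface):
    """Example marker interface for the utility under test."""


class TestWithManualIsolation(unittest.TestCase):

    def tearDown(self):
        # Undo the global registration so later tests are not influenced.
        cleanup_components()

    def test_utility_lookup(self):
        provideUtility(u'hello', provides=IGreeter)
        self.assertEqual(u'hello', getUtility(IGreeter))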
If a developer cannot guarantee that the test is not leaking, he/she shall not write a unit test, but use at least the COMPONENT_UNIT_TESTING layer or write an integration test.
The COMPONENT_UNIT_TESTING layer provides a minimal isolation of Zope 3 components (adapters, utilities) and registers basic adapters such as annotations.
When using mock test cases, which we discourage in general, always import the MockTestCase from ftw.testing in order to be compatible with COMPONENT_UNIT_TESTING.
GEVER provides a testserver which sets up a GEVER in testing mode with a real HTTP server so that other applications and components can be tested. The testserver installs the standard GEVER testing fixture. By telling the server when to setup and teardown for each test it makes sure that the database is isolated and rolled back properly for each test.
In order to run the testserver, a local Development installation is required.
Once installed properly, the server can be started with bin/testserver:

./bin/testserver -v
Plone: http://localhost:55001/plone
XMLRPC: http://localhost:55002
...
18:13:39 [ ready ] Started Zope 2 server
Use the -v flag in order to make errors and exceptions appear on stderr.
Next you need to tell the testserver that you will now run a test:
./bin/testserverctl zodb_setup
Then you can make requests to http://localhost:55001/plone and use all the content and users generated by the fixture. It will be exactly the same on each run. The administrator login is admin with password secret.
Once your test is finished you should tear down and re-setup for the next test in order to isolate the database properly:
./bin/testserverctl zodb_teardown
./bin/testserverctl zodb_setup
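When driving the testserver from Python, the setup/teardown pair maps naturally onto a small wrapper (a sketch only, assuming the testserver runs from the current working directory with its default ports):

import subprocess


def run_isolated(test_callable):
    """Run a single test between zodb_setup and zodb_teardown."""
    subprocess.check_call(['./bin/testserverctl', 'zodb_setup'])
    try:
        test_callable()
    finally:
        subprocess.check_call(['./bin/testserverctl', 'zodb_teardown'])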
The testserver sets up a service.user which has a REST API service key and is allowed to impersonate other users. This is important for testing applications which use the REST API. The service key can be downloaded here: https://github.com/4teamwork/opengever.core/blob/master/opengever/testing/assets/service_user_generic.private.json
The ports used by the testserver can easily be changed through environment variables:
- ZSERVER_PORT - the port of the GEVER HTTP server (default: 55001)
- TESTSERVER_CTL_PORT - the port of the XMLRPC control server (default: 55002)
- SOLR_PORT - the port of the Solr server which is controlled by the testserver (default: 55003)
- TESTSERVER_REUSE_RUNNING_SOLR - reuse the Solr on the given port (default: None)
A custom fixture can be loaded in the testserver. This is helpful when other projects are testing GEVER integration and need specific content. The custom fixture can be defined with an environment variable:
FIXTURE=~/projects/myproject/gever/fixture.py ./bin/testserver
The fixture will be loaded into the testserver process with the dotted name customfixture.fixture; the package name is always customfixture. It is possible to import local files of this folder with import .otherfile.
Example fixture:
from opengever.testing.fixtures import OpengeverContentFixture


class Fixture(OpengeverContentFixture):

    def __init__(self):
        super(Fixture, self).__init__()
        with self.freeze_at_hour(20):
            self.create_my_custom_content()
The fixture class name defaults to Fixture and can be changed with the environment variable FIXTURE_CLASS.
When developing third party applications, it is best practice to use a tape recording system. In local development, a real testserver should be started and tapes of its responses should be recorded. Those tapes should be committed to GIT so that no GEVER needs to be installed when running the tests on the CI - it will simply pull the tapes.
Whenever the application needs to support a new version of GEVER, a developer records all tapes when running a new version of the testserver, so that compatibility with the new version can be proven.
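With vcrpy, for example (one of several tape recorders; shown purely as an illustration of the pattern, with a made-up cassette path), a recorded test against the testserver could look like this:

import requests
import vcr


@vcr.use_cassette('tapes/list-plone-root.yaml')
def test_plone_root_is_reachable():
    # Recorded once against a running testserver, then replayed on CI.
    response = requests.get('http://localhost:55001/plone',
                            headers={'Accept': 'application/json'})
    assert response.status_code == 200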
The connection from the testserverctl to the XMLRPC server can be tested with bin/testserverctl connectiontest. This will result in a "Connection refused" error as long as the testserver is starting, and will do nothing when the server is ready for the first isolate or zodb_setup. This can be used as a Docker healthcheck.
You can run the testserver with docker-compose: docker-compose up testserver. See the testserver docker readme.
If you are using the testserver in another project and want to have a docker-compose file there, see the docker-compose.testserver.yml file for a minimal working example. It contains a commented example of how to insert your custom fixture as a volume.
For easy testing of inbound mail (without actually going through an MTA) there's a script bin/test-inbound-mail that can be used to test the creation of inbound mail:
cat testmail.eml | bin/test-inbound-mail
The script assumes you have an instance running on port ${instance:http-address}, a GEVER client called fd and an omelette with ftw.mail installed in it. It will then feed the mail from stdin to the ftw.mail inbound view, like Postfix would.
The following section describes some aspects of deploying OneGov GEVER. If you need an example of a simple deployment profile have a look at the examplecontent profiles, see: https://github.com/4teamwork/opengever.core/tree/master/opengever/examplecontent.
The manage_main view of the Zope app contains an additional button "Install OneGov GEVER" to add a new deployment. It leads to the setup wizard where a deployment profile and an LDAP configuration profile can be selected.
The setup wizard can be configured with the following environment variable:
- IS_DEVELOPMENT_MODE - If set, pre-selects the following options in the setup wizard: Import of LDAP users, Development Mode and Purge SQL. Currently these are all the available options.
Deployment profiles can be selected in the setup wizard. They are used to link a Plone site with its corresponding AdminUnit and they usually include a policy profile, additional init profiles and further Plone site configuration options. Deployment profiles are configured in ZCML:
<configure
xmlns="http://namespaces.zope.org/zope"
xmlns:opengever="http://namespaces.zope.org/opengever"
i18n_domain="my.package">
<opengever:registerDeployment
title="Development with examplecontent"
policy_profile="opengever.examplecontent:default"
additional_profiles="opengever.setup:repository_root,
opengever.setup:default_content,
opengever.examplecontent:init"
admin_unit_id="admin1"
/>
</configure>
See https://github.com/4teamwork/opengever.core/blob/master/opengever/setup/meta.py for a list of all possible options.
LDAP profiles can be selected in the setup wizard. They are used to install an LDAP configuration profile. LDAP profiles are configured in ZCML:
<configure
xmlns="http://namespaces.zope.org/zope"
xmlns:opengever="http://namespaces.zope.org/opengever"
i18n_domain="my.package">
<opengever:registerLDAP
title="4teamwork LDAP"
ldap_profile="opengever.examplecontent:4teamwork-ldap"
/>
</configure>
See https://github.com/4teamwork/opengever.core/blob/master/opengever/setup/meta.py for a list of all possible options.
For policyless deployments, the Plone site can be created with a stock profile; most settings and content will be set up in a second step, via the import of a Bundle with a configuration.json.
Select "Policyless Deployment" and "Policyless LDAP" on the setup screen to create a minimal policyless Plone site. OGDS sync will not be performed yet during Plone site creation, since LDAP settings will be imported later.
Then, using the @@import-bundle view, import a Bundle containing the appropriate content as well as a configuration.json.
Example for a configuration.json:
{
"units": {
"admin_units": [
{
"unit_id": "musterstadt",
"title": "Musterstadt",
"ip_address": "127.0.0.1",
"site_url": "http://localhost:8080/ogsite",
"public_url": "http://localhost:8080/ogsite",
"abbreviation": "MS"
}
],
"org_units": [
{
"unit_id": "musterstadt",
"title": "Musterstadt",
"admin_unit_id": "musterstadt",
"users_group_name": "users_group",
"inbox_group_name": "inbox_group"
}
]
},
"registry": {
"opengever.workspace.interfaces.IWorkspaceSettings.is_feature_enabled": true
}
}
Because no LDAP plugin is installed in policyless deployments, the ogds-sync service is required to get a functioning OGDS. For local development, the ogds-sync service can be installed as follows:
cd ~/src
git clone git@github.com:4teamwork/ogds-sync.git
cd ogds-sync
cp .env.sample .env
vim .env
You can use the following as a base for an .env file for local development (substituting LDAP_BIND_PASSWORD from your local ~/.opengever/ldap/ldap.4teamwork.ch.json):
OGDS_DSN=postgresql://<your-macos-username>@host.docker.internal/<ogds-db-name>
LDAP_PROFILE=DS389
LDAP_URL=ldaps://ldap.4teamwork.ch
LDAP_BIND_DN=cn=OGAdmin,ou=OneGovGEVER,dc=4teamwork,dc=ch
LDAP_BIND_PASSWORD='REPLACEME'
LDAP_USER_BASE_DN=ou=Users,ou=Dev,ou=OneGovGEVER,dc=4teamwork,dc=ch
LDAP_GROUP_BASE_DN=ou=Groups,ou=Dev,ou=OneGovGEVER,dc=4teamwork,dc=ch
After a policyless GEVER Plone site has been set up, and the SQL tables created, you can perform the initial sync:
docker compose run --rm ogds-sync ogds-sync
And then start the ogds-sync service:
docker compose up
In order for authentication to work on a local policyless deployment, you need to configure a CAS portal. You can run one locally, or just configure the DEV one:
- Go to http://localhost:8080/ogsite/acl_users/cas_auth/manage_config
- Set the CAS Server URL to https://dev.onegovgever.ch/portal/cas
This plugin serves as a replacement for the LDAP/AD PAS plugins to enumerate users and groups from OGDS instead of LDAP. Because it's still experimental, it's not installed by default, except for policyless deployments. In order to install it, and have it function as intended, the following needs to be done:
- Make sure a plugin is present that can perform authentication (e.g. cas_auth)
- Add an instance of "OGDS Authentication Plugin" in the ZMI
- In the "Cache" tab of the plugin, associate it with "RAMCache"
- In the "Activate" tab of the plugin, enable all its capabilities
- Move the OGDS plugin to the top of the list of properties plugins (acl_users -> plugins -> Properties Plugins -> move ogds_auth to the top)
- Disable all of the LDAP plugin's capabilities
The plugin does not perform authentication itself. It therefore requires another IAuthenticationPlugin to be present, activated, and capable of authenticating users for the given deployment.
For programmatic installation during setup, the install_ogds_auth_plugin helper function in opengever.ogds.auth.plugin may be used to perform the steps listed above.
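Called from a custom setuphandler, this could look roughly as follows (a sketch; it assumes the helper accepts the Plone site, so check the actual signature in opengever.ogds.auth.plugin):

from opengever.ogds.auth.plugin import install_ogds_auth_plugin


def post_install(site):
    # Hypothetical setuphandler performing the manual ZMI steps above.
    install_ogds_auth_plugin(site)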
Opengever defines four additional Generic Setup setuphandlers to create initial AdminUnit and OrgUnit OGDS entries, create initial documents/document templates, configure local roles and create an initial repository. Of course ftw.inflator content creation is available as well; for details see https://github.com/4teamwork/ftw.inflator.
Add a unit_creation folder to your Generic Setup profile. To that folder, add the files admin_units.json and/or org_units.json. The content is created when the Generic Setup profile is applied. Note also that this content is created before ftw.inflator content and before all the other custom GEVER content creation handlers.
AdminUnit example:
[
{
"unit_id": "admin1",
"title": "Admin Unit 1",
"ip_address": "127.0.0.1",
"site_url": "http://localhost:8080/admin1",
"public_url": "http://localhost:8080/admin1",
"abbreviation": "A1"
}
]
OrgUnit example:
[
{
"unit_id": "org1",
"title": "Org Unit 1",
"admin_unit_id": "admin1",
"users_group_id": "og_demo-ftw_users",
"inbox_group_id": "og_demo-ftw_users"
}
]
GEVER repositories are initialized from an Excel file. To add an initial repository setup, add a folder opengever_repositories to your Generic Setup profile. Each *.xlsx file in that folder will then be processed; the filename will serve as the ID for the repository root. See ordnungssystem.xlsx for an example. Note that this setuphandler is called after ftw.inflator but before custom GEVER content.
Documents and document templates are created with a customized ftw.inflator pipeline, since they need special handling to get correct initial file versions. Thus documents should never be created with plain ftw.inflator but always with our customized pipeline. Since the custom pipeline is based on ftw.inflator, we suggest creating all GEVER content with this new pipeline.
To create content, add an opengever_content folder to your Generic Setup profile. All JSON files in this folder are then processed, similar to ftw.inflator. Note that this setuphandler is called after ftw.inflator.
To decouple local role assignment from content creation, opengever introduces a separate setuphandler to configure local roles. To configure local roles, add a local_role_configuration folder to your Generic Setup profile. All JSON files in that folder are then processed. Note that this setuphandler is called after ftw.inflator.
Example configuration:
[
{
"_path": "ordnungssystem",
"_ac_local_roles": {
"og_demo-ftw_users": [
"Contributor",
"Editor",
"Reader"
]
}
}
]
GEVER offers a whole infrastructure to execute certain jobs overnight, to avoid excessive load on the instances during working hours. Nightly jobs are executed via a cronjob calling the NightlyJobRunner, which will try to execute all jobs provided by the registered nightly job providers (named multi-adapters of INightlyJobProvider). The nightly-jobs-stats view provides information about the status of the nightly job queue.
We offer a high level API to create nightly maintenance jobs for reindexing operations, which can be used in upgrade steps:
query = {'object_provides': IDexterityContent.__identifier__}

with NightlyIndexer(idxs=["sortable_reference"],
                    index_in_solr_only=True) as indexer:
    for obj in self.objects(query, 'Index sortable_reference in Solr'):
        indexer.add_by_obj(obj)
This will register the corresponding jobs with the NightlyMaintenanceJobsProvider.
Errors, especially client errors, are a normal part of the API. ZPublisher's HTTPResponse will set the proper error codes, while plone.rest will serialize these errors back to the client. This allows simply raising errors such as BadRequest in the API services; the rest will happen automatically. This does not prevent the error from being raised and therefore handled by ftw.raven and logged to Sentry. Specific exceptions that we know will happen in normal GEVER operations should not be reported to Sentry, which can easily be achieved by raising an exception inheriting from NotReportedException.
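As an illustration, a REST API service can rely on this behavior (a sketch using plone.restapi's Service base class; the service and parameter are made up):

from plone.restapi.services import Service
from zExceptions import BadRequest


class ExampleService(Service):
    """Hypothetical GET endpoint used for illustration."""

    def reply(self):
        if not self.request.form.get('required_param'):
            # Serialized back to the client as a proper 400 response.
            raise BadRequest('Missing required_param')
        return {'status': 'ok'}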