CEPH-CI is a framework tightly coupled with CentralCI and Red Hat builds, used for testing Ceph downstream builds through CentralCI and Jenkins.
It uses a modified version of Mita to create and destroy Ceph resources dynamically.
Requirements:
- Python 3.7
It is recommended that you use a Python virtual environment to install the necessary dependencies and run cephci.
- Set up a Python 3.7 virtual environment:
python3.7 -m venv <path/to/venv>
source <path/to/venv>/bin/activate
- Install requirements with
pip install -r requirements.txt
Configure your cephci.yaml file:
This file holds configuration for a number of things within cephci.
The template can be found at the top level of the repository as cephci.yaml.template.
The required keys are listed in the template; the values are placeholders and should be replaced with legitimate values.
Values for Report Portal or Polarion are only required if you plan to post results to that particular service.
Copy the template to your home directory and edit it there with the proper values.
cp cephci.yaml.template ~/.cephci.yaml
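A minimal sketch of what the edited file might look like is shown below. The key names used here (email, polarion, report-portal) are illustrative assumptions only; treat cephci.yaml.template itself as the authoritative list of keys.
email:
  address: you@example.com            # address that receives result emails
polarion:                             # only needed if posting results to Polarion
  url: https://polarion.example.com
  username: your-user
  password: your-password
report-portal:                        # only needed if posting results to Report Portal
  endpoint: https://reportportal.example.com
  project: your-project
  token: your-token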
CentralCI auth files are kept in the osp directory.
The osp-cred-ci-2.yaml file holds the OpenStack credential details used to create and destroy resources.
For local cephci runs, replace the username, password, domain, and tenant-domain-id values with your own OpenStack credentials.
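As a rough illustration, the credential keys called out above might sit in the file like this. Only username, password, domain, and tenant-domain-id come from the description above; the remaining keys and the exact nesting are assumptions, so use the real osp-cred-ci-2.yaml as the reference.
globals:
  openstack-credentials:
    username: your-openstack-user
    password: your-openstack-password
    auth-url: https://openstack.example.com:13000
    tenant-name: your-tenant
    domain: your-domain
    tenant-domain-id: your-tenant-domain-id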
Cluster configuration files are kept in a directory under conf for each Ceph version.
For jewel, configs are under conf/jewel.
For luminous, configs are under conf/luminous.
For nautilus, configs are under conf/nautilus.
The conf files describe the test bed configuration. The image-name inside globals: defines which image is used to clone the ceph nodes (mon, osd, mds, etc.). The role maps to the Ceph role that the node will take, and osd nodes generally attach three additional volumes with the disk-size specified in the config.
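A rough sketch of the shape such a file can take is shown below. Apart from image-name, role, and disk-size, which are described above, the node layout and remaining key names are illustrative assumptions; use an existing file under conf/<release> as the real reference.
globals:
  - ceph-cluster:
      name: ceph
      image-name: rhel-server-7.7-x86_64      # image used to clone the ceph nodes
      node1:
        role: mon
      node2:
        role: osd
        no-of-volumes: 3                      # osd nodes typically attach 3 extra volumes
        disk-size: 20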
Inventory files are kept under conf/inventory and are used to specify which operating system is used for the cluster resources.
All test suite configurations are found inside the suites directory.
The various suites map to the versions of Ceph under test:
suites/jewel/ansible/sanity_ceph_ansible is valid for 2.0 builds
suites/luminous/ansible/sanity_ceph_ansible is valid for 3.0 builds
The tests inside the suites are described in YAML format:
tests:
  - test:
      name: ceph deploy
      module: test_ceph_deploy.py
      config:
        base_url: 'http://download-node-02.eng.bos.redhat.com/rcm-guest/ceph-drops/auto/ceph-1.3-rhel-7-compose/RHCEPH-1.3-RHEL-7-20161010.t.0/'
        installer_url:
      desc: test cluster setup using ceph-deploy
      destroy-cluster: False
      abort-on-fail: True
  - test:
      name: rados workunit
      module: test_workunit.py
      config:
        test_name: rados/test_python.sh
        branch: hammer
      desc: Test rados python api
The above snippet describes two tests. The module is the name of the Python script that is executed to verify the test, and every module can take a config dict that is passed to it from the run wrapper. The run wrapper executes the tests found in the suite serially. The test scripts are located in the tests folder.
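As a rough sketch of what such a module looks like, assuming the entry-point convention below (check an existing module in the tests folder for the exact signature the run wrapper expects):
# illustrative sketch only -- mirror an existing module under tests/ for the real interface
import logging

log = logging.getLogger(__name__)


def run(**kw):
    """Entry point called by the run wrapper for each test in the suite."""
    config = kw.get('config', {})   # the config: dict from the suite yaml
    log.info("running with config: %s", config)
    # ... perform the actual verification here ...
    return 0                        # 0 = pass, non-zero = fail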
run.py is the main script for ceph-ci. You can view the full usage details by passing the --help argument:
python run.py --help
There are a few arguments that are required for cephci execution:
--rhbuild <build_version>
--osp-cred <cred_file>
--global-conf <conf_file>
--inventory <inventory_file>
--suite <suite_file>
Some optional arguments that we end up using a lot:
--log-level <level> - set the log level that is output to stdout.
Ceph ansible install suite:
python run.py --rhbuild 3.3 --global-conf conf/luminous/ansible/sanity-ansible-lvm.yaml --osp-cred osp/osp-cred-ci-2.yaml \
--inventory conf/inventory/rhel-7.7-server-x86_64.yaml --suite suites/luminous/ansible/sanity_ceph_ansible_lvm.yaml \
--log-level info
Upgrade suite:
python run.py --rhbuild 3.2 --global-conf conf/luminous/upgrades/upgrade.yaml --osp-cred osp/osp-cred-ci-2.yaml \
--inventory conf/inventory/rhel-7.6-server-x86_64-released.yaml --suite suites/luminous/upgrades/upgrades.yaml \
--log-level info
Containerized upgrade suite:
python run.py --rhbuild 3.2 --global-conf conf/luminous/upgrades/upgrade.yaml --osp-cred osp/osp-cred-ci-2.yaml \
--inventory conf/inventory/rhel-7.6-server-x86_64-released.yaml --suite suites/luminous/upgrades/upgrades_containerized.yaml \
--log-level info --ignore-latest-container --insecure-registry --skip-version-compare
Ceph-CI can also manually clean up cluster nodes if anything was left behind during a test run. All you need to provide is your OSP credentials file and the instances name used for the cluster. Do not use subset naming for custom instances names, e.g. --instances-name vp and --instances-name vpoliset at the same time.
python run.py --osp-cred <cred_file> --cleanup <instances_name>
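For example, using the credentials file from the osp directory and a hypothetical instances name:
python run.py --osp-cred osp/osp-cred-ci-2.yaml --cleanup myprefix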
In order to post results properly or receive result emails, you must first configure your ~/.cephci.yaml file.
Please see the Initial Setup section of the readme if you haven't done that.
Results are posted to Polarion if the --post-results argument is passed to run.py.
When this argument is used, any test that has a polarion-id configured in the suite will have its result posted to Polarion.
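For example, a polarion-id can sit alongside the other keys of a test entry in the suite file; the ID below is only a placeholder, and the exact placement should be confirmed against an existing suite.
tests:
  - test:
      name: rados workunit
      module: test_workunit.py
      polarion-id: CEPH-XXXXX
      config:
        test_name: rados/test_python.sh
        branch: hammer
      desc: Test rados python api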
Results are posted to Report Portal if the --report-portal argument is passed to run.py.
A result email is automatically sent to the address configured in your ~/.cephci.yaml file.
In addition to personally configured emails, if the --post-results or --report-portal arguments are passed to run.py, an email will also be sent to [email protected].