Example app using the Doge 🐕 workflow for continuous integration/deployment (CI/CD) to Digital Ocean.
Requirements:

- GNU Make
- terraform
- git >=2.28.0
- pre-commit
Optional:
- GitHub CLI (if you want to create the GitHub repository from the terminal).
You can install all the software requirements using conda (or mamba) and the environment.yaml
file provided in the root of the repository as follows:
```bash
conda env create -f environment.yaml
# and then work from the newly-created environment as in:
conda activate doge
```
You will also need:

- A DigitalOcean account. You can sign up using my referral link to get $100 in credit.
- A GitHub account.
- A Terraform Cloud account and a Terraform Cloud organization. With an active account, you can create an organization by navigating to app.terraform.io/app/organizations/new. You can also use an existing organization. This workflow is compatible with the free plan.
ACHTUNG
The Doge 🐕 workflow requires three access tokens, which must be set as Terraform variables in the `terraform/deploy/meta/vars.tfvars` file (note that to avoid disclosing sensitive information, this file is kept out of version control):

- DigitalOcean: navigate to cloud.digitalocean.com/account/api/token/new (you must be authenticated), choose a name and an expiration, click on "Generate Token" and copy the generated token as the value of the `do_token` variable.
- GitHub: navigate to github.com/settings/tokens/new (you must be authenticated), choose a name and an expiration, and select at least the `repo` and `workflow` permissions. Click on "Generate token" and copy the generated token as the value of the `gh_token` variable.
- Terraform Cloud: navigate to app.terraform.io/app/settings/tokens, click on "Create an API token", provide a description, click on "Create API token" and copy the generated token as the value of the `tf_api_token` variable.
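For reference, the resulting `terraform/deploy/meta/vars.tfvars` file might look as follows (the token values shown here are made-up placeholders, not real tokens):

```hcl
# vars.tfvars — kept out of version control, values are placeholders
do_token     = "dop_v1_0123456789abcdef0123456789abcdef"
gh_token     = "ghp_0123456789abcdefghij"
tf_api_token = "0123456789abcd.atlasv1.0123456789abcdefghijklmnopqrstuvwxyz"
```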
The initial infrastructure provisioning in the Doge workflow is done by running Terraform locally with the help of GNU Make. This will set up the required GitHub infrastructure (notably repository secrets) so that the rest of the workflow is fully managed by GitHub Actions.
From the root of the generated project, use the following command to provision the meta workspace (i.e., a workspace to manage workspaces[^1] [^2]):
```bash
make init-meta
```
At this point, if you navigate to app.terraform.io/app/exaf-epfl/workspaces, a workspace named `acl-tutor-do-meta` should appear.
You can then plan and apply the Terraform setup as follows:
```bash
make plan-meta
make apply-meta
```
which will create three additional workspaces, named `acl-tutor-do-base`, `acl-tutor-do-stage` and `acl-tutor-do-prod`.
The GitHub repository can be created in two ways:

- using the GitHub CLI (recommended): first, make sure that you are properly authenticated with the GitHub CLI (use the `gh auth login` command). Then, from the root of the generated project, run `make create-repo`, which will automatically initialize a git repository locally, add the first commit, and push it to a GitHub repository at `martibosch/acl-tutor-do`.
- manually from the GitHub web interface: navigate to github.com/new and create a new empty repository at `martibosch/acl-tutor-do`. Then, from the root of the generated project, initialize a git repository, set up pre-commit for the repository, add the first commit and push it to the new GitHub repository as follows:

  ```bash
  git init --initial-branch=main  # this only works for git >= 2.28.0
  pre-commit install
  git add .
  SKIP=terraform_validate git commit -m "first commit"
  git branch -M main
  git remote add origin [email protected]:martibosch/acl-tutor-do
  git push -u origin main
  ```
Once the initial commit has been pushed to GitHub, use GNU Make to provision some base infrastructure:
```bash
make init-base
make plan-base
make apply-base
```
Notably, an SSH key will be created and added to Terraform, DigitalOcean (you will see a new item named `acl-tutor-do` at cloud.digitalocean.com/account/security) and the repository secrets (you will see a repository secret named `SSH_KEY` at github.com/martibosch/acl-tutor-do/settings/secrets/actions). Additionally, a DigitalOcean project (an item named `acl-tutor-do` visible in the top-left "PROJECTS" menu of the web interface) will be created to group the resources used for this app.
The initial provisioning of the staging and production infrastructure must also be done using GNU Make following the Terraform init-plan-apply scheme, i.e., for the staging environment:
```bash
make init-stage
make plan-stage
make apply-stage
```
and for production:
```bash
make init-prod
make plan-prod
make apply-prod
```
If you navigate to cloud.digitalocean.com and select the `acl-tutor-do` project, you will see that droplets named `acl-tutor-do-stage` and `acl-tutor-do-prod` have been created for each environment respectively. Additionally, at github.com/martibosch/acl-tutor-do/settings/secrets/actions, you will find (for each environment) an environment secret named `DROPLET_HOST`, which contains the IPv4 address of the staging and production hosts respectively.
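As an illustration of how a deploy job can consume these secrets (this is a hypothetical sketch, not the actual workflow file of this repository), a job can select the GitHub environment to make its scoped secrets available, e.g., using the appleboy/ssh-action action:

```yaml
# hypothetical deploy job; "stage" selects that environment's secrets
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: stage
    steps:
      - name: Run deploy commands on the droplet
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.DROPLET_HOST }}
          username: root
          key: ${{ secrets.SSH_KEY }}
          script: echo "deploy commands go here"
```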
Once the initial infrastructure has been provisioned, CI/CD is ensured by the following GitOps workflow:
- New features are pushed into a dedicated feature branch.
- develop: a pull request (PR) to the `develop` branch is created, at which point the CI workflow is run. If the CI workflow passes, the PR is merged; otherwise, fixes are provided in the feature branch until the CI workflow passes.
- stage: once one or more feature PRs are merged into the `develop` branch, they can be deployed to the staging environment by creating a PR to the `stage` branch, which will trigger the "plan" workflow. If successful, the PR is merged, at which point the "deploy" workflow is run, which deploys the branch contents to the staging environment.
- main: after a successful deployment to staging, a PR from the `stage` to the `main` branch will trigger the "plan" workflow, this time for the production environment. Likewise, if the workflow passes, the PR can be merged, which will trigger the "deploy" workflow to deploy the branch contents to production.
Overall, the Doge 🐕 GitOps workflow can be represented as follows:
```mermaid
gitGraph:
    commit id:"some commit"
    branch stage
    branch develop
    branch some-feature
    checkout some-feature
    commit id:"add feature"
    checkout develop
    merge some-feature tag:"CI (lint, build)"
    checkout stage
    merge develop tag:"deploy stage"
    checkout main
    merge stage tag:"deploy prod"
```
The infrastructure provisioned by this setup can be destroyed using GNU Make as follows:
```bash
make destroy-prod
make destroy-stage
make destroy-base
make destroy-meta
```
The overall idea is:

1. Terraform provides the infrastructure for each environment and runs the one-time tutor commands via cloud-init.
2. The build workflow uses GitHub Actions to build and push a Docker image to the GitHub container registry.
3. The deploy workflow uses the pushed image and deploys it to the droplet.
The GitHub workflows of steps 2 and 3 are triggered manually. Currently, the build workflow serves only to upgrade versions and/or to change the theme, whereas the deploy workflow serves to deploy more recent images built by the build workflow as well as to update some tutor settings. Ideally, the overall setup should move towards a fully declarative GitOps approach, where the required build and deploy workflows are triggered automatically to match changes in configuration files.
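For instance (a hypothetical sketch with made-up file paths, not the actual workflow of this repository), moving to a declarative approach could mean complementing the manual `workflow_dispatch` trigger with a path-filtered `push` trigger, so that the deploy workflow runs whenever the tracked configuration changes:

```yaml
# hypothetical trigger block for the deploy workflow
on:
  workflow_dispatch: {}  # current approach: manual trigger
  push:
    branches: [stage, main]
    paths:
      - "config.yml"   # assumed location of the tutor configuration
      - "plugins/**"   # assumed location of version-controlled plugins
```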
TODO: fix `tutor config save` + tutor init in cloud-init.yaml

TODO: `pip install tutor-mfe` in cloud-init?
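A minimal sketch of how the latter could look in `cloud-init.yaml` (an assumption, not the actual file of this repository, and assuming tutor itself is also installed with pip):

```yaml
# hypothetical cloud-init fragment
runcmd:
  - pip install "tutor[full]"
  - pip install tutor-mfe
```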
To install and enable a tutor plugin from a local file (where `{plugin}` is the plugin name):

```bash
mkdir -p "$(tutor plugins printroot)"
mv plugins/{plugin}.py "$(tutor plugins printroot)"
# tutor plugins list (to see that the plugin `{plugin}` appears)
tutor plugins enable {plugin}
tutor config save
tutor local restart
```
To avoid a `No such file or directory: 'TMPDIR=tmp'` error (see https://discuss.overhang.io/t/errno-2-no-such-file-or-directory-tmpdir-tmp/1877/6), we install https://github.com/eduNEXT/tutor-contrib-codejail.
This is the current approach: we use Terraform to provision the infrastructure and then deploy tutor by running commands in the server via ssh. Note that `{env}` can be either `stage` or `prod`.
This is quite straightforward and follows the Terraform init, plan and apply scheme:

```bash
make init-{env}
# make plan-{env}
make apply-{env}
```
At this point, a series of commands to install docker, tutor and other requirements will run via cloud-init in the created droplet.
First of all, ssh into the server:
```bash
make ssh-{env}
```
The remainder assumes that the commands are run from the server (via ssh). While the server will be ready as soon as `make apply-{env}` finishes, the initial commands running via cloud-init can take a while to complete. You can follow their status by running:

```bash
sudo tail /var/log/cloud-init-output.log
```
Follow the steps at https://discuss.overhang.io/t/howto-enable-multiple-languages-for-your-open-edx-platform/140
Usually we work with a custom openedx image built via GitHub Actions that should include our custom theme at github.com/African-Cities-Lab/acl-indigo-theme.git, but we still need to figure out how it works exactly. TODO: try:

```bash
tutor config save --set DOCKER_IMAGE_OPENEDX=ghcr.io/martibosch/openedx:{tag}
```
```bash
git clone -b develop https://github.com/African-Cities-Lab/acl-indigo-theme.git \
    "$(tutor config printroot)/env/build/openedx/themes"
tutor images build openedx
tutor images push openedx
tutor local do settheme acl-indigo-theme
tutor local launch -I
```
```bash
tutor config save --set SMTP_HOST=in-v3.mailjet.com --set SMTP_USERNAME={smtp-username} --set SMTP_PASSWORD={smtp-password} --set SMTP_PORT=587 --set SMTP_USE_SSL=false --set SMTP_USE_TLS=true
```
TODO: improve version-control of plugins and add some sort of `requirements.txt`
Note that the customized MFE brand is handled by the `custommfebrand` plugin, which installs the `brand-openedx` npm package from github.com/African-Cities-Lab/brand-openedx.git.
TODO: version-control the customized translations

```bash
mkdir -p .local/share/tutor/env/plugins/mfe/build/mfe/i18n/authn
nano .local/share/tutor/env/plugins/mfe/build/mfe/i18n/authn/fr.json
```
```bash
tutor local do createuser --staff --superuser yourusername [email protected]
```
[^1]: "Managing Workspaces With the TFE Provider at Scale Factory"
[^2]: Response by chrisarcand in "Using variables with remote backend"