This repository contains the configuration and deployment scripts to deploy Cumulus Core for a DAAC. It is a modified version of the Cumulus Template Deploy project. Specifically, all parts of the deployment have been Terraformed, and the configuration has been minimized by using outputs from other modules and lookups via Terraform AWS provider data sources.
See the Cumulus Documentation for detailed information about configuring, deploying, and running Cumulus.
- Docker
- One or more NGAP accounts (sandbox, SIT, ...)
- AWS credentials for those account(s)
You can run tests inside of a Docker container:
$ make image
$ make container-shell
- To run the linter (flake8) & unit tests (pytest) once:
$ make test
- To run the linter & tests when source files change:
$ make test-watch
The repository is organized into three Terraform modules:
- `daac`: Creates DAAC-specific resources necessary for running Cumulus.
- `cumulus`: Creates all runtime Cumulus resources that can then be used to run ingest workflows.
- `workflows`: Creates a Cumulus workflow with a sample Python lambda.
To customize the deployment for your DAAC, you will need to update variables and settings in a few of the modules. Specifically:
In the `daac` module: to change which version of the Cumulus Message Adapter is used to create the Lambda layer used by all Step Function Tasks, modify the corresponding variable in the `daac/terraform.tfvars` file.
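As a minimal sketch of what that setting might look like (the exact variable name is declared by the `daac` module, so treat `cma_version` below as an assumption and check the module's variable definitions for the real name):

```hcl
# daac/terraform.tfvars -- sketch only
# NOTE: "cma_version" is an assumed variable name; use whatever name the
# daac module actually declares for the Cumulus Message Adapter version.
cma_version = "v2.0.3"   # example value; pin to the CMA release you have tested
```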
The `cumulus` module contains the bulk of the DAAC-specific settings. There are three specific things you should customize:
- `cumulus/terraform.tfvars`: Variables which are likely the same in all environments (SIT, UAT, PROD) and which are not 'secrets'.
- `cumulus/variables/*.tfvars`: Each file contains variables specific to the corresponding 'maturity' or environment to which you are deploying. For example, in `dev.tfvars` you will likely use a pre-production `urs_url`, while in the `prod.tfvars` file you will specify the production URL.
- `cumulus/secrets/*.tfvars`: Like the variables above, these files contain secrets which are specific to the 'maturity' or environment to which you are deploying. Create one file for each environment and populate it with secrets. See the example file in this directory for a starting point. For example, your `dev` `urs_client_password` is likely (hopefully!) different than your `prod` password. A sketch of both kinds of files appears after the note below.
Important Note: The secrets files will not (and should not) be committed to git. The `.gitignore` file will ignore them by default.
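As a rough sketch of how the maturity-specific files fit together (`urs_url` and `urs_client_password` are named above; `urs_client_id` and the example values are illustrative assumptions, so check the `cumulus` module's variable definitions for the real names):

```hcl
# cumulus/variables/dev.tfvars -- non-secret, dev-specific values (sketch)
urs_url = "https://uat.urs.earthdata.nasa.gov"   # example pre-production URS endpoint

# cumulus/secrets/dev.tfvars -- dev-specific secrets; never commit this file (sketch)
urs_client_id       = "changeme-client-id"       # assumed variable name
urs_client_password = "changeme-password"        # keep distinct from your prod password
```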
DAAC-specific workflows, lambdas, and configuration will be deployed by the `workflows` module. Most workflow development work will be done here.
See CIRRUS-core README.
There is a sample workflow Terraform module in the `workflows` directory. It deploys a `NOP` (No Operation) lambda and workflow. You can use this as a base for deploying your own workflows. It includes a Python lambda with unit tests. You can run the tests as shown above.
There is a `dashboard` make target which will build and deploy a version of a Cumulus dashboard to a bucket named `$DEPLOY_NAME-cumulus-$MATURITY-dashboard`, assuming you created such a bucket during your deployment.
The dashboard build process happens within a Docker container; therefore the `make dashboard` target cannot be invoked from within a Docker container. Additionally, since the final step copies data to your dashboard bucket, you need to run `source env.sh <profile-name> <deploy-name> <maturity>` to set up your AWS environment prior to running the build process.
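For example, using the deploy name and maturity from the example below (the AWS profile name `ngap-sandbox` is just an illustration):

```sh
$ source env.sh ngap-sandbox kb dev
```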
You need to pass in:
CUMULUS_API_ROOT="your api root"
CUMULUS_DASHBOARD_VERSION="version-of-dashboard"
DEPLOY_NAME=your deploy name
MATURITY=dev
SERVED_BY_CUMULUS_API=true (optional, defaults to true)
Example: to build a dashboard which is not served via the Cumulus API:
$ CUMULUS_API_ROOT="https://xxx.execute-api.us-west-2.amazonaws.com:8000/dev" \
CUMULUS_DASHBOARD_VERSION="v1.8.0" \
DEPLOY_NAME=kb \
MATURITY=dev \
SERVED_BY_CUMULUS_API= \
make dashboard