[ETL-616] Implement Great Expectations to run on parquet data (#139)
* initial commit for testing

* update sample expectations

* add two data types

* correct to fitbitdailydata

* fix expectation

* add complete script

* initial cf config and template

* correct formatting, refactor triggers

* fix job name

* refactor gx code, add tests, adjust gx version

* refactor gx code, add tests, adjust gx version

* make consistent naming

* remove hardcoded args

* add integration tests, remove null rows code, add dep for urllib3<2

* change to lowercase data type

* add prod cf configs, add perm for glue role for shareable artifacts bucket

* rename, include prod ver

* add test to catch exception

* add conditional creation of triggers due to what is available in expectations json

* update README for tests, add in testing for our scripts

* chain cmd together

* update prod

* gather tests, correct key_prefix to key, add missing params to prod glue role

* remove slash

* add gx glue version as var in config
rxu17 authored Sep 13, 2024
1 parent 26e2ad8 commit 30d1873
Showing 17 changed files with 1,084 additions and 20 deletions.
35 changes: 29 additions & 6 deletions .github/workflows/README.md
@@ -1,5 +1,7 @@
# recover github workflows

## Overview

Recover ETL has four github workflows:

- workflows/upload-and-deploy.yaml
@@ -14,35 +16,56 @@ Recover ETL has four github workflows:
| codeql-analysis | on-push from feature branch, feature branch merged into main |
| cleanup | feature branch deleted |


## upload-files

Copies pilot data sets from the ingestion bucket to the input data bucket for use in the integration test. Note that this assumes there are files in the ingestion bucket; the job could be made more robust by throwing an error when the ingestion bucket path is empty.

## upload-and-deploy

Here are more detailed descriptions and troubleshooting tips for some of the jobs within each workflow:

### upload-files

Copies pilot data sets from the ingestion bucket to the input data bucket for use in the integration test. Note that this assumes there are files in the ingestion bucket; the job could be made more robust by throwing an error when the ingestion bucket path is empty.

### Current Testing Related Jobs

### nonglue-unit-tests
#### nonglue-unit-tests

See [testing](/tests/README.md) for more background on these tests. Here, the Synapse folders for both the `recover-dev-input-data` and `recover-dev-processed-data` buckets are tested for STS access every time the feature branch is pushed and whenever the feature branch is merged into main.

This behaves like an integration test: because it depends on a connection to Synapse, the connection can occasionally stall or break. The test usually takes a minute or less, and simply re-running the job is often enough to fix a failure.

### pytest-docker
#### pytest-docker

This sets up the two Docker images and uploads them to the ECR repository.
**Note: an ECR repository named `pytest` must already exist in the AWS account we push the Docker images to before this GitHub Action runs.**

Some behavioral aspects to note: the matrix method in GitHub Actions jobs does not currently support dynamic job outputs ([see issue thread](https://github.com/orgs/community/discussions/17245)), and the workaround seemed more complex than it was worth, so we could not pass the path of the uploaded Docker container between jobs directly and had to use a static output instead. That is why we had to unmask the account ID and expose `steps.login-ecr.outputs.registry`, which contains it, as a job output so that `glue-unit-tests` can find and use the Docker container, as sketched below.
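
A minimal sketch of this pattern follows; the job layout, login action and version, and image tag are illustrative assumptions, not the exact `upload-and-deploy` workflow.

```yaml
# Sketch only: pass the ECR registry (which embeds the account id) between jobs
# as a static output. Job names, the login action version, and the image tag
# are assumptions for illustration.
jobs:
  pytest-docker:
    runs-on: ubuntu-latest
    outputs:
      # Static job output; its value comes from the ECR login step's registry host.
      ecr-registry: ${{ steps.login-ecr.outputs.registry }}
    steps:
      # (AWS credentials configuration omitted for brevity.)
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2
      # ... build and push the pytest images here ...

  glue-unit-tests:
    needs: pytest-docker
    runs-on: ubuntu-latest
    steps:
      - name: Pull the pytest image using the registry passed from pytest-docker
        # ECR auth on this runner is also omitted for brevity.
        run: docker pull ${{ needs.pytest-docker.outputs.ecr-registry }}/pytest:aws_glue_4
```

Because the registry host is the same for every matrix entry, it can be exposed as a single static job output even though per-matrix-entry outputs are not supported.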

### glue-unit-tests
#### glue-unit-tests

See [testing](/tests/README.md) for more info on the background behind these tests.

For the JSON to Parquet tests sometimes there may be a scenario where a github workflow gets stopped early due to an issue/gets canceled.

When `test_json_to_parquet.py` runs, the Glue table, Glue crawler role, and other resources for the given branch may already exist (they were created by an earlier run and never deleted because that run did not finish), so the next triggered workflow errors out with an `AlreadyExistsException`. This is currently resolved manually: delete the leftover resource(s) in the AWS account and re-run the failed GitHub jobs (see the sketch below).
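
If you prefer to script that manual cleanup rather than clicking through the console, a step along these lines could be run ad hoc; it is only a sketch, and the database, table, crawler, and role names are hypothetical placeholders for whatever your branch's failed run left behind.

```yaml
# Hypothetical cleanup step (not part of the existing workflows); the resource
# names below are placeholders for whatever your branch's failed run created.
- name: Delete leftover JSON-to-Parquet test resources
  run: |
    aws glue delete-table --database-name my-branch-recover-database --name dataset_fitbitdailydata
    aws glue delete-crawler --name my-branch-parquet-crawler
    # For a leftover IAM role, detach its policies first, then delete it:
    # aws iam detach-role-policy --role-name my-branch-crawler-role --policy-arn <policy-arn>
    # aws iam delete-role --role-name my-branch-crawler-role
```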

### Adding Test Commands to Github Workflow Jobs

After developing and running tests locally, make sure they also run in the CI pipeline. To add your tests to the `upload-and-deploy` workflow:

Add your test commands under the appropriate job (see the summaries of the testing-related jobs above), for example:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Other steps...
      - name: Run tests
        run: |
          pytest tests/
```

### sceptre-deploy-develop
### integration-test-develop-cleanup
28 changes: 19 additions & 9 deletions .github/workflows/upload-and-deploy.yaml
@@ -134,11 +134,14 @@ jobs:
pipenv install ecs_logging~=2.0
pipenv install pytest-datadir
- name: Test lambda scripts with pytest
- name: Test scripts with pytest (lambda, etc.)
run: |
pipenv run python -m pytest tests/test_s3_event_config_lambda.py -v
pipenv run python -m pytest tests/test_s3_to_glue_lambda.py -v
pipenv run python -m pytest -v tests/test_lambda_raw.py
pipenv run python -m pytest \
tests/test_s3_event_config_lambda.py \
tests/test_s3_to_glue_lambda.py \
tests/test_lambda_dispatch.py \
tests/test_consume_logs.py \
tests/test_lambda_raw.py -v
- name: Test dev synapse folders for STS access with pytest
run: >
@@ -249,18 +252,25 @@ jobs:
if: github.ref_name != 'main'
run: echo "NAMESPACE=$GITHUB_REF_NAME" >> $GITHUB_ENV

- name: Run Pytest unit tests under AWS 3.0
- name: Run Pytest unit tests under AWS Glue 3.0
if: matrix.tag_name == 'aws_glue_3'
run: |
su - glue_user --command "cd $GITHUB_WORKSPACE && python3 -m pytest tests/test_s3_to_json.py -v"
su - glue_user --command "cd $GITHUB_WORKSPACE && python3 -m pytest tests/test_compare_parquet_datasets.py -v"
su - glue_user --command "cd $GITHUB_WORKSPACE && python3 -m pytest \
tests/test_s3_to_json.py \
tests/test_compare_parquet_datasets.py -v"
- name: Run Pytest unit tests under AWS 4.0
- name: Run unit tests for JSON to Parquet under AWS Glue 4.0
if: matrix.tag_name == 'aws_glue_4'
run: >
su - glue_user --command "cd $GITHUB_WORKSPACE &&
python3 -m pytest tests/test_json_to_parquet.py --namespace $NAMESPACE -v"
- name: Run unit tests for Great Expectations on Parquet under AWS Glue 4.0
if: matrix.tag_name == 'aws_glue_4'
run: >
su - glue_user --command "cd $GITHUB_WORKSPACE &&
python3 -m pytest tests/test_run_great_expectations_on_parquet.py -v"
sceptre-deploy-develop:
name: Deploys branch using sceptre
runs-on: ubuntu-latest
@@ -287,7 +297,7 @@ jobs:
run: echo "NAMESPACE=$GITHUB_REF_NAME" >> $GITHUB_ENV

- name: "Deploy sceptre stacks to dev"
run: pipenv run sceptre --var "namespace=${{ env.NAMESPACE }}" launch develop --yes
run: pipenv run sceptre --debug --var "namespace=${{ env.NAMESPACE }}" launch develop --yes

- name: Delete preexisting S3 event notification for this namespace
uses: gagoar/invoke-aws-lambda@v3
1 change: 1 addition & 0 deletions config/config.yaml
@@ -7,6 +7,7 @@ template_key_prefix: "{{ var.namespace | default('main') }}/templates"
glue_python_shell_python_version: "3.9"
glue_python_shell_glue_version: "3.0"
json_to_parquet_glue_version: "4.0"
great_expectations_job_glue_version: "4.0"
default_stack_tags:
Department: DNT
Project: recover
1 change: 1 addition & 0 deletions config/develop/glue-job-role.yaml
@@ -6,5 +6,6 @@ parameters:
S3IntermediateBucketName: {{ stack_group_config.intermediate_bucket_name }}
S3ParquetBucketName: {{ stack_group_config.processed_data_bucket_name }}
S3ArtifactBucketName: {{ stack_group_config.template_bucket_name }}
S3ShareableArtifactBucketName: {{ stack_group_config.shareable_artifacts_vpn_bucket_name }}
stack_tags:
{{ stack_group_config.default_stack_tags }}
18 changes: 18 additions & 0 deletions config/develop/namespaced/glue-job-run-great-expectations-on-parquet.yaml
@@ -0,0 +1,18 @@
template:
path: glue-job-run-great-expectations-on-parquet.j2
dependencies:
- develop/glue-job-role.yaml
stack_name: "{{ stack_group_config.namespace }}-glue-job-RunGreatExpectationsParquet"
parameters:
Namespace: {{ stack_group_config.namespace }}
JobDescription: Runs great expectations on a set of data
JobRole: !stack_output_external glue-job-role::RoleArn
TempS3Bucket: {{ stack_group_config.processed_data_bucket_name }}
S3ScriptBucket: {{ stack_group_config.template_bucket_name }}
S3ScriptKey: '{{ stack_group_config.namespace }}/src/glue/jobs/run_great_expectations_on_parquet.py'
GlueVersion: "{{ stack_group_config.great_expectations_job_glue_version }}"
AdditionalPythonModules: "great_expectations~=0.18,urllib3<2"
stack_tags:
{{ stack_group_config.default_stack_tags }}
sceptre_user_data:
dataset_schemas: !file src/glue/resources/table_columns.yaml
4 changes: 4 additions & 0 deletions config/develop/namespaced/glue-workflow.yaml
@@ -6,6 +6,7 @@ dependencies:
- develop/namespaced/glue-job-S3ToJsonS3.yaml
- develop/namespaced/glue-job-JSONToParquet.yaml
- develop/namespaced/glue-job-compare-parquet.yaml
- develop/namespaced/glue-job-run-great-expectations-on-parquet.yaml
- develop/glue-job-role.yaml
- develop/s3-cloudformation-bucket.yaml
parameters:
@@ -19,7 +20,10 @@ parameters:
CompareParquetMainNamespace: "main"
S3SourceBucketName: {{ stack_group_config.input_bucket_name }}
CloudformationBucketName: {{ stack_group_config.template_bucket_name }}
ShareableArtifactsBucketName: {{ stack_group_config.shareable_artifacts_vpn_bucket_name }}
ExpectationSuiteKey: "{{ stack_group_config.namespace }}/src/glue/resources/data_values_expectations.json"
stack_tags:
{{ stack_group_config.default_stack_tags }}
sceptre_user_data:
dataset_schemas: !file src/glue/resources/table_columns.yaml
data_values_expectations: !file src/glue/resources/data_values_expectations.json
1 change: 1 addition & 0 deletions config/prod/glue-job-role.yaml
@@ -6,5 +6,6 @@ parameters:
S3IntermediateBucketName: {{ stack_group_config.intermediate_bucket_name }}
S3ParquetBucketName: {{ stack_group_config.processed_data_bucket_name }}
S3ArtifactBucketName: {{ stack_group_config.template_bucket_name }}
S3ShareableArtifactBucketName: {{ stack_group_config.shareable_artifacts_vpn_bucket_name }}
stack_tags:
{{ stack_group_config.default_stack_tags }}
18 changes: 18 additions & 0 deletions config/prod/namespaced/glue-job-run-great-expectations-on-parquet.yaml
@@ -0,0 +1,18 @@
template:
path: glue-job-run-great-expectations-on-parquet.j2
dependencies:
- prod/glue-job-role.yaml
stack_name: "{{ stack_group_config.namespace }}-glue-job-RunGreatExpectationsParquet"
parameters:
Namespace: {{ stack_group_config.namespace }}
JobDescription: Runs great expectations on a set of data
JobRole: !stack_output_external glue-job-role::RoleArn
TempS3Bucket: {{ stack_group_config.processed_data_bucket_name }}
S3ScriptBucket: {{ stack_group_config.template_bucket_name }}
S3ScriptKey: '{{ stack_group_config.namespace }}/src/glue/jobs/run_great_expectations_on_parquet.py'
GlueVersion: "{{ stack_group_config.great_expectations_job_glue_version }}"
AdditionalPythonModules: "great_expectations~=0.18,urllib3<2"
stack_tags:
{{ stack_group_config.default_stack_tags }}
sceptre_user_data:
dataset_schemas: !file src/glue/resources/table_columns.yaml
4 changes: 4 additions & 0 deletions config/prod/namespaced/glue-workflow.yaml
@@ -6,6 +6,7 @@ dependencies:
- prod/namespaced/glue-job-S3ToJsonS3.yaml
- prod/namespaced/glue-job-JSONToParquet.yaml
- prod/namespaced/glue-job-compare-parquet.yaml
- prod/namespaced/glue-job-run-great-expectations-on-parquet.yaml
- prod/glue-job-role.yaml
- prod/s3-cloudformation-bucket.yaml
parameters:
@@ -19,7 +20,10 @@ parameters:
CompareParquetMainNamespace: "main"
S3SourceBucketName: {{ stack_group_config.input_bucket_name }}
CloudformationBucketName: {{ stack_group_config.template_bucket_name }}
ShareableArtifactsBucketName: {{ stack_group_config.shareable_artifacts_vpn_bucket_name }}
ExpectationSuiteKey: "{{ stack_group_config.namespace }}/src/glue/resources/data_values_expectations.json"
stack_tags:
{{ stack_group_config.default_stack_tags }}
sceptre_user_data:
dataset_schemas: !file src/glue/resources/table_columns.yaml
data_values_expectations: !file src/glue/resources/data_values_expectations.json