This repository contains a script that creates automated Excel reports using the Cloudability API. The reports include recommendations to rightsize or terminate EC2, RDS, S3, and EBS resources.
Note: The module is found under finops-report-automation. The code was copied into this brand-new repository from a private one to eliminate any chance that a confidential file had been tracked by mistake.
This package uses Poetry to install and define all dependencies. If you have Poetry installed, just run:
poetry install
poetry shell # To activate the virtualenv created by poetry
The script is configured through the following environment variables (only API_KEY is required):
Environment Variable | Description |
---|---|
API_KEY | Required. Your Cloudability API key. |
REPORTINGS_PATH | Optional. The folder the reports will be saved to. Defaults to ./reports. |
CONFIG_PATH | Optional. The folder where the configuration files are located. Defaults to ./config. |
BUCKET_NAME | Optional. The S3 bucket to upload reports to. Not setting this variable will disable the upload. |
USE_LOCAL_DATA | Optional. Use local files containing JSON data instead of calling the API. |
DEBUG | Optional. Display debug information. |
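For local runs, the variables can be exported in the shell before starting the script. A minimal sketch; the key and bucket name below are placeholders:

```sh
# Placeholder values -- substitute your own Cloudability key and bucket.
export API_KEY="<your-cloudability-api-key>"
export REPORTINGS_PATH="./reports"       # optional, this is the default
export BUCKET_NAME="my-finops-reports"   # optional, omit to disable the S3 upload
```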
Three configuration files need to be in place for the script to make the correct API calls and map the relevant information to specific projects: accountMapping.xlsx maps AWS account IDs to project names, config.yaml drives the general recommendations for your resources, and amortized_cost_config.yaml drives the general overview sheet of each project as well as the cross-project overview.
Example files can be found under ./example_configs/. Copy them into the CONFIG_PATH folder and adjust them to your setup; a sketch follows the table below.
File | Description |
---|---|
accountMapping.xlsx | Required. Used to map AWS account IDs to internal project names (e.g. project1 -> 1239585). Note: this file should eventually be removed and replaced with a DB connection to query the project names. |
config.yaml | Required. Used to specify filters to retrieve resource-specific recommendations from Cloudability's AWS Rightsizing API. |
amortized_cost_config.yaml | Required. Used to specify filters to retrieve the amortized data from Cloudability's AWS Amortized API. |
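A minimal way to seed the configuration folder from the examples, assuming the default CONFIG_PATH of ./config:

```sh
# Copy the example files into the default CONFIG_PATH, then adjust them.
mkdir -p ./config
cp ./example_configs/* ./config/
```

Either way, accountMapping.xlsx has to contain your own account-to-project mapping before the reports are meaningful.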
Once the environment variables are configured you can create the reports by running:
make run
# or
python -m finops_report_automation.main
Based on the configuration you defined in the configuration files, you should see the generated reports inside the REPORTINGS_PATH folder.
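A quick way to verify a run is to list the output folder. This is only a sketch: the exact file names depend on your project mapping, and it assumes the workbooks land directly in REPORTINGS_PATH:

```sh
ls ./reports
# e.g. project_01.xlsx  project_02.xlsx  cross_project_overview.xlsx
```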
This script features an optional AWS integration. It allows you to upload your project reports and cross-project summaries, as well as the raw API data, to an S3 bucket.
The folder structure will be as follows:
├── 2022
│   ├── 01
│   │   ├── 22
│   │   │   ├── Project reports
│   │   │   │   ├── project_01.xlsx
│   │   │   │   ├── project_02.xlsx
│   │   │   ├── cross_project_overview.xlsx
│   │   │   ├── API Response
│   │   │   │   ├── <resource>_api.json
│   │   │   │   ├── amortized_cost_api.json
│   ├── ...
│   ├── 12
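With the upload enabled, a given day's output can be inspected straight from the bucket. A sketch with a placeholder bucket name and date:

```sh
# List everything uploaded for 22 January 2022 (placeholder bucket name).
aws s3 ls "s3://my-finops-reports/2022/01/22/" --recursive
```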
To run the container we propose Amazon ECS + Fargate. In contrast to AWS Lambda, we don't have to alter the application code to run it, which also means we can still run the container locally. Through the AWS SDK the container can use other AWS services such as S3.
Prerequisites:
- An S3 bucket to store data in
- An IAM role including two permission policies:
  - AmazonECSTaskExecutionRolePolicy
  - A policy allowing s3:PutObject and s3:GetObject on the S3 bucket (a sketch follows this list)
- An ECR repository for the container image
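A sketch of attaching the S3 policy as an inline role policy via the AWS CLI; the role name finops-task-role and bucket name my-finops-reports are placeholders:

```sh
# Write the S3 access policy to a file, then attach it to the task role.
cat > s3-report-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::my-finops-reports/*"
    }
  ]
}
EOF
aws iam put-role-policy \
  --role-name finops-task-role \
  --policy-name finops-s3-access \
  --policy-document file://s3-report-policy.json
```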
Go to Amazon ECS and create a task definition. Add your image URI and the API_KEY and BUCKET_NAME environment variables. For the Task Role, select your IAM role with the ECS and S3 policies. The policies you need depend on the SDK calls the container makes: for the moment only the upload to S3 is supported, and SQS calls would need additional policies.
Next we will create a cluster to provide an environment to run the task in. After you've created a cluster and a task definition, you can create a trigger for your task. We accomplish this via an Amazon EventBridge rule; see the official AWS documentation for details.
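The scheduling half of that rule can also be sketched with the AWS CLI. The rule name and cron expression below are placeholders (monthly on the 22nd at 06:00 UTC); the ECS task is then attached as the rule's target, which is easiest to do in the console:

```sh
# Create the schedule; attach the ECS task definition as the target afterwards.
aws events put-rule \
  --name finops-report-schedule \
  --schedule-expression "cron(0 6 22 * ? *)"
```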
To run the test suite, open a shell with the activated virtual environment and run the following:
pytest
This should automatically run all the tests located inside the tests folder.
It's possible to manually check the linting and styling of the code by running:
make flake
You can also generate your reports using a Docker container. Run the following:
docker build . -t reports
docker run -e API_KEY=<your-key-here> -e REPORTINGS_PATH=/reports -v <local-report-directory>:/reports reports
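To use the optional S3 upload or a custom configuration folder from Docker as well, the remaining variables from the table above can be passed in the same way. A sketch with placeholder values; note that the container also needs AWS credentials for the upload (e.g. via environment variables or a mounted credentials file):

```sh
# Placeholder key, bucket and paths; mounts local config and report folders.
docker run \
  -e API_KEY=<your-key-here> \
  -e BUCKET_NAME=<your-bucket> \
  -e REPORTINGS_PATH=/reports \
  -e CONFIG_PATH=/config \
  -v <local-report-directory>:/reports \
  -v <local-config-directory>:/config \
  reports
```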