
Commit

Fix logic when enabled = false. Update tests. Update module versions and GitHub workflows (#53)

* Update versions

* Updates

* Updates

* Updates

* Updates
aknysh authored May 17, 2023
1 parent c0c2591 commit baab038
Showing 15 changed files with 221 additions and 42 deletions.
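For readers skimming the diff: the module guards its resources and data sources behind `local.lambda_enabled`-style `count`/`for_each` expressions, and this commit tightens the places where a disabled module still tripped over the resulting empty values at plan time (the specific fix is in the main.tf hunk further down). A minimal sketch of the guard pattern, using illustrative variable names rather than the module's exact ones:

variable "enabled" {
  type    = bool
  default = true
}

variable "api_key_ssm_name" {
  type    = string
  default = "/datadog/api-key" # illustrative; the module reads this from var.dd_api_key_source
}

locals {
  lambda_enabled = var.enabled
}

# Only created when the module is enabled, mirroring the count/for_each
# guards used throughout lambda-log.tf, lambda-rds.tf and lambda-vpc-logs.tf.
data "aws_ssm_parameter" "api_key" {
  count = local.lambda_enabled ? 1 : 0

  name = var.api_key_ssm_name
}

# A splat over a count = 0 data source is an empty list, so join() returns "";
# everything downstream has to tolerate that empty string when enabled = false.
locals {
  api_key_arn = join("", data.aws_ssm_parameter.api_key[*].arn)
}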
2 changes: 1 addition & 1 deletion LICENSE
@@ -186,7 +186,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright 2020-2021 Cloud Posse, LLC
Copyright 2020-2023 Cloud Posse, LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
2 changes: 1 addition & 1 deletion README.md
@@ -281,7 +281,7 @@ Available targets:
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| <a name="input_additional_tag_map"></a> [additional\_tag\_map](#input\_additional\_tag\_map) | Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.<br>This is for some rare cases where resources want additional configuration of tags<br>and therefore take a list of maps with tag key, value, and additional configuration. | `map(string)` | `{}` | no |
| <a name="input_api_key_ssm_arn"></a> [api\_key\_ssm\_arn](#input\_api\_key\_ssm\_arn) | SSM Arn of the Datadog API key, passing this removes the need to fetch the key from the SSM parameter store. This could be the case if the SSM Key is in a different region than the lambda. | `string` | `null` | no |
| <a name="input_api_key_ssm_arn"></a> [api\_key\_ssm\_arn](#input\_api\_key\_ssm\_arn) | ARN of the SSM parameter for the Datadog API key.<br>Passing this removes the need to fetch the key from the SSM parameter store.<br>This could be the case if the SSM Key is in a different region than the lambda. | `string` | `null` | no |
| <a name="input_attributes"></a> [attributes](#input\_attributes) | ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,<br>in the order they appear in the list. New attributes are appended to the<br>end of the list. The elements of the list are joined by the `delimiter`<br>and treated as a single ID element. | `list(string)` | `[]` | no |
| <a name="input_cloudwatch_forwarder_event_patterns"></a> [cloudwatch\_forwarder\_event\_patterns](#input\_cloudwatch\_forwarder\_event\_patterns) | Map of title => CloudWatch Event patterns to forward to Datadog. Event structure from here: <https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatchEventsandEventPatterns.html#CloudWatchEventsPatterns><br>Example:<pre>hcl<br>cloudwatch_forwarder_event_rules = {<br> "guardduty" = {<br> source = ["aws.guardduty"]<br> detail-type = ["GuardDuty Finding"]<br> }<br> "ec2-terminated" = {<br> source = ["aws.ec2"]<br> detail-type = ["EC2 Instance State-change Notification"]<br> detail = {<br> state = ["terminated"]<br> }<br> }<br>}</pre> | <pre>map(object({<br> version = optional(list(string))<br> id = optional(list(string))<br> detail-type = optional(list(string))<br> source = optional(list(string))<br> account = optional(list(string))<br> time = optional(list(string))<br> region = optional(list(string))<br> resources = optional(list(string))<br> detail = optional(map(list(string)))<br> }))</pre> | `{}` | no |
| <a name="input_cloudwatch_forwarder_log_groups"></a> [cloudwatch\_forwarder\_log\_groups](#input\_cloudwatch\_forwarder\_log\_groups) | Map of CloudWatch Log Groups with a filter pattern that the Lambda forwarder will send logs from. For example: { mysql1 = { name = "/aws/rds/maincluster", filter\_pattern = "" } | <pre>map(object({<br> name = string<br> filter_pattern = string<br> }))</pre> | `{}` | no |
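A usage sketch for the reworded `api_key_ssm_arn` input (the registry source string, the commented version pin, and the surrounding values are assumptions for illustration, not part of this diff): supplying the parameter ARN directly skips the module's own `aws_ssm_parameter` lookup, which matters when the parameter lives in a different region than the Lambda.

module "datadog_lambda_forwarder" {
  source = "cloudposse/datadog-lambda-forwarder/aws"
  # version = "x.y.z" # pin to a release that includes this change

  forwarder_log_enabled = true

  # Tell the module the key is stored in SSM...
  dd_api_key_source = {
    resource   = "ssm"
    identifier = "/datadog/api-key"
  }

  # ...and hand it the ARN directly so no cross-region lookup is needed.
  api_key_ssm_arn = "arn:aws:ssm:us-west-2:111111111111:parameter/datadog/api-key"

  context = module.this.context
}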
2 changes: 1 addition & 1 deletion docs/terraform.md
@@ -76,7 +76,7 @@
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| <a name="input_additional_tag_map"></a> [additional\_tag\_map](#input\_additional\_tag\_map) | Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.<br>This is for some rare cases where resources want additional configuration of tags<br>and therefore take a list of maps with tag key, value, and additional configuration. | `map(string)` | `{}` | no |
| <a name="input_api_key_ssm_arn"></a> [api\_key\_ssm\_arn](#input\_api\_key\_ssm\_arn) | SSM Arn of the Datadog API key, passing this removes the need to fetch the key from the SSM parameter store. This could be the case if the SSM Key is in a different region than the lambda. | `string` | `null` | no |
| <a name="input_api_key_ssm_arn"></a> [api\_key\_ssm\_arn](#input\_api\_key\_ssm\_arn) | ARN of the SSM parameter for the Datadog API key.<br>Passing this removes the need to fetch the key from the SSM parameter store.<br>This could be the case if the SSM Key is in a different region than the lambda. | `string` | `null` | no |
| <a name="input_attributes"></a> [attributes](#input\_attributes) | ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,<br>in the order they appear in the list. New attributes are appended to the<br>end of the list. The elements of the list are joined by the `delimiter`<br>and treated as a single ID element. | `list(string)` | `[]` | no |
| <a name="input_cloudwatch_forwarder_event_patterns"></a> [cloudwatch\_forwarder\_event\_patterns](#input\_cloudwatch\_forwarder\_event\_patterns) | Map of title => CloudWatch Event patterns to forward to Datadog. Event structure from here: <https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatchEventsandEventPatterns.html#CloudWatchEventsPatterns><br>Example:<pre>hcl<br>cloudwatch_forwarder_event_rules = {<br> "guardduty" = {<br> source = ["aws.guardduty"]<br> detail-type = ["GuardDuty Finding"]<br> }<br> "ec2-terminated" = {<br> source = ["aws.ec2"]<br> detail-type = ["EC2 Instance State-change Notification"]<br> detail = {<br> state = ["terminated"]<br> }<br> }<br>}</pre> | <pre>map(object({<br> version = optional(list(string))<br> id = optional(list(string))<br> detail-type = optional(list(string))<br> source = optional(list(string))<br> account = optional(list(string))<br> time = optional(list(string))<br> region = optional(list(string))<br> resources = optional(list(string))<br> detail = optional(map(list(string)))<br> }))</pre> | `{}` | no |
| <a name="input_cloudwatch_forwarder_log_groups"></a> [cloudwatch\_forwarder\_log\_groups](#input\_cloudwatch\_forwarder\_log\_groups) | Map of CloudWatch Log Groups with a filter pattern that the Lambda forwarder will send logs from. For example: { mysql1 = { name = "/aws/rds/maincluster", filter\_pattern = "" } | <pre>map(object({<br> name = string<br> filter_pattern = string<br> }))</pre> | `{}` | no |
3 changes: 2 additions & 1 deletion examples/complete/main.tf
@@ -1,6 +1,6 @@
module "cloudwatch_logs" {
source = "cloudposse/cloudwatch-logs/aws"
version = "0.6.1"
version = "0.6.6"

name = "postgresql"
context = module.this.context
@@ -13,6 +13,7 @@ resource "aws_ssm_parameter" "datadog_key" {
description = "Test Datadog key"
type = "SecureString"
value = "testkey"
overwrite = true
}

module "datadog_lambda_log_forwarder" {
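The `overwrite = true` added to the test fixture's SSM parameter lets repeated runs of the example (for instance after a failed destroy) update a parameter that already exists instead of erroring. A sketch of the fixture after this change; the `name` shown here is hypothetical, since the real example derives it from its label module:

resource "aws_ssm_parameter" "datadog_key" {
  name        = "/datadog/datadog_api_key" # illustrative; the example builds this from its context
  description = "Test Datadog key"
  type        = "SecureString"
  value       = "testkey"

  # Allow re-applies to update an existing parameter rather than fail.
  overwrite = true
}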
18 changes: 12 additions & 6 deletions lambda-log.tf
@@ -35,7 +35,8 @@ module "forwarder_log_s3_label" {
}

module "forwarder_log_artifact" {
count = local.lambda_enabled && var.forwarder_log_enabled ? 1 : 0
count = local.lambda_enabled && var.forwarder_log_enabled ? 1 : 0

source = "cloudposse/module-artifact/external"
version = "0.7.2"

@@ -113,7 +114,8 @@ resource "aws_lambda_function" "forwarder_log" {
}

resource "aws_lambda_permission" "allow_s3_bucket" {
for_each = local.s3_logs_enabled ? local.s3_bucket_names_to_authorize : []
for_each = local.s3_logs_enabled ? local.s3_bucket_names_to_authorize : []

statement_id = "AllowS3ToInvokeLambda-${each.value}"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.forwarder_log[0].arn
@@ -123,7 +125,8 @@ resource "aws_lambda_permission" "allow_s3_bucket" {

resource "aws_s3_bucket_notification" "s3_bucket_notification" {
for_each = local.s3_logs_enabled ? toset(var.s3_buckets) : []
bucket = each.key

bucket = each.key

lambda_function {
lambda_function_arn = aws_lambda_function.forwarder_log[0].arn
@@ -135,7 +138,8 @@ resource "aws_s3_bucket_notification" "s3_bucket_notification" {

resource "aws_s3_bucket_notification" "s3_bucket_notification_with_prefixes" {
for_each = local.s3_logs_enabled ? var.s3_buckets_with_prefixes : {}
bucket = each.value.bucket_name

bucket = each.value.bucket_name

lambda_function {
lambda_function_arn = aws_lambda_function.forwarder_log[0].arn
@@ -178,7 +182,8 @@ data "aws_iam_policy_document" "s3_log_bucket" {
}

resource "aws_iam_policy" "lambda_forwarder_log_s3" {
count = local.s3_logs_enabled ? 1 : 0
count = local.s3_logs_enabled ? 1 : 0

name = module.forwarder_log_s3_label.id
path = var.forwarder_iam_path
description = "Allow Datadog Lambda Logs Forwarder to access S3 buckets"
@@ -187,7 +192,8 @@ resource "aws_iam_policy" "lambda_forwarder_log_s3" {
}

resource "aws_iam_role_policy_attachment" "datadog_s3" {
count = local.s3_logs_enabled ? 1 : 0
count = local.s3_logs_enabled ? 1 : 0

role = join("", aws_iam_role.lambda_forwarder_log[*].name)
policy_arn = join("", aws_iam_policy.lambda_forwarder_log_s3[*].arn)
}
9 changes: 6 additions & 3 deletions lambda-rds.tf
@@ -21,7 +21,8 @@ module "forwarder_rds_label" {
}

module "forwarder_rds_artifact" {
count = local.lambda_enabled && var.forwarder_rds_enabled ? 1 : 0
count = local.lambda_enabled && var.forwarder_rds_enabled ? 1 : 0

source = "cloudposse/module-artifact/external"
version = "0.7.2"

@@ -32,7 +33,8 @@ module "forwarder_rds_artifact" {
}

data "archive_file" "forwarder_rds" {
count = local.lambda_enabled && var.forwarder_rds_enabled ? 1 : 0
count = local.lambda_enabled && var.forwarder_rds_enabled ? 1 : 0

type = "zip"
source_file = module.forwarder_rds_artifact[0].file
output_path = "${path.module}/lambda.zip"
@@ -117,7 +119,8 @@ resource "aws_lambda_permission" "cloudwatch_enhanced_rds_monitoring" {
}

resource "aws_cloudwatch_log_subscription_filter" "datadog_log_subscription_filter_rds" {
count = local.lambda_enabled && var.forwarder_rds_enabled ? 1 : 0
count = local.lambda_enabled && var.forwarder_rds_enabled ? 1 : 0

name = module.forwarder_rds_label.id
log_group_name = "RDSOSMetrics"
destination_arn = aws_lambda_function.forwarder_rds[0].arn
9 changes: 6 additions & 3 deletions lambda-vpc-logs.tf
@@ -20,7 +20,8 @@ module "forwarder_vpclogs_label" {
}

module "forwarder_vpclogs_artifact" {
count = local.lambda_enabled && var.forwarder_vpc_logs_enabled ? 1 : 0
count = local.lambda_enabled && var.forwarder_vpc_logs_enabled ? 1 : 0

source = "cloudposse/module-artifact/external"
version = "0.7.2"

@@ -31,7 +32,8 @@ module "forwarder_vpclogs_artifact" {
}

data "archive_file" "forwarder_vpclogs" {
count = local.lambda_enabled && var.forwarder_vpc_logs_enabled ? 1 : 0
count = local.lambda_enabled && var.forwarder_vpc_logs_enabled ? 1 : 0

type = "zip"
source_file = module.forwarder_vpclogs_artifact[0].file
output_path = "${path.module}/lambda.zip"
@@ -118,7 +120,8 @@ resource "aws_lambda_permission" "cloudwatch_vpclogs" {
}

resource "aws_cloudwatch_log_subscription_filter" "datadog_log_subscription_filter_vpclogs" {
count = local.lambda_enabled && var.forwarder_vpc_logs_enabled ? 1 : 0
count = local.lambda_enabled && var.forwarder_vpc_logs_enabled ? 1 : 0

name = module.forwarder_vpclogs_label.id
log_group_name = var.vpclogs_cloudwatch_log_group
destination_arn = aws_lambda_function.forwarder_vpclogs[0].arn
16 changes: 10 additions & 6 deletions main.tf
@@ -12,14 +12,16 @@ data "aws_region" "current" {

locals {
enabled = module.this.enabled
lambda_enabled = local.enabled

arn_format = local.enabled ? "arn:${data.aws_partition.current[0].partition}" : ""
aws_account_id = join("", data.aws_caller_identity.current[*].account_id)
aws_region = join("", data.aws_region.current[*].name)
lambda_enabled = local.enabled

dd_api_key_resource = var.dd_api_key_source.resource
dd_api_key_identifier = var.dd_api_key_source.identifier
dd_api_key_arn = local.dd_api_key_resource == "ssm" ? coalesce(var.api_key_ssm_arn, join("", data.aws_ssm_parameter.api_key[*].arn)) : local.dd_api_key_identifier
dd_api_key_resource = var.dd_api_key_source.resource
dd_api_key_identifier = var.dd_api_key_source.identifier

dd_api_key_arn = local.dd_api_key_resource == "ssm" ? try(coalesce(var.api_key_ssm_arn, join("", data.aws_ssm_parameter.api_key[*].arn)), "") : local.dd_api_key_identifier
dd_api_key_iam_actions = [lookup({ kms = "kms:Decrypt", asm = "secretsmanager:GetSecretValue", ssm = "ssm:GetParameter" }, local.dd_api_key_resource, "")]
dd_api_key_kms = local.dd_api_key_resource == "kms" ? { DD_KMS_API_KEY = var.dd_api_key_kms_ciphertext_blob } : {}
dd_api_key_asm = local.dd_api_key_resource == "asm" ? { DD_API_KEY_SECRET_ARN = local.dd_api_key_identifier } : {}
@@ -43,7 +45,8 @@ locals {

data "aws_ssm_parameter" "api_key" {
count = local.lambda_enabled && local.dd_api_key_resource == "ssm" && var.api_key_ssm_arn == null ? 1 : 0
name = local.dd_api_key_identifier

name = local.dd_api_key_identifier
}

######################################################################
@@ -70,7 +73,8 @@ data "aws_iam_policy_document" "assume_role" {
## Create Lambda policy and attach it to the Lambda role

resource "aws_iam_policy" "datadog_custom_policy" {
count = local.lambda_enabled && length(var.lambda_policy_source_json) > 0 ? 1 : 0
count = local.lambda_enabled && length(var.lambda_policy_source_json) > 0 ? 1 : 0

name = "DatadogForwarderCustomPolicy"
policy = var.lambda_policy_source_json

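The functional piece of the "enabled = false" fix is the `dd_api_key_arn` expression above: with the module disabled, the SSM data source is not created, `join()` yields an empty string, and a bare `coalesce()` whose arguments are all null or empty fails at plan time. Wrapping it in `try(..., "")` lets the plan succeed with a harmless empty ARN. A reduced sketch of that behaviour, simplified to the SSM case and reusing names from the hunk above:

locals {
  # Before: coalesce(...) alone errors when every argument is null or "",
  # which is exactly what happens when enabled = false.
  # After: try() catches that error and falls back to an empty string.
  dd_api_key_arn = try(
    coalesce(
      var.api_key_ssm_arn,                            # explicit ARN, if provided
      join("", data.aws_ssm_parameter.api_key[*].arn) # "" when count = 0
    ),
    ""
  )
}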
4 changes: 2 additions & 2 deletions test/Makefile
@@ -33,11 +33,11 @@ clean:
all: module examples/complete

## Run basic sanity checks against the module itself
module: export TESTS ?= installed lint get-modules module-pinning get-plugins provider-pinning validate terraform-docs input-descriptions output-descriptions
module: export TESTS ?= installed lint module-pinning provider-pinning validate terraform-docs input-descriptions output-descriptions
module: deps
$(call RUN_TESTS, ../)

## Run tests against example
examples/complete: export TESTS ?= installed lint get-modules get-plugins validate
examples/complete: export TESTS ?= installed lint validate
examples/complete: deps
$(call RUN_TESTS, ../$@)
2 changes: 1 addition & 1 deletion test/src/Makefile
@@ -18,7 +18,7 @@ test: init
# This project runs `git` externally, so it needs extra permissions when run by a GitHub Action
[[ -n "$$GITHUB_WORKSPACE" ]] && git config --global --add safe.directory "$$GITHUB_WORKSPACE" || true
go mod download
go test -v -timeout 60m -run TestExamplesComplete
go test -v -timeout 30m

## Run tests in docker container
docker/test:
70 changes: 56 additions & 14 deletions test/src/examples_complete_test.go
@@ -1,45 +1,87 @@
package test

import (
"math/rand"
"strconv"
"regexp"
"strings"
"testing"
"time"

"github.com/gruntwork-io/terratest/modules/random"
"github.com/gruntwork-io/terratest/modules/terraform"
testStructure "github.com/gruntwork-io/terratest/modules/test-structure"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/util/runtime"
)

// Test the Terraform module in examples/complete using Terratest.
func TestExamplesComplete(t *testing.T) {
t.Parallel()

rand.Seed(time.Now().UnixNano())
randID := strconv.Itoa(rand.Intn(100000))
randID := strings.ToLower(random.UniqueId())
attributes := []string{randID}

rootFolder := "../../"
terraformFolderRelativeToRoot := "examples/complete"
varFiles := []string{"fixtures.us-east-2.tfvars"}

tempTestFolder := testStructure.CopyTerraformFolderToTemp(t, rootFolder, terraformFolderRelativeToRoot)

terraformOptions := &terraform.Options{
// The path to where our Terraform code is located
TerraformDir: "../../examples/complete",
TerraformDir: tempTestFolder,
Upgrade: true,
// Variables to pass to our Terraform code using -var-file options
VarFiles: []string{"fixtures.us-east-2.tfvars"},
// We always include a random attribute so that parallel tests
// and AWS resources do not interfere with each other
VarFiles: varFiles,
Vars: map[string]interface{}{
"attributes": attributes,
},
}

// At the end of the test, run `terraform destroy` to clean up any resources that were created
defer terraform.Destroy(t, terraformOptions)
defer cleanup(t, terraformOptions, tempTestFolder)

// If the Go runtime crashes, run `terraform destroy` to clean up any resources that were created
defer runtime.HandleCrash(func(i interface{}) {
cleanup(t, terraformOptions, tempTestFolder)
})

// This will run `terraform init` and `terraform apply` and fail the test if there are any errors
terraform.InitAndApply(t, terraformOptions)

lambdaFunctionName := terraform.Output(t, terraformOptions, "lambda_forwarder_log_function_name")
// Verify we're getting back the outputs we expect
//assert.Contains(t, lambdaRdsArn, expectedlambdaRdsArn)

assert.Equal(t, "eg-ue2-test-datadog-lambda-forwarder-"+randID+"-logs", lambdaFunctionName)
}

func TestExamplesCompleteDisabled(t *testing.T) {
t.Parallel()
randID := strings.ToLower(random.UniqueId())
attributes := []string{randID}

rootFolder := "../../"
terraformFolderRelativeToRoot := "examples/complete"
varFiles := []string{"fixtures.us-east-2.tfvars"}

tempTestFolder := testStructure.CopyTerraformFolderToTemp(t, rootFolder, terraformFolderRelativeToRoot)

terraformOptions := &terraform.Options{
// The path to where our Terraform code is located
TerraformDir: tempTestFolder,
Upgrade: true,
// Variables to pass to our Terraform code using -var-file options
VarFiles: varFiles,
Vars: map[string]interface{}{
"attributes": attributes,
"enabled": false,
},
}

// At the end of the test, run `terraform destroy` to clean up any resources that were created
defer cleanup(t, terraformOptions, tempTestFolder)

// This will run `terraform init` and `terraform apply` and fail the test if there are any errors
results := terraform.InitAndApply(t, terraformOptions)

// Should complete successfully without creating or changing any resources.
// Extract the "Resources:" section of the output to make the error message more readable.
re := regexp.MustCompile(`Resources: [^.]+\.`)
match := re.FindString(results)
assert.Equal(t, "Resources: 0 added, 0 changed, 0 destroyed.", match, "Re-applying the same configuration should not change any resources")
}