diff --git a/.changelog/10539.txt b/.changelog/10539.txt new file mode 100644 index 00000000000..0b2a3ff42e8 --- /dev/null +++ b/.changelog/10539.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_cloudformation_stack: Avoid conflicts with `on_failure` and `disable_rollback` +``` diff --git a/.changelog/11936.txt b/.changelog/11936.txt new file mode 100644 index 00000000000..0657ce4442c --- /dev/null +++ b/.changelog/11936.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_amplify_backend_environment +``` \ No newline at end of file diff --git a/.changelog/11937.txt b/.changelog/11937.txt new file mode 100644 index 00000000000..9da502923d7 --- /dev/null +++ b/.changelog/11937.txt @@ -0,0 +1,7 @@ +```release-note:new-resource +aws_amplify_branch +``` + +```release-note:bug +resource/aws_amplify_app: Mark the `enable_performance_mode` argument in the `auto_branch_creation_config` configuration block as `ForceNew` +``` \ No newline at end of file diff --git a/.changelog/11938.txt b/.changelog/11938.txt new file mode 100644 index 00000000000..29730594689 --- /dev/null +++ b/.changelog/11938.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_amplify_domain_association +``` \ No newline at end of file diff --git a/.changelog/11939.txt b/.changelog/11939.txt new file mode 100644 index 00000000000..7555c387a8c --- /dev/null +++ b/.changelog/11939.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_amplify_webhook +``` \ No newline at end of file diff --git a/.changelog/15966.txt b/.changelog/15966.txt new file mode 100644 index 00000000000..feafab97221 --- /dev/null +++ b/.changelog/15966.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_amplify_app +``` \ No newline at end of file diff --git a/.changelog/16049.txt b/.changelog/16049.txt new file mode 100644 index 00000000000..efd40e9764c --- /dev/null +++ b/.changelog/16049.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_cloudfront_distribution: Add `connection_attempts`, 
`connection_timeout`, and `origin_shield`. +``` diff --git a/.changelog/17571.txt b/.changelog/17571.txt new file mode 100644 index 00000000000..af769263bcc --- /dev/null +++ b/.changelog/17571.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_msk_configuration: `kafka_versions` argument is optional +``` \ No newline at end of file diff --git a/.changelog/17573.txt b/.changelog/17573.txt new file mode 100644 index 00000000000..3532b0a8319 --- /dev/null +++ b/.changelog/17573.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_vpn_connection: Allow `local_ipv4_network_cidr`, `remote_ipv4_network_cidr`, `local_ipv6_network_cidr`, and `remote_ipv6_network_cidr` to be CIDRs of any size +``` diff --git a/.changelog/18905.txt b/.changelog/18905.txt new file mode 100644 index 00000000000..69e0051652b --- /dev/null +++ b/.changelog/18905.txt @@ -0,0 +1,11 @@ +```release-note:new-resource +aws_cloudwatch_event_connection +``` + +```release-note:new-resource +aws_cloudwatch_event_api_destination +``` + +```release-note:new-data-source +aws_cloudwatch_event_connection +``` \ No newline at end of file diff --git a/.changelog/19100.txt b/.changelog/19100.txt new file mode 100644 index 00000000000..471cee52e86 --- /dev/null +++ b/.changelog/19100.txt @@ -0,0 +1,11 @@ +```release-note:new-resource +aws_schemas_discoverer +``` + +```release-note:new-resource +aws_schemas_registry +``` + +```release-note:new-resource +aws_schemas_schema +``` \ No newline at end of file diff --git a/.changelog/19316.txt b/.changelog/19316.txt new file mode 100644 index 00000000000..e29d6221bf7 --- /dev/null +++ b/.changelog/19316.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_servicecatalog_provisioning_artifact +``` diff --git a/.changelog/19375.txt b/.changelog/19375.txt new file mode 100644 index 00000000000..57e09778192 --- /dev/null +++ b/.changelog/19375.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_glue_connection: `connection_properties` are optional +``` \ 
No newline at end of file diff --git a/.changelog/19391.txt b/.changelog/19391.txt new file mode 100644 index 00000000000..113f81bf3fc --- /dev/null +++ b/.changelog/19391.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_default_tags +``` \ No newline at end of file diff --git a/.changelog/19404.txt b/.changelog/19404.txt new file mode 100644 index 00000000000..d48647b4860 --- /dev/null +++ b/.changelog/19404.txt @@ -0,0 +1,11 @@ +```release-note:enhancement +data-source/aws_msk_cluster: Add `bootstrap_brokers_sasl_iam` attribute +``` + +```release-note:enhancement +resource/aws_msk_cluster: Add `bootstrap_brokers_sasl_iam` attribute +``` + +```release-note:enhancement +resource/aws_msk_cluster: Add `iam` argument to `client_authentication.sasl` configuration block +``` \ No newline at end of file diff --git a/.changelog/19415.txt b/.changelog/19415.txt new file mode 100644 index 00000000000..6ff2032e1a3 --- /dev/null +++ b/.changelog/19415.txt @@ -0,0 +1,15 @@ +```release-note:enhancement +resource/aws_wafv2_web_acl: Add `custom_request_handling` to `allow` and `count` default action and rule actions. +``` + +```release-note:enhancement +resource/aws_wafv2_web_acl: Add `custom_response` to `block` default action and rule actions. +``` + +```release-note:enhancement +resource/aws_wafv2_rule_group: Add `custom_request_handling` to `allow` and `count` rule actions. +``` + +```release-note:enhancement +resource/aws_wafv2_rule_group: Add `custom_response` to `block` rule actions. 
+``` diff --git a/.changelog/19425.txt b/.changelog/19425.txt new file mode 100644 index 00000000000..270d361e8a9 --- /dev/null +++ b/.changelog/19425.txt @@ -0,0 +1,15 @@ +```release-note:enhancement +resource/aws_lambda_event_source_mapping: Add `self_managed_event_source` and `source_access_configuration` arguments to support self-managed Apache Kafka event sources +``` + +```release-note:enhancement +resource/aws_lambda_event_source_mapping: Add `tumbling_window_in_seconds` argument to support AWS Lambda streaming analytics calculations +``` + +```release-note:enhancement +resource/aws_lambda_event_source_mapping: Add `function_response_types` argument to support AWS Lambda checkpointing +``` + +```release-note:enhancement +resource/aws_lambda_event_source_mapping: Add `queues` argument to support Amazon MQ for Apache ActiveMQ event sources +``` \ No newline at end of file diff --git a/.changelog/19448.txt b/.changelog/19448.txt new file mode 100644 index 00000000000..6b7ca150797 --- /dev/null +++ b/.changelog/19448.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_servicecatalog_tag_option_resource_association +``` \ No newline at end of file diff --git a/.changelog/19452.txt b/.changelog/19452.txt new file mode 100644 index 00000000000..970016a5d0e --- /dev/null +++ b/.changelog/19452.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_servicecatalog_budget_resource_association +``` \ No newline at end of file diff --git a/.changelog/19454.txt b/.changelog/19454.txt new file mode 100644 index 00000000000..6ff5b428703 --- /dev/null +++ b/.changelog/19454.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_eks_addon: Use `service_account_role_arn`, if set, on updates +``` \ No newline at end of file diff --git a/.changelog/19470.txt b/.changelog/19470.txt new file mode 100644 index 00000000000..6d33e373969 --- /dev/null +++ b/.changelog/19470.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_servicecatalog_principal_portfolio_association 
+``` \ No newline at end of file diff --git a/.changelog/19471.txt b/.changelog/19471.txt new file mode 100644 index 00000000000..820b41723fc --- /dev/null +++ b/.changelog/19471.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_apprunner_service: Correctly configure `authentication_configuration`, `code_configuration`, and `image_configuration` nested arguments in API requests +``` diff --git a/.changelog/19482.txt b/.changelog/19482.txt new file mode 100644 index 00000000000..f43aebc5884 --- /dev/null +++ b/.changelog/19482.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_eks_node_group: Add `taint` argument +``` \ No newline at end of file diff --git a/.changelog/19483.txt b/.changelog/19483.txt new file mode 100644 index 00000000000..be4cf576068 --- /dev/null +++ b/.changelog/19483.txt @@ -0,0 +1,7 @@ +```release-note:bug +resource/aws_apprunner_service: Handle asynchronous IAM eventual consistency error on creation +``` + +```release-note:bug +resource/aws_apprunner_service: Suppress `instance_configuration` `cpu` and `memory` differences +``` diff --git a/.changelog/19492.txt b/.changelog/19492.txt new file mode 100644 index 00000000000..5b3b001b89d --- /dev/null +++ b/.changelog/19492.txt @@ -0,0 +1,3 @@ +```release-note:bug +data-source/aws_launch_template: Add `interface_type` to `network_interfaces` attribute +``` \ No newline at end of file diff --git a/.changelog/19496.txt b/.changelog/19496.txt new file mode 100644 index 00000000000..5bbf54b65d5 --- /dev/null +++ b/.changelog/19496.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_lb_listener_rule: Allow blank string for `action.redirect.query` nested argument +``` \ No newline at end of file diff --git a/.changelog/19499.txt b/.changelog/19499.txt new file mode 100644 index 00000000000..92a274598bb --- /dev/null +++ b/.changelog/19499.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_servicecatalog_constraint +``` \ No newline at end of file diff --git 
a/.changelog/19502.txt b/.changelog/19502.txt new file mode 100644 index 00000000000..2176316b46d --- /dev/null +++ b/.changelog/19502.txt @@ -0,0 +1,3 @@ +```release-note:bug +data-source/aws_mq_broker: Correct type for `logs.audit` attribute +``` \ No newline at end of file diff --git a/.changelog/19505.txt b/.changelog/19505.txt new file mode 100644 index 00000000000..89dbec57814 --- /dev/null +++ b/.changelog/19505.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_batch_job_definition: Don't crash when setting `timeout.attempt_duration_seconds` to `null` +``` \ No newline at end of file diff --git a/.changelog/19515.txt b/.changelog/19515.txt new file mode 100644 index 00000000000..f41d46513e5 --- /dev/null +++ b/.changelog/19515.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_synthetics_canary: Change minimum `timeout_in_seconds` in `run_config` from `60` to `3` +``` diff --git a/.changelog/19517.txt b/.changelog/19517.txt new file mode 100644 index 00000000000..21ad4c42fe8 --- /dev/null +++ b/.changelog/19517.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_ec2_managed_prefix_list: Fix crash with multiple description-only updates +``` \ No newline at end of file diff --git a/.changelog/19528.txt b/.changelog/19528.txt new file mode 100644 index 00000000000..95babeb7b16 --- /dev/null +++ b/.changelog/19528.txt @@ -0,0 +1,11 @@ +```release-note:enhancement +resource/aws_sns_topic: Add `firehose_success_feedback_role_arn`, `firehose_success_feedback_sample_rate` and `firehose_failure_feedback_role_arn` arguments. +``` + +```release-note:enhancement +resource/aws_sns_topic: Add plan time validation for `application_success_feedback_role_arn`, `application_failure_feedback_role_arn`, `http_success_feedback_role_arn`, `http_failure_feedback_role_arn`, `lambda_success_feedback_role_arn`, `lambda_failure_feedback_role_arn`, `sqs_success_feedback_role_arn`, `sqs_failure_feedback_role_arn`. 
+``` + +```release-note:enhancement +resource/aws_sns_topic: Add `owner` attribute. +``` \ No newline at end of file diff --git a/.changelog/19535.txt b/.changelog/19535.txt new file mode 100644 index 00000000000..50c62e684b8 --- /dev/null +++ b/.changelog/19535.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_ec2_capacity_reservation: Add `outpost_arn` argument +``` diff --git a/.changelog/19551.txt b/.changelog/19551.txt new file mode 100644 index 00000000000..622165ffd20 --- /dev/null +++ b/.changelog/19551.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_elasticache_parameter_group: Add `tags` argument and `arn` and `tags_all` attributes +``` \ No newline at end of file diff --git a/.changelog/19557.txt b/.changelog/19557.txt new file mode 100644 index 00000000000..f5d4696408b --- /dev/null +++ b/.changelog/19557.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_ecs_service: Add support for ECS Anywhere with the `launch_type` `EXTERNAL` +``` diff --git a/.changelog/19559.txt b/.changelog/19559.txt new file mode 100644 index 00000000000..92c8b1a2138 --- /dev/null +++ b/.changelog/19559.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_cloudtrail: Add `AWS::DynamoDB::Table` as an option for `event_selector`.`data_resource`.`type` +``` \ No newline at end of file diff --git a/.changelog/19568.txt b/.changelog/19568.txt new file mode 100644 index 00000000000..b369c83530b --- /dev/null +++ b/.changelog/19568.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_fsx_lustre_filesystem: Allow updating `storage_capacity`. +``` \ No newline at end of file diff --git a/.changelog/19571.txt b/.changelog/19571.txt new file mode 100644 index 00000000000..b2ba1cf5694 --- /dev/null +++ b/.changelog/19571.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_cloudwatch_metric_alarm: Add plan time validation to `metric_query.metric.stat`. 
+``` \ No newline at end of file diff --git a/.changelog/19574.txt b/.changelog/19574.txt new file mode 100644 index 00000000000..be8af655535 --- /dev/null +++ b/.changelog/19574.txt @@ -0,0 +1,7 @@ +```release-note:enhancement +resource/aws_devicefarm_project: Add `default_job_timeout_minutes` and `tags` arguments +``` + +```release-note:enhancement +resource/aws_devicefarm_project: Add plan time validation for `name` +``` \ No newline at end of file diff --git a/.changelog/19578.txt b/.changelog/19578.txt new file mode 100644 index 00000000000..c02aad104ec --- /dev/null +++ b/.changelog/19578.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_acmpca_certificate_authority: Add `s3_object_acl` argument to `revocation_configuration.crl_configuration` configuration block +``` \ No newline at end of file diff --git a/.changelog/19594.txt b/.changelog/19594.txt new file mode 100644 index 00000000000..3c8b2b5ca13 --- /dev/null +++ b/.changelog/19594.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_cloudwatch_event_api_destination: Reduce the maximum allowed value for the `invocation_rate_limit_per_second` argument to `300` +``` diff --git a/.changelog/19606.txt b/.changelog/19606.txt new file mode 100644 index 00000000000..a41995893b1 --- /dev/null +++ b/.changelog/19606.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_iam_access_key: Fix `status` not defaulting to `Active` +``` diff --git a/.changelog/19615.txt b/.changelog/19615.txt new file mode 100644 index 00000000000..109278c5337 --- /dev/null +++ b/.changelog/19615.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_elasticache_cluster: Fix provider-level `default_tags` support for resource +``` diff --git a/.changelog/19625.txt b/.changelog/19625.txt new file mode 100644 index 00000000000..4c3e1cc1dda --- /dev/null +++ b/.changelog/19625.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_cloudwatch_log_metric_filter: Add `dimensions` argument to `metric_transformation` 
configuration block +``` \ No newline at end of file diff --git a/.changelog/19632.txt b/.changelog/19632.txt new file mode 100644 index 00000000000..12cd701def3 --- /dev/null +++ b/.changelog/19632.txt @@ -0,0 +1,7 @@ +```release-note:enhancement +resource/aws_launch_configuration: Add `throughput` argument to `ebs_block_device` and `root_block_device` configuration blocks to support GP3 volumes +``` + +```release-note:enhancement +data-source/aws_launch_configuration: Add `throughput` attribute to `ebs_block_device` and `root_block_device` configuration blocks to support GP3 volumes +``` \ No newline at end of file diff --git a/.changelog/19654.txt b/.changelog/19654.txt new file mode 100644 index 00000000000..3e8303887fb --- /dev/null +++ b/.changelog/19654.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_cloudwatch_event_api_destination: Fix crash on resource update +``` \ No newline at end of file diff --git a/.github/labeler-issue-triage.yml b/.github/labeler-issue-triage.yml index 8b506731717..5aa0a23b7fb 100644 --- a/.github/labeler-issue-triage.yml +++ b/.github/labeler-issue-triage.yml @@ -14,3 +14,353 @@ bug: - "(doesn't support update|failed to satisfy constraint: Member must not be null|Invalid address to set|panic:|produced an (invalid|unexpected) new value|Provider produced inconsistent (final plan|result after apply))" crash: - 'panic:' +# +# AWS Per-Service Labeling +# +# Catch the following in issues to prevent false positives: +# *aws_XXX +# * aws_XXX +# * `aws_XXX` +# -aws_XXX +# - aws_XXX +# - `aws_XXX` +# data aws_XXX +# data "aws_XXX" +# resource aws_XXX +# resource "aws_XXX" +service/accessanalyzer: + - '((\*|-) ?`?|(data|resource) "?)aws_accessanalyzer_' +service/acm: + - '((\*|-) ?`?|(data|resource) "?)aws_acm_' +service/acmpca: + - '((\*|-) ?`?|(data|resource) "?)aws_acmpca_' +service/alexaforbusiness: + - '((\*|-) ?`?|(data|resource) "?)aws_alexaforbusiness_' +service/amplify: + - '((\*|-) ?`?|(data|resource) "?)aws_amplify_' 
+service/apigateway: + - '((\*|-) ?`?|(data|resource) "?)aws_api_gateway_' +service/apigatewayv2: + - '((\*|-) ?`?|(data|resource) "?)aws_apigatewayv2_' +service/appconfig: + - '((\*|-) ?`?|(data|resource) "?)aws_appconfig_' +service/applicationautoscaling: + - '((\*|-) ?`?|(data|resource) "?)aws_appautoscaling_' +service/applicationdiscoveryservice: + - '((\*|-) ?`?|(data|resource) "?)aws_applicationdiscoveryservice_' +service/applicationinsights: + - '((\*|-) ?`?|(data|resource) "?)aws_applicationinsights_' +service/appmesh: + - '((\*|-) ?`?|(data|resource) "?)aws_appmesh_' +service/apprunner: + - '((\*|-) ?`?|(data|resource) "?)aws_apprunner_' +service/appstream: + - '((\*|-) ?`?|(data|resource) "?)aws_appstream_' +service/appsync: + - '((\*|-) ?`?|(data|resource) "?)aws_appsync_' +service/athena: + - '((\*|-) ?`?|(data|resource) "?)aws_athena_' +service/auditmanager: + - '((\*|-) ?`?|(data|resource) "?)aws_auditmanager_' +service/autoscaling: + - '((\*|-) ?`?|(data|resource) "?)aws_(autoscaling_|launch_configuration)' +service/autoscalingplans: + - '((\*|-) ?`?|(data|resource) "?)aws_autoscalingplans_' +service/backup: + - '((\*|-) ?`?|(data|resource) "?)aws_backup_' +service/batch: + - '((\*|-) ?`?|(data|resource) "?)aws_batch_' +service/budgets: + - '((\*|-) ?`?|(data|resource) "?)aws_budgets_' +service/chime: + - '((\*|-) ?`?|(data|resource) "?)aws_chime_' +service/cloud9: + - '((\*|-) ?`?|(data|resource) "?)aws_cloud9_' +service/clouddirectory: + - '((\*|-) ?`?|(data|resource) "?)aws_clouddirectory_' +service/cloudformation: + - '((\*|-) ?`?|(data|resource) "?)aws_cloudformation_' +service/cloudfront: + - '((\*|-) ?`?|(data|resource) "?)aws_cloudfront_' +service/cloudhsmv2: + - '((\*|-) ?`?|(data|resource) "?)aws_cloudhsm_v2_' +service/cloudsearch: + - '((\*|-) ?`?|(data|resource) "?)aws_cloudsearch_' +service/cloudtrail: + - '((\*|-) ?`?|(data|resource) "?)aws_cloudtrail' +service/cloudwatch: + - '((\*|-) ?`?|(data|resource) 
"?)aws_cloudwatch_(?!(event_|log_|query_))' +service/cloudwatchevents: + - '((\*|-) ?`?|(data|resource) "?)aws_cloudwatch_event_' +service/cloudwatchlogs: + - '((\*|-) ?`?|(data|resource) "?)aws_cloudwatch_(log_|query_)' +service/codeartifact: + - '((\*|-) ?`?|(data|resource) "?)aws_codeartifact_' +service/codebuild: + - '((\*|-) ?`?|(data|resource) "?)aws_codebuild_' +service/codecommit: + - '((\*|-) ?`?|(data|resource) "?)aws_codecommit_' +service/codedeploy: + - '((\*|-) ?`?|(data|resource) "?)aws_codedeploy_' +service/codepipeline: + - '((\*|-) ?`?|(data|resource) "?)aws_codepipeline' +service/codestar: + - '((\*|-) ?`?|(data|resource) "?)aws_codestar_' +service/codestarconnections: + - '((\*|-) ?`?|(data|resource) "?)aws_codestarconnections_' +service/codestarnotifications: + - '((\*|-) ?`?|(data|resource) "?)aws_codestarnotifications_' +service/cognito: + - '((\*|-) ?`?|(data|resource) "?)aws_cognito_' +service/configservice: + - '((\*|-) ?`?|(data|resource) "?)aws_config_' +service/connect: + - '((\*|-) ?`?|(data|resource) "?)aws_connect_' +service/databasemigrationservice: + - '((\*|-) ?`?|(data|resource) "?)aws_dms_' +service/dataexchange: + - '((\*|-) ?`?|(data|resource) "?)aws_dataexchange_' +service/datapipeline: + - '((\*|-) ?`?|(data|resource) "?)aws_datapipeline_' +service/datasync: + - '((\*|-) ?`?|(data|resource) "?)aws_datasync_' +service/dax: + - '((\*|-) ?`?|(data|resource) "?)aws_dax_' +service/detective: + - '((\*|-) ?`?|(data|resource) "?)aws_detective' +service/devicefarm: + - '((\*|-) ?`?|(data|resource) "?)aws_devicefarm_' +service/directconnect: + - '((\*|-) ?`?|(data|resource) "?)aws_dx_' +service/directoryservice: + - '((\*|-) ?`?|(data|resource) "?)aws_directory_service_' +service/dlm: + - '((\*|-) ?`?|(data|resource) "?)aws_dlm_' +service/docdb: + - '((\*|-) ?`?|(data|resource) "?)aws_docdb_' +service/dynamodb: + - '((\*|-) ?`?|(data|resource) "?)aws_dynamodb_' +service/ec2: + - '((\*|-) ?`?|(data|resource) 
"?)aws_(ami|availability_zone|customer_gateway|(default_)?(network_acl|route_table|security_group|subnet|vpc)|ebs_|ec2_|egress_only_internet_gateway|eip|flow_log|instance|internet_gateway|key_pair|launch_template|main_route_table_association|network_interface|placement_group|prefix_list|spot|route(\"|`|$)|vpn_|volume_attachment)' +service/ecr: + - '((\*|-) ?`?|(data|resource) "?)aws_ecr_' +service/ecrpublic: + - '((\*|-) ?`?|(data|resource) "?)aws_ecrpublic_' +service/ecs: + - '((\*|-) ?`?|(data|resource) "?)aws_ecs_' +service/efs: + - '((\*|-) ?`?|(data|resource) "?)aws_efs_' +service/eks: + - '((\*|-) ?`?|(data|resource) "?)aws_eks_' +elastic-transcoder: + - '((\*|-) ?`?|(data|resource) "?)aws_elastictranscoder_' +service/elasticache: + - '((\*|-) ?`?|(data|resource) "?)aws_elasticache_' +service/elasticbeanstalk: + - '((\*|-) ?`?|(data|resource) "?)aws_elastic_beanstalk_' +service/elasticsearch: + - '((\*|-) ?`?|(data|resource) "?)aws_elasticsearch_' +service/elb: + - '((\*|-) ?`?|(data|resource) "?)aws_(app_cookie_stickiness_policy|elb|lb_cookie_stickiness_policy|lb_ssl_negotiation_policy|load_balancer_|proxy_protocol_policy)' +service/elbv2: + - '((\*|-) ?`?|(data|resource) "?)aws_(a)?lb(\"|`|_listener|_target_group|$)' +service/emr: + - '((\*|-) ?`?|(data|resource) "?)aws_emr_' +service/emrcontainers: + - '((\*|-) ?`?|(data|resource) "?)aws_emrcontainers_' +service/eventbridge: + - '((\*|-) ?`?|(data|resource) "?)aws_cloudwatch_event_' +service/firehose: + - '((\*|-) ?`?|(data|resource) "?)aws_kinesis_firehose_' +service/fms: + - '((\*|-) ?`?|(data|resource) "?)aws_fms_' +service/forecast: + - '((\*|-) ?`?|(data|resource) "?)aws_forecast_' +service/fsx: + - '((\*|-) ?`?|(data|resource) "?)aws_fsx_' +service/gamelift: + - '((\*|-) ?`?|(data|resource) "?)aws_gamelift_' +service/glacier: + - '((\*|-) ?`?|(data|resource) "?)aws_glacier_' +service/globalaccelerator: + - '((\*|-) ?`?|(data|resource) "?)aws_globalaccelerator_' +service/glue: + - '((\*|-) 
?`?|(data|resource) "?)aws_glue_' +service/greengrass: + - '((\*|-) ?`?|(data|resource) "?)aws_greengrass_' +service/guardduty: + - '((\*|-) ?`?|(data|resource) "?)aws_guardduty_' +service/iam: + - '((\*|-) ?`?|(data|resource) "?)aws_iam_' +service/identitystore: + - '((\*|-) ?`?|(data|resource) "?)aws_identitystore_' +service/imagebuilder: + - '((\*|-) ?`?|(data|resource) "?)aws_imagebuilder_' +service/inspector: + - '((\*|-) ?`?|(data|resource) "?)aws_inspector_' +service/iot: + - '((\*|-) ?`?|(data|resource) "?)aws_iot_' +service/iotanalytics: + - '((\*|-) ?`?|(data|resource) "?)aws_iotanalytics_' +service/iotevents: + - '((\*|-) ?`?|(data|resource) "?)aws_iotevents_' +service/kafka: + - '((\*|-) ?`?|(data|resource) "?)aws_msk_' +service/kinesis: + - '((\*|-) ?`?|(data|resource) "?)aws_kinesis_stream' +service/kinesisanalytics: + - '((\*|-) ?`?|(data|resource) "?)aws_kinesis_analytics_' +service/kinesisanalyticsv2: + - '((\*|-) ?`?|(data|resource) "?)aws_kinesisanalyticsv2_' +service/kms: + - '((\*|-) ?`?|(data|resource) "?)aws_kms_' +service/lakeformation: + - '((\*|-) ?`?|(data|resource) "?)aws_lakeformation_' +service/lambda: + - '((\*|-) ?`?|(data|resource) "?)aws_lambda_' +service/lexmodelbuildingservice: + - '((\*|-) ?`?|(data|resource) "?)aws_lex_' +service/licensemanager: + - '((\*|-) ?`?|(data|resource) "?)aws_licensemanager_' +service/lightsail: + - '((\*|-) ?`?|(data|resource) "?)aws_lightsail_' +service/location: + - '((\*|-) ?`?|(data|resource) "?)aws_location_' +service/machinelearning: + - '((\*|-) ?`?|(data|resource) "?)aws_machinelearning_' +service/macie: + - '((\*|-) ?`?|(data|resource) "?)aws_macie_' +service/macie2: + - '((\*|-) ?`?|(data|resource) "?)aws_macie2_' +service/marketplacecatalog: + - '((\*|-) ?`?|(data|resource) "?)aws_marketplace_catalog_' +service/mediaconnect: + - '((\*|-) ?`?|(data|resource) "?)aws_media_connect_' +service/mediaconvert: + - '((\*|-) ?`?|(data|resource) "?)aws_media_convert_' +service/medialive: + - '((\*|-) 
?`?|(data|resource) "?)aws_media_live_' +service/mediapackage: + - '((\*|-) ?`?|(data|resource) "?)aws_media_package_' +service/mediastore: + - '((\*|-) ?`?|(data|resource) "?)aws_media_store_' +service/mediatailor: + - '((\*|-) ?`?|(data|resource) "?)aws_media_tailor_' +service/mobile: + - '((\*|-) ?`?|(data|resource) "?)aws_mobile_' +service/mq: + - '((\*|-) ?`?|(data|resource) "?)aws_mq_' +service/mwaa: + - '((\*|-) ?`?|(data|resource) "?)aws_mwaa_' +service/neptune: + - '((\*|-) ?`?|(data|resource) "?)aws_neptune_' +service/networkfirewall: + - '((\*|-) ?`?|(data|resource) "?)aws_networkfirewall_' +service/networkmanager: + - '((\*|-) ?`?|(data|resource) "?)aws_networkmanager_' +service/opsworks: + - '((\*|-) ?`?|(data|resource) "?)aws_opsworks_' +service/organizations: + - '((\*|-) ?`?|(data|resource) "?)aws_organizations_' +service/outposts: + - '((\*|-) ?`?|(data|resource) "?)aws_outposts_' +service/personalize: + - '((\*|-) ?`?|(data|resource) "?)aws_personalize_' +service/pinpoint: + - '((\*|-) ?`?|(data|resource) "?)aws_pinpoint_' +service/polly: + - '((\*|-) ?`?|(data|resource) "?)aws_polly_' +service/pricing: + - '((\*|-) ?`?|(data|resource) "?)aws_pricing_' +service/prometheusservice: + - '((\*|-) ?`?|(data|resource) "?)aws_prometheus_' +service/qldb: + - '((\*|-) ?`?|(data|resource) "?)aws_qldb_' +service/quicksight: + - '((\*|-) ?`?|(data|resource) "?)aws_quicksight_' +service/ram: + - '((\*|-) ?`?|(data|resource) "?)aws_ram_' +service/rds: + - '((\*|-) ?`?|(data|resource) "?)aws_(db_|rds_)' +service/redshift: + - '((\*|-) ?`?|(data|resource) "?)aws_redshift_' +service/resourcegroups: + - '((\*|-) ?`?|(data|resource) "?)aws_resourcegroups_' +service/resourcegroupstaggingapi: + - '((\*|-) ?`?|(data|resource) "?)aws_resourcegroupstaggingapi_' +service/robomaker: + - '((\*|-) ?`?|(data|resource) "?)aws_robomaker_' +service/route53: + - '((\*|-) ?`?|(data|resource) "?)aws_route53_(?!resolver_)' +service/route53domains: + - '((\*|-) ?`?|(data|resource) 
"?)aws_route53domains_' +service/route53resolver: + - '((\*|-) ?`?|(data|resource) "?)aws_route53_resolver_' +service/s3: + - '((\*|-) ?`?|(data|resource) "?)aws_(canonical_user_id|s3_bucket|s3_object)' +service/s3control: + - '((\*|-) ?`?|(data|resource) "?)aws_(s3_account_|s3control_)' +service/s3outposts: + - '((\*|-) ?`?|(data|resource) "?)aws_s3outposts_' +service/sagemaker: + - '((\*|-) ?`?|(data|resource) "?)aws_sagemaker_' +service/schemas: + - '((\*|-) ?`?|(data|resource) "?)aws_schemas_' +service/secretsmanager: + - '((\*|-) ?`?|(data|resource) "?)aws_secretsmanager_' +service/securityhub: + - '((\*|-) ?`?|(data|resource) "?)aws_securityhub_' +service/serverlessapplicationrepository: + - '((\*|-) ?`?|(data|resource) "?)aws_serverlessapplicationrepository_' +service/servicecatalog: + - '((\*|-) ?`?|(data|resource) "?)aws_servicecatalog_' +service/servicediscovery: + - '((\*|-) ?`?|(data|resource) "?)aws_service_discovery_' +service/servicequotas: + - '((\*|-) ?`?|(data|resource) "?)aws_servicequotas_' +service/ses: + - '((\*|-) ?`?|(data|resource) "?)aws_ses_' +service/sfn: + - '((\*|-) ?`?|(data|resource) "?)aws_sfn_' +service/shield: + - '((\*|-) ?`?|(data|resource) "?)aws_shield_' +service/signer: + - '((\*|-) ?`?|(data|resource) "?)aws_signer_' +service/simpledb: + - '((\*|-) ?`?|(data|resource) "?)aws_simpledb_' +service/snowball: + - '((\*|-) ?`?|(data|resource) "?)aws_snowball_' +service/sns: + - '((\*|-) ?`?|(data|resource) "?)aws_sns_' +service/sqs: + - '((\*|-) ?`?|(data|resource) "?)aws_sqs_' +service/ssm: + - '((\*|-) ?`?|(data|resource) "?)aws_ssm_' +service/ssoadmin: + - '((\*|-) ?`?|(data|resource) "?)aws_ssoadmin_' +service/storagegateway: + - '((\*|-) ?`?|(data|resource) "?)aws_storagegateway_' +service/sts: + - '((\*|-) ?`?|(data|resource) "?)aws_caller_identity' +service/swf: + - '((\*|-) ?`?|(data|resource) "?)aws_swf_' +service/synthetics: + - '((\*|-) ?`?|(data|resource) "?)aws_synthetics_' +service/timestreamwrite: + - '((\*|-) 
?`?|(data|resource) "?)aws_timestreamwrite_' +service/transfer: + - '((\*|-) ?`?|(data|resource) "?)aws_transfer_' +service/waf: + - '((\*|-) ?`?|(data|resource) "?)aws_waf(regional)?_' +service/wafv2: + - '((\*|-) ?`?|(data|resource) "?)aws_wafv2_' +service/workdocs: + - '((\*|-) ?`?|(data|resource) "?)aws_workdocs_' +service/worklink: + - '((\*|-) ?`?|(data|resource) "?)aws_worklink_' +service/workmail: + - '((\*|-) ?`?|(data|resource) "?)aws_workmail_' +service/workspaces: + - '((\*|-) ?`?|(data|resource) "?)aws_workspaces_' +service/xray: + - '((\*|-) ?`?|(data|resource) "?)aws_xray_' diff --git a/.github/labeler-pr-triage.yml b/.github/labeler-pr-triage.yml new file mode 100644 index 00000000000..7c03fe24038 --- /dev/null +++ b/.github/labeler-pr-triage.yml @@ -0,0 +1,822 @@ +dependencies: + - '.github/dependabot.yml' +documentation: + - 'docs/**/*' + - 'website/**/*' + - '*.md' +examples: + - 'examples/**/*' +provider: + - '*.md' + - '.github/**/*' + - '.gitignore' + - '.go-version' + - 'aws/auth_helpers.go' + - 'aws/awserr.go' + - 'aws/config.go' + - 'aws/*_aws_arn*' + - 'aws/*_aws_ip_ranges*' + - 'aws/*_aws_partition*' + - 'aws/*_aws_region*' + - 'aws/internal/flatmap/**/*' + - 'aws/internal/keyvaluetags/**/*' + - 'aws/internal/naming/**/*' + - 'aws/provider.go' + - 'aws/utils.go' + - 'docs/*.md' + - 'docs/contributing/**/*' + - 'GNUmakefile' + - 'infrastructure/**/*' + - 'main.go' + - 'website/docs/index.html.markdown' + - 'website/**/arn*' + - 'website/**/ip_ranges*' + - 'website/**/partition*' + - 'website/**/region*' +service/accessanalyzer: + - 'aws/internal/service/accessanalyzer/**/*' + - '**/*_accessanalyzer_*' + - '**/accessanalyzer_*' +service/acm: + - 'aws/internal/service/acm/**/*' + - '**/*_acm_*' + - '**/acm_*' +service/acmpca: + - 'aws/internal/service/acmpca/**/*' + - '**/*_acmpca_*' + - '**/acmpca_*' +service/alexaforbusiness: + - 'aws/internal/service/alexaforbusiness/**/*' + - '**/*_alexaforbusiness_*' + - '**/alexaforbusiness_*' 
+service/amplify: + - 'aws/internal/service/amplify/**/*' + - '**/*_amplify_*' + - '**/amplify_*' +service/apigateway: + - 'aws/internal/service/apigateway/**/*' + - '**/*_api_gateway_[^v][^2][^_]*' + - '**/*_api_gateway_vpc_link*' + - '**/api_gateway_[^v][^2][^_]*' + - '**/api_gateway_vpc_link*' +service/apigatewayv2: + - 'aws/internal/service/apigatewayv2/**/*' + - '**/*_api_gateway_v2_*' + - '**/*_apigatewayv2_*' + - '**/api_gateway_v2_*' + - '**/apigatewayv2_*' +service/appconfig: + - 'aws/internal/service/appconfig/**/*' + - '**/*_appconfig_*' + - '**/appconfig_*' +service/applicationautoscaling: + - 'aws/internal/service/applicationautoscaling/**/*' + - '**/*_appautoscaling_*' + - '**/appautoscaling_*' +service/applicationinsights: + - 'aws/internal/service/applicationinsights/**/*' + - '**/*_applicationinsights_*' + - '**/applicationinsights_*' +service/appmesh: + - 'aws/internal/service/appmesh/**/*' + - '**/*_appmesh_*' + - '**/appmesh_*' +service/apprunner: + - 'aws/internal/service/apprunner/**/*' + - '**/*_apprunner_*' + - '**/apprunner_*' +service/appstream: + - 'aws/internal/service/appstream/**/*' + - '**/*_appstream_*' + - '**/appstream_*' +service/appsync: + - 'aws/internal/service/appsync/**/*' + - '**/*_appsync_*' + - '**/appsync_*' +service/athena: + - 'aws/internal/service/athena/**/*' + - '**/*_athena_*' + - '**/athena_*' +service/auditmanager: + - 'aws/internal/service/auditmanager/**/*' + - '**/*_auditmanager_*' + - '**/auditmanager_*' +service/autoscaling: + - 'aws/internal/service/autoscaling/**/*' + - '**/*_autoscaling_*' + - '**/autoscaling_*' + - 'aws/*_aws_launch_configuration*' + - 'website/**/launch_configuration*' +service/autoscalingplans: + - 'aws/internal/service/autoscalingplans/**/*' + - '**/*_autoscalingplans_*' + - '**/autoscalingplans_*' +service/backup: + - 'aws/internal/service/backup/**/*' + - '**/*backup_*' + - '**/backup_*' +service/batch: + - 'aws/internal/service/batch/**/*' + - '**/*_batch_*' + - '**/batch_*' 
+service/budgets: + - 'aws/internal/service/budgets/**/*' + - '**/*_budgets_*' + - '**/budgets_*' +service/chime: + - 'aws/internal/service/chime/**/*' + - '**/*_chime_*' + - '**/chime_*' +service/cloud9: + - 'aws/internal/service/cloud9/**/*' + - '**/*_cloud9_*' + - '**/cloud9_*' +service/clouddirectory: + - 'aws/internal/service/clouddirectory/**/*' + - '**/*_clouddirectory_*' + - '**/clouddirectory_*' +service/cloudformation: + - 'aws/internal/service/cloudformation/**/*' + - '**/*_cloudformation_*' + - '**/cloudformation_*' +service/cloudfront: + - 'aws/internal/service/cloudfront/**/*' + - '**/*_cloudfront_*' + - '**/cloudfront_*' +service/cloudhsmv2: + - 'aws/internal/service/cloudhsmv2/**/*' + - '**/*_cloudhsm_v2_*' + - '**/cloudhsm_v2_*' +service/cloudsearch: + - 'aws/internal/service/cloudsearch/**/*' + - '**/*_cloudsearch_*' + - '**/cloudsearch_*' +service/cloudtrail: + - 'aws/internal/service/cloudtrail/**/*' + - '**/*_cloudtrail*' + - '**/cloudtrail*' +service/cloudwatch: + - 'aws/internal/service/cloudwatch/**/*' + - '**/*_cloudwatch_dashboard*' + - '**/*_cloudwatch_metric_alarm*' + - '**/cloudwatch_dashboard*' + - '**/cloudwatch_metric_alarm*' +service/cloudwatchevents: + - 'aws/internal/service/cloudwatchevents/**/*' + - '**/*_cloudwatch_event_*' + - '**/cloudwatch_event_*' +service/cloudwatchlogs: + - 'aws/internal/service/cloudwatchlogs/**/*' + - '**/*_cloudwatch_log_*' + - '**/cloudwatch_log_*' + - '**/*_cloudwatch_query_definition*' + - '**/cloudwatch_query_definition*' +service/codeartifact: + - 'aws/internal/service/codeartifact/**/*' + - '**/*_codeartifact_*' + - '**/codeartifact_*' +service/codebuild: + - 'aws/internal/service/codebuild/**/*' + - '**/*_codebuild_*' + - '**/codebuild_*' +service/codecommit: + - 'aws/internal/service/codecommit/**/*' + - '**/*_codecommit_*' + - '**/codecommit_*' +service/codedeploy: + - 'aws/internal/service/codedeploy/**/*' + - '**/*_codedeploy_*' + - '**/codedeploy_*' +service/codepipeline: + - 
'aws/internal/service/codepipeline/**/*' + - '**/*_codepipeline_*' + - '**/codepipeline_*' +service/codestar: + - 'aws/internal/service/codestar/**/*' + - '**/*_codestar_*' + - '**/codestar_*' +service/codestarconnections: + - 'aws/internal/service/codestarconnections/**/*' + - '**/*_codestarconnections_*' + - '**/codestarconnections_*' +service/codestarnotifications: + - 'aws/internal/service/codestarnotifications/**/*' + - '**/*_codestarnotifications_*' + - '**/codestarnotifications_*' +service/cognito: + - 'aws/internal/service/cognitoidentity/**/*' + - 'aws/internal/service/cognitoidentityprovider/**/*' + - '**/*_cognito_*' + - '**/cognito_*' +service/comprehend: + - 'aws/internal/service/comprehend/**/*' + - '**/*_comprehend_*' + - '**/comprehend_*' +service/configservice: + - 'aws/internal/service/configservice/**/*' + - 'aws/*_aws_config_*' + - 'website/**/config_*' +service/connect: + - 'aws/internal/service/connect/**/*' + - 'aws/*_aws_connect_*' + - 'website/**/connect_*' +service/costandusagereportservice: + - 'aws/internal/service/costandusagereportservice/**/*' + - 'aws/*_aws_cur_*' + - 'website/**/cur_*' +service/databasemigrationservice: + - 'aws/internal/service/databasemigrationservice/**/*' + - '**/*_dms_*' + - '**/dms_*' +service/dataexchange: + - 'aws/internal/service/dataexchange/**/*' + - '**/*_dataexchange_*' + - '**/dataexchange_*' +service/datapipeline: + - 'aws/internal/service/datapipeline/**/*' + - '**/*_datapipeline_*' + - '**/datapipeline_*' +service/datasync: + - 'aws/internal/service/datasync/**/*' + - '**/*_datasync_*' + - '**/datasync_*' +service/dax: + - 'aws/internal/service/dax/**/*' + - '**/*_dax_*' + - '**/dax_*' +service/detective: + - 'aws/internal/service/detective/**/*' + - '**/*_detective_*' + - '**/detective_*' +service/devicefarm: + - 'aws/internal/service/devicefarm/**/*' + - '**/*_devicefarm_*' + - '**/devicefarm_*' +service/directconnect: + - 'aws/internal/service/directconnect/**/*' + - '**/*_dx_*' + - '**/dx_*' 
+service/directoryservice: + - 'aws/internal/service/directoryservice/**/*' + - '**/*_directory_service_*' + - '**/directory_service_*' +service/dlm: + - 'aws/internal/service/dlm/**/*' + - '**/*_dlm_*' + - '**/dlm_*' +service/docdb: + - 'aws/internal/service/docdb/**/*' + - '**/*_docdb_*' + - '**/docdb_*' +service/dynamodb: + - 'aws/internal/service/dynamodb/**/*' + - '**/*_dynamodb_*' + - '**/dynamodb_*' + # Special casing this one because the files aren't _ec2_ +service/ec2: + - 'aws/internal/service/ec2/**/*' + - '**/*_ec2_*' + - '**/ec2_*' + - 'aws/*_aws_ami*' + - 'aws/*_aws_availability_zone*' + - 'aws/*_aws_customer_gateway*' + - 'aws/*_aws_default_network_acl*' + - 'aws/*_aws_default_route_table*' + - 'aws/*_aws_default_security_group*' + - 'aws/*_aws_default_subnet*' + - 'aws/*_aws_default_vpc*' + - 'aws/*_aws_ebs_*' + - 'aws/*_aws_egress_only_internet_gateway*' + - 'aws/*_aws_eip*' + - 'aws/*_aws_flow_log*' + - 'aws/*_aws_instance*' + - 'aws/*_aws_internet_gateway*' + - 'aws/*_aws_key_pair*' + - 'aws/*_aws_launch_template*' + - 'aws/*_aws_main_route_table_association*' + - 'aws/*_aws_nat_gateway*' + - 'aws/*_aws_network_acl*' + - 'aws/*_aws_network_interface*' + - 'aws/*_aws_placement_group*' + - 'aws/*_aws_prefix_list*' + - 'aws/*_aws_route_table*' + - 'aws/*_aws_route.*' + - 'aws/*_aws_security_group*' + - 'aws/*_aws_snapshot_create_volume_permission*' + - 'aws/*_aws_spot*' + - 'aws/*_aws_subnet*' + - 'aws/*_aws_vpc*' + - 'aws/*_aws_vpn*' + - 'aws/*_aws_volume_attachment*' + - 'website/**/availability_zone*' + - 'website/**/customer_gateway*' + - 'website/**/default_network_acl*' + - 'website/**/default_route_table*' + - 'website/**/default_security_group*' + - 'website/**/default_subnet*' + - 'website/**/default_vpc*' + - 'website/**/ebs_*' + - 'website/**/egress_only_internet_gateway*' + - 'website/**/eip*' + - 'website/**/flow_log*' + - 'website/**/instance*' + - 'website/**/internet_gateway*' + - 'website/**/key_pair*' + - 
'website/**/launch_template*' + - 'website/**/main_route_table_association*' + - 'website/**/nat_gateway*' + - 'website/**/network_acl*' + - 'website/**/network_interface*' + - 'website/**/placement_group*' + - 'website/**/prefix_list*' + - 'website/**/route_table*' + - 'website/**/route.*' + - 'website/**/security_group*' + - 'website/**/snapshot_create_volume_permission*' + - 'website/**/spot_*' + - 'website/**/subnet*' + - 'website/**/vpc*' + - 'website/**/vpn*' + - 'website/**/volume_attachment*' +service/ecr: + - 'aws/internal/service/ecr/**/*' + - '**/*_ecr_*' + - '**/ecr_*' +service/ecrpublic: + - 'aws/internal/service/ecrpublic/**/*' + - '**/*_ecrpublic_*' + - '**/ecrpublic_*' +service/ecs: + - 'aws/internal/service/ecs/**/*' + - '**/*_ecs_*' + - '**/ecs_*' +service/efs: + - 'aws/internal/service/efs/**/*' + - '**/*_efs_*' + - '**/efs_*' +service/eks: + - 'aws/internal/service/eks/**/*' + - '**/*_eks_*' + - '**/eks_*' +service/elastic-transcoder: + - 'aws/internal/service/elastictranscoder/**/*' + - '**/*_elastictranscoder_*' + - '**/elastictranscoder_*' + - '**/*_elastic_transcoder_*' + - '**/elastic_transcoder_*' +service/elasticache: + - 'aws/internal/service/elasticache/**/*' + - '**/*_elasticache_*' + - '**/elasticache_*' +service/elasticbeanstalk: + - 'aws/internal/service/elasticbeanstalk/**/*' + - '**/*_elastic_beanstalk_*' + - '**/elastic_beanstalk_*' +service/elasticsearch: + - 'aws/internal/service/elasticsearchservice/**/*' + - '**/*_elasticsearch_*' + - '**/elasticsearch_*' + - '**/*_elasticsearchservice*' +service/elb: + - 'aws/internal/service/elb/**/*' + - 'aws/*_aws_app_cookie_stickiness_policy*' + - 'aws/*_aws_elb*' + - 'aws/*_aws_lb_cookie_stickiness_policy*' + - 'aws/*_aws_lb_ssl_negotiation_policy*' + - 'aws/*_aws_load_balancer*' + - 'aws/*_aws_proxy_protocol_policy*' + - 'website/**/app_cookie_stickiness_policy*' + - 'website/**/elb*' + - 'website/**/lb_cookie_stickiness_policy*' + - 'website/**/lb_ssl_negotiation_policy*' + - 
'website/**/load_balancer*' + - 'website/**/proxy_protocol_policy*' +service/elbv2: + - 'aws/internal/service/elbv2/**/*' + - 'aws/*_lb.*' + - 'aws/*_lb_listener*' + - 'aws/*_lb_target_group*' + - 'website/**/lb.*' + - 'website/**/lb_listener*' + - 'website/**/lb_target_group*' +service/emr: + - 'aws/internal/service/emr/**/*' + - '**/*_emr_*' + - '**/emr_*' +service/emrcontainers: + - 'aws/internal/service/emrcontainers/**/*' + - '**/*_emrcontainers_*' + - '**/emrcontainers_*' +service/eventbridge: + # EventBridge is rebranded CloudWatch Events + - 'aws/internal/service/cloudwatchevents/**/*' + - '**/*_cloudwatch_event_*' + - '**/cloudwatch_event_*' +service/firehose: + - 'aws/internal/service/firehose/**/*' + - '**/*_firehose_*' + - '**/firehose_*' +service/fms: + - 'aws/internal/service/fms/**/*' + - '**/*_fms_*' + - '**/fms_*' +service/fsx: + - 'aws/internal/service/fsx/**/*' + - '**/*_fsx_*' + - '**/fsx_*' +service/gamelift: + - 'aws/internal/service/gamelift/**/*' + - '**/*_gamelift_*' + - '**/gamelift_*' +service/glacier: + - 'aws/internal/service/glacier/**/*' + - '**/*_glacier_*' + - '**/glacier_*' +service/globalaccelerator: + - 'aws/internal/service/globalaccelerator/**/*' + - '**/*_globalaccelerator_*' + - '**/globalaccelerator_*' +service/glue: + - 'aws/internal/service/glue/**/*' + - '**/*_glue_*' + - '**/glue_*' +service/greengrass: + - 'aws/internal/service/greengrass/**/*' + - '**/*_greengrass_*' + - '**/greengrass_*' +service/guardduty: + - 'aws/internal/service/guardduty/**/*' + - '**/*_guardduty_*' + - '**/guardduty_*' +service/iam: + - 'aws/internal/service/iam/**/*' + - '**/*_iam_*' + - '**/iam_*' +service/identitystore: + - 'aws/internal/service/identitystore/**/*' + - '**/*_identitystore_*' + - '**/identitystore_*' +service/imagebuilder: + - 'aws/internal/service/imagebuilder/**/*' + - '**/*_imagebuilder_*' + - '**/imagebuilder_*' +service/inspector: + - 'aws/internal/service/inspector/**/*' + - '**/*_inspector_*' + - '**/inspector_*' 
+service/iot: + - 'aws/internal/service/iot/**/*' + - '**/*_iot_*' + - '**/iot_*' +service/iotanalytics: + - 'aws/internal/service/iotanalytics/**/*' + - '**/*_iotanalytics_*' + - '**/iotanalytics_*' +service/iotevents: + - 'aws/internal/service/iotevents/**/*' + - '**/*_iotevents_*' + - '**/iotevents_*' +service/kafka: + - 'aws/internal/service/kafka/**/*' + - '**/*_msk_*' + - '**/msk_*' +service/kinesis: + - 'aws/internal/service/kinesis/**/*' + - 'aws/*_aws_kinesis_stream*' + - 'website/kinesis_stream*' +service/kinesisanalytics: + - 'aws/internal/service/kinesisanalytics/**/*' + - '**/*_kinesis_analytics_*' + - '**/kinesis_analytics_*' +service/kinesisanalyticsv2: + - 'aws/internal/service/kinesisanalyticsv2/**/*' + - '**/*_kinesisanalyticsv2_*' + - '**/kinesisanalyticsv2_*' +service/kms: + - 'aws/internal/service/kms/**/*' + - '**/*_kms_*' + - '**/kms_*' +service/lakeformation: + - 'aws/internal/service/lakeformation/**/*' + - '**/*_lakeformation_*' + - '**/lakeformation_*' +service/lambda: + - 'aws/internal/service/lambda/**/*' + - '**/*_lambda_*' + - '**/lambda_*' +service/lexmodelbuildingservice: + - 'aws/internal/service/lexmodelbuildingservice/**/*' + - '**/*_lex_*' + - '**/lex_*' +service/licensemanager: + - 'aws/internal/service/licensemanager/**/*' + - '**/*_licensemanager_*' + - '**/licensemanager_*' +service/lightsail: + - 'aws/internal/service/lightsail/**/*' + - '**/*_lightsail_*' + - '**/lightsail_*' +service/location: + - 'aws/internal/service/location/**/*' + - '**/*_location_*' + - '**/location_*' +service/machinelearning: + - 'aws/internal/service/machinelearning/**/*' + - '**/*_machinelearning_*' + - '**/machinelearning_*' +service/macie: + - 'aws/internal/service/macie/**/*' + - '**/*_macie_*' + - '**/macie_*' +service/macie2: + - 'aws/internal/service/macie2/**/*' + - '**/*_macie2_*' + - '**/macie2_*' +service/marketplacecatalog: + - 'aws/internal/service/marketplacecatalog/**/*' + - '**/*_marketplace_catalog_*' + - 
'**/marketplace_catalog_*' +service/mediaconnect: + - 'aws/internal/service/mediaconnect/**/*' + - '**/*_media_connect_*' + - '**/media_connect_*' +service/mediaconvert: + - 'aws/internal/service/mediaconvert/**/*' + - '**/*_media_convert_*' + - '**/media_convert_*' +service/medialive: + - 'aws/internal/service/medialive/**/*' + - '**/*_media_live_*' + - '**/media_live_*' +service/mediapackage: + - 'aws/internal/service/mediapackage/**/*' + - '**/*_media_package_*' + - '**/media_package_*' +service/mediastore: + - 'aws/internal/service/mediastore/**/*' + - '**/*_media_store_*' + - '**/media_store_*' +service/mediatailor: + - 'aws/internal/service/mediatailor/**/*' + - '**/*_media_tailor_*' + - '**/media_tailor_*' +service/mobile: + - 'aws/internal/service/mobile/**/*' + - '**/*_mobile_*' + - '**/mobile_*' +service/mq: + - 'aws/internal/service/mq/**/*' + - '**/*_mq_*' + - '**/mq_*' +service/mwaa: + - 'aws/internal/service/mwaa/**/*' + - '**/*_mwaa_*' + - '**/mwaa_*' +service/neptune: + - 'aws/internal/service/neptune/**/*' + - '**/*_neptune_*' + - '**/neptune_*' +service/networkfirewall: + - 'aws/internal/service/networkfirewall/**/*' + - '**/*_networkfirewall_*' + - '**/networkfirewall_*' +service/networkmanager: + - 'aws/internal/service/networkmanager/**/*' + - '**/*_networkmanager_*' + - '**/networkmanager_*' +service/opsworks: + - 'aws/internal/service/opsworks/**/*' + - '**/*_opsworks_*' + - '**/opsworks_*' +service/organizations: + - 'aws/internal/service/organizations/**/*' + - '**/*_organizations_*' + - '**/organizations_*' +service/outposts: + - 'aws/internal/service/outposts/**/*' + - '**/*_outposts_*' + - '**/outposts_*' +service/pinpoint: + - 'aws/internal/service/pinpoint/**/*' + - '**/*_pinpoint_*' + - '**/pinpoint_*' +service/polly: + - 'aws/internal/service/polly/**/*' + - '**/*_polly_*' + - '**/polly_*' +service/pricing: + - 'aws/internal/service/pricing/**/*' + - '**/*_pricing_*' + - '**/pricing_*' +service/prometheusservice: + - 
'aws/internal/service/prometheus/**/*' + - '**/*_prometheus_*' + - '**/prometheus_*' +service/qldb: + - 'aws/internal/service/qldb/**/*' + - '**/*_qldb_*' + - '**/qldb_*' +service/quicksight: + - 'aws/internal/service/quicksight/**/*' + - '**/*_quicksight_*' + - '**/quicksight_*' +service/ram: + - 'aws/internal/service/ram/**/*' + - '**/*_ram_*' + - '**/ram_*' +service/rds: + - 'aws/internal/service/rds/**/*' + - 'aws/*_aws_db_*' + - 'aws/*_aws_rds_*' + - 'website/**/db_*' + - 'website/**/rds_*' +service/redshift: + - 'aws/internal/service/redshift/**/*' + - '**/*_redshift_*' + - '**/redshift_*' +service/resourcegroups: + - 'aws/internal/service/resourcegroups/**/*' + - '**/*_resourcegroups_*' + - '**/resourcegroups_*' +service/resourcegroupstaggingapi: + - 'aws/internal/service/resourcegroupstaggingapi/**/*' + - '**/*_resourcegroupstaggingapi_*' + - '**/resourcegroupstaggingapi_*' +service/robomaker: + - 'aws/internal/service/robomaker/**/*' + - '**/*_robomaker_*' + - '**/robomaker_*' +service/route53: + - 'aws/internal/service/route53/**/*' + - '**/*_route53_delegation_set*' + - '**/*_route53_health_check*' + - '**/*_route53_query_log*' + - '**/*_route53_record*' + - '**/*_route53_vpc_association_authorization*' + - '**/*_route53_zone*' + - '**/route53_delegation_set*' + - '**/route53_health_check*' + - '**/route53_query_log*' + - '**/route53_record*' + - '**/route53_vpc_association_authorization*' + - '**/route53_zone*' +service/route53domains: + - 'aws/internal/service/route53domains/**/*' + - '**/*_route53domains_*' + - '**/route53domains_*' +service/route53resolver: + - 'aws/internal/service/route53resolver/**/*' + - '**/*_route53_resolver_*' + - '**/route53_resolver_*' +service/s3: + - 'aws/internal/service/s3/**/*' + - '**/*_s3_bucket*' + - '**/s3_bucket*' + - '**/*_s3_object*' + - '**/s3_object*' + - 'aws/*_aws_canonical_user_id*' + - 'website/**/canonical_user_id*' +service/s3control: + - 'aws/internal/service/s3control/**/*' + - '**/*_s3_account_*' + - 
'**/s3_account_*' + - '**/*_s3control_*' + - '**/s3control_*' +service/s3outposts: + - 'aws/internal/service/s3outposts/**/*' + - '**/*_s3outposts_*' + - '**/s3outposts_*' +service/sagemaker: + - 'aws/internal/service/sagemaker/**/*' + - '**/*_sagemaker_*' + - '**/sagemaker_*' +service/schemas: + - 'aws/internal/service/schemas/**/*' + - '**/*_schemas_*' + - '**/schemas_*' +service/secretsmanager: + - 'aws/internal/service/secretsmanager/**/*' + - '**/*_secretsmanager_*' + - '**/secretsmanager_*' +service/securityhub: + - 'aws/internal/service/securityhub/**/*' + - '**/*_securityhub_*' + - '**/securityhub_*' +service/serverlessapplicationrepository: + - 'aws/internal/service/serverlessapplicationrepository/**/*' + - '**/*_serverlessapplicationrepository_*' + - '**/serverlessapplicationrepository_*' +service/servicecatalog: + - 'aws/internal/service/servicecatalog/**/*' + - '**/*_servicecatalog_*' + - '**/servicecatalog_*' +service/servicediscovery: + - 'aws/internal/service/servicediscovery/**/*' + - '**/*_service_discovery_*' + - '**/service_discovery_*' +service/servicequotas: + - 'aws/internal/service/servicequotas/**/*' + - '**/*_servicequotas_*' + - '**/servicequotas_*' +service/ses: + - 'aws/internal/service/ses/**/*' + - '**/*_ses_*' + - '**/ses_*' +service/sfn: + - 'aws/internal/service/sfn/**/*' + - '**/*_sfn_*' + - '**/sfn_*' +service/shield: + - 'aws/internal/service/shield/**/*' + - '**/*_shield_*' + - '**/shield_*' +service/signer: + - '**/*_signer_*' + - '**/signer_*' +service/simpledb: + - 'aws/internal/service/simpledb/**/*' + - '**/*_simpledb_*' + - '**/simpledb_*' +service/snowball: + - 'aws/internal/service/snowball/**/*' + - '**/*_snowball_*' + - '**/snowball_*' +service/sns: + - 'aws/internal/service/sns/**/*' + - '**/*_sns_*' + - '**/sns_*' +service/sqs: + - 'aws/internal/service/sqs/**/*' + - '**/*_sqs_*' + - '**/sqs_*' +service/ssm: + - 'aws/internal/service/ssm/**/*' + - '**/*_ssm_*' + - '**/ssm_*' +service/ssoadmin: + - 
'aws/internal/service/ssoadmin/**/*' + - '**/*_ssoadmin_*' + - '**/ssoadmin_*' +service/storagegateway: + - 'aws/internal/service/storagegateway/**/*' + - '**/*_storagegateway_*' + - '**/storagegateway_*' +service/sts: + - 'aws/internal/service/sts/**/*' + - 'aws/*_aws_caller_identity*' + - 'website/**/caller_identity*' +service/swf: + - 'aws/internal/service/swf/**/*' + - '**/*_swf_*' + - '**/swf_*' +service/synthetics: + - 'aws/internal/service/synthetics/**/*' + - '**/*_synthetics_*' + - '**/synthetics_*' +service/timestreamwrite: + - 'aws/internal/service/timestreamwrite/**/*' + - '**/*_timestreamwrite_*' + - '**/timestreamwrite_*' +service/transfer: + - 'aws/internal/service/transfer/**/*' + - '**/*_transfer_*' + - '**/transfer_*' +service/waf: + - 'aws/internal/service/waf/**/*' + - 'aws/internal/service/wafregional/**/*' + - '**/*_waf_*' + - '**/waf_*' + - '**/*_wafregional_*' + - '**/wafregional_*' +service/wafv2: + - 'aws/internal/service/wafv2/**/*' + - '**/*_wafv2_*' + - '**/wafv2_*' +service/workdocs: + - 'aws/internal/service/workdocs/**/*' + - '**/*_workdocs_*' + - '**/workdocs_*' +service/worklink: + - 'aws/internal/service/worklink/**/*' + - '**/*_worklink_*' + - '**/worklink_*' +service/workmail: + - 'aws/internal/service/workmail/**/*' + - '**/*_workmail_*' + - '**/workmail_*' +service/workspaces: + - 'aws/internal/service/workspaces/**/*' + - '**/*_workspaces_*' + - '**/workspaces_*' +service/xray: + - 'aws/internal/service/xray/**/*' + - '**/*_xray_*' + - '**/xray_*' +tests: + - '**/*_test.go' + - '**/testdata/**/*' + - '.github/workflows/*' + - '.golangci.yml' + - '.markdownlinkcheck.json' + - '.markdownlint.yml' + - 'staticcheck.conf' diff --git a/.github/workflows.disabled/pull_requests.yml b/.github/workflows.disabled/pull_requests.yml index 34c4ab61392..fbdc6f65c56 100644 --- a/.github/workflows.disabled/pull_requests.yml +++ b/.github/workflows.disabled/pull_requests.yml @@ -3,6 +3,15 @@ on: name: Pull Request Target (All types) jobs: + 
Labeler: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v2 + - name: Apply Labels + uses: actions/labeler@v3 + with: + configuration-path: .github/labeler-pr-triage.yml + repo-token: ${{ secrets.GITHUB_TOKEN }} NeedsTriageLabeler: runs-on: ubuntu-latest steps: diff --git a/.github/workflows/issue-comment-created.yml b/.github/workflows/issue-comment-created.yml new file mode 100644 index 00000000000..b8c4d6bfacc --- /dev/null +++ b/.github/workflows/issue-comment-created.yml @@ -0,0 +1,15 @@ +name: Issue Comment Created Triage + +on: + issue_comment: + types: [created] + +jobs: + issue_comment_triage: + runs-on: ubuntu-latest + steps: + - uses: actions-ecosystem/action-remove-labels@v1 + with: + labels: | + stale + waiting-reply diff --git a/.github/workflows/lock.yml b/.github/workflows/lock.yml new file mode 100644 index 00000000000..412aa5a4a66 --- /dev/null +++ b/.github/workflows/lock.yml @@ -0,0 +1,23 @@ +name: 'Lock Threads' + +on: + schedule: + - cron: '50 1 * * *' + +jobs: + lock: + runs-on: ubuntu-latest + steps: + - uses: dessant/lock-threads@v2 + with: + github-token: ${{ github.token }} + issue-lock-comment: > + I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues. + + If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further. + issue-lock-inactive-days: '30' + pr-lock-comment: > + I'm going to lock this pull request because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues. + + If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further. 
+ pr-lock-inactive-days: '30' diff --git a/.hashibot.hcl b/.hashibot.hcl index 19a16a5cda6..b8d0b6e0377 100644 --- a/.hashibot.hcl +++ b/.hashibot.hcl @@ -1,75 +1,3 @@ -poll "closed_issue_locker" "locker" { - schedule = "0 10 17 * * *" - closed_for = "720h" # 30 days - max_issues = 500 - sleep_between_issues = "5s" - - message = <<-EOF - I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues. - - If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks! - EOF -} - -behavior "deprecated_import_commenter" "hashicorp_terraform" { - import_regexp = "github.com/hashicorp/terraform/" - marker_label = "terraform-plugin-sdk-migration" - - message = <<-EOF - Hello, and thank you for your contribution! - - This project recently migrated to the [standalone Terraform Plugin SDK](https://www.terraform.io/docs/extend/plugin-sdk.html). While the migration helps speed up future feature requests and bug fixes to the Terraform AWS Provider's interface with Terraform, it has the unfortunate consequence of requiring minor changes to pull requests created using the old SDK. - - This pull request appears to include the Go import path `${var.import_path}`, which was from the older SDK. The newer SDK uses import paths beginning with `github.com/hashicorp/terraform-plugin-sdk/`. - - To resolve this situation without losing any existing work, you may be able to Git rebase your branch against the current default (main) branch (example below); replacing any remaining old import paths with the newer ones. - - ```console - $ git fetch --all - $ git rebase origin/main - ``` - - Another option is to create a new branch from the current default (main) with the same code changes (replacing the import paths), submit a new pull request, and close this existing pull request. - - We apologize for this inconvenience and appreciate your effort. 
Thank you for contributing and helping make the Terraform AWS Provider better for everyone. - EOF -} - -behavior "deprecated_import_commenter" "sdkv1" { - import_regexp = "github.com/hashicorp/terraform-plugin-sdk/(helper/(acctest|customdiff|logging|resource|schema|structure|validation)|terraform)" - marker_label = "terraform-plugin-sdk-v1" - - message = <<-EOF - Hello, and thank you for your contribution! - - This project recently upgraded to [V2 of the Terraform Plugin SDK](https://www.terraform.io/docs/extend/guides/v2-upgrade-guide.html) - - This pull request appears to include at least one V1 import path of the SDK (`${var.import_path}`). Please import the V2 path `github.com/hashicorp/terraform-plugin-sdk/v2/helper/PACKAGE` - - To resolve this situation without losing any existing work, you may be able to Git rebase your branch against the current default (main) branch (example below); replacing any remaining old import paths with the newer ones. - - ```console - $ git fetch --all - $ git rebase origin/main - ``` - - Another option is to create a new branch from the current default (main) with the same code changes (replacing the import paths), submit a new pull request, and close this existing pull request. - - We apologize for this inconvenience and appreciate your effort. Thank you for contributing and helping make the Terraform AWS Provider better for everyone. - EOF -} - -behavior "deprecated_import_commenter" "sdkv1_deprecated" { - import_regexp = "github.com/hashicorp/terraform-plugin-sdk/helper/(hashcode|mutexkv|encryption)" - marker_label = "terraform-plugin-sdk-v1" - - message = <<-EOF - Hello, and thank you for your contribution! - This pull request appears to include the Go import path `${var.import_path}`, which was deprecated after upgrading to [V2 of the Terraform Plugin SDK](https://www.terraform.io/docs/extend/guides/v2-upgrade-guide.html). 
- You may use a now internalized version of the package found in `github.com/terraform-providers/terraform-provider-aws/aws/internal/PACKAGE`. - EOF -} - queued_behavior "release_commenter" "releases" { repo_prefix = "terraform-provider-" @@ -80,1550 +8,6 @@ queued_behavior "release_commenter" "releases" { EOF } -# Catch the following in issues: -# *aws_XXX -# * aws_XXX -# * `aws_XXX` -# -aws_XXX -# - aws_XXX -# - `aws_XXX` -# data "aws_XXX" -# resource "aws_XXX" -# NOTE: Go regexp does not support negative lookaheads -behavior "regexp_issue_labeler_v2" "service_labels" { - regexp = "(\\* ?`?|- ?`?|data \"|resource \")aws_(\\w+)" - - label_map = { - "service/accessanalyzer" = [ - "aws_accessanalyzer_", - ], - "service/acm" = [ - "aws_acm_", - ], - "service/acmpca" = [ - "aws_acmpca_", - ], - "service/alexaforbusiness" = [ - "aws_alexaforbusiness_", - ], - "service/amplify" = [ - "aws_amplify_", - ], - "service/apigateway" = [ - # Catch aws_api_gateway_XXX but not aws_api_gateway_v2_ - "aws_api_gateway_([^v]|v[^2]|v2[^_])", - ], - "service/apigatewayv2" = [ - "aws_api_gateway_v2_", - "aws_apigatewayv2_", - ], - "service/appconfig" = [ - "aws_appconfig_", - ], - "service/applicationautoscaling" = [ - "aws_appautoscaling_", - ], - "service/applicationdiscoveryservice" = [ - "aws_applicationdiscoveryservice_", - ], - "service/applicationinsights" = [ - "aws_applicationinsights_", - ], - "service/appmesh" = [ - "aws_appmesh_", - ], - "service/apprunner" = [ - "aws_apprunner_", - ], - "service/appstream" = [ - "aws_appstream_", - ], - "service/appsync" = [ - "aws_appsync_", - ], - "service/athena" = [ - "aws_athena_", - ], - "service/auditmanager" = [ - "aws_auditmanager_", - ], - "service/autoscaling" = [ - "aws_autoscaling_", - "aws_launch_configuration", - ], - "service/autoscalingplans" = [ - "aws_autoscalingplans_", - ], - "service/backup" = [ - "aws_backup_", - ], - "service/batch" = [ - "aws_batch_", - ], - "service/budgets" = [ - "aws_budgets_", - ], - 
"service/cloud9" = [ - "aws_cloud9_", - ], - "service/clouddirectory" = [ - "aws_clouddirectory_", - ], - "service/cloudformation" = [ - "aws_cloudformation_", - ], - "service/cloudfront" = [ - "aws_cloudfront_", - ], - "service/cloudhsmv2" = [ - "aws_cloudhsm_v2_", - ], - "service/cloudsearch" = [ - "aws_cloudsearch_", - ], - "service/cloudtrail" = [ - "aws_cloudtrail", - ], - "service/cloudwatch" = [ - "aws_cloudwatch_([^e]|e[^v]|ev[^e]|eve[^n]|even[^t]|event[^_]|[^l]|l[^o]|lo[^g]|log[^_])", - ], - "service/cloudwatchevents" = [ - "aws_cloudwatch_event_", - ], - "service/cloudwatchlogs" = [ - "aws_cloudwatch_log_", - "aws_cloudwatch_query_definition", - ], - "service/codeartifact" = [ - "aws_codeartifact_", - ], - "service/codebuild" = [ - "aws_codebuild_", - ], - "service/codecommit" = [ - "aws_codecommit_", - ], - "service/codedeploy" = [ - "aws_codedeploy_", - ], - "service/codepipeline" = [ - "aws_codepipeline", - ], - "service/codestar" = [ - "aws_codestar_", - ], - "service/codestarconnections" = [ - "aws_codestarconnections_", - ], - "service/codestarnotifications" = [ - "aws_codestarnotifications_", - ], - "service/cognito" = [ - "aws_cognito_", - ], - "service/configservice" = [ - "aws_config_", - ], - "service/connect" = [ - "aws_connect_", - ], - "service/databasemigrationservice" = [ - "aws_dms_", - ], - "service/dataexchange" = [ - "aws_dataexchange_", - ], - "service/datapipeline" = [ - "aws_datapipeline_", - ], - "service/datasync" = [ - "aws_datasync_", - ], - "service/dax" = [ - "aws_dax_", - ], - "service/detective" = [ - "aws_detective_" - ], - "service/devicefarm" = [ - "aws_devicefarm_", - ], - "service/directconnect" = [ - "aws_dx_", - ], - "service/directoryservice" = [ - "aws_directory_service_", - ], - "service/dlm" = [ - "aws_dlm_", - ], - "service/docdb" = [ - "aws_docdb_", - ], - "service/dynamodb" = [ - "aws_dynamodb_", - ], - "service/ec2" = [ - "aws_ami", - "aws_availability_zone", - "aws_customer_gateway", - 
"aws_(default_)?(network_acl|route_table|security_group|subnet|vpc)", - "aws_ebs_", - "aws_ec2_", - "aws_egress_only_internet_gateway", - "aws_eip", - "aws_flow_log", - "aws_instance", - "aws_internet_gateway", - "aws_key_pair", - "aws_launch_template", - "aws_main_route_table_association", - "aws_network_interface", - "aws_placement_group", - "aws_prefix_list", - "aws_spot", - "aws_route(\"|`|$)", - "aws_vpn_", - "aws_volume_attachment", - ], - "service/ecr" = [ - "aws_ecr_", - ], - "service/ecrpublic" = [ - "aws_ecrpublic_", - ], - "service/ecs" = [ - "aws_ecs_", - ], - "service/efs" = [ - "aws_efs_", - ], - "service/eks" = [ - "aws_eks_", - ], - "service/elastic-transcoder" = [ - "aws_elastictranscoder_", - ], - "service/elasticache" = [ - "aws_elasticache_", - ], - "service/elasticbeanstalk" = [ - "aws_elastic_beanstalk_", - ], - "service/elasticsearch" = [ - "aws_elasticsearch_", - ], - "service/elb" = [ - "aws_app_cookie_stickiness_policy", - "aws_elb", - "aws_lb_cookie_stickiness_policy", - "aws_lb_ssl_negotiation_policy", - "aws_load_balancer_", - "aws_proxy_protocol_policy", - ], - "service/elbv2" = [ - "aws_(a)?lb(\"|`|$)", - # Catch aws_lb_XXX but not aws_lb_cookie_ or aws_lb_ssl_ (Classic ELB) - "aws_(a)?lb_([^c]|c[^o]|co[^o]|coo[^k]|cook[^i]|cooki[^e]|cookie[^_]|[^s]|s[^s]|ss[^l]|ssl[^_])", - ], - "service/emr" = [ - "aws_emr_", - ], - "service/emrcontainers" = [ - "aws_emrcontainers_", - ], - "service/eventbridge" = [ - # EventBridge is rebranded CloudWatch Events - "aws_cloudwatch_event_", - ], - "service/firehose" = [ - "aws_kinesis_firehose_", - ], - "service/fms" = [ - "aws_fms_", - ], - "service/forecast" = [ - "aws_forecast_", - ], - "service/fsx" = [ - "aws_fsx_", - ], - "service/gamelift" = [ - "aws_gamelift_", - ], - "service/glacier" = [ - "aws_glacier_", - ], - "service/globalaccelerator" = [ - "aws_globalaccelerator_", - ], - "service/glue" = [ - "aws_glue_", - ], - "service/greengrass" = [ - "aws_greengrass_", - ], - "service/guardduty" = 
[ - "aws_guardduty_", - ], - "service/iam" = [ - "aws_iam_", - ], - "service/identitystore" = [ - "aws_identitystore_", - ], - "service/imagebuilder" = [ - "aws_imagebuilder_", - ], - "service/inspector" = [ - "aws_inspector_", - ], - "service/iot" = [ - "aws_iot_", - ], - "service/iotanalytics" = [ - "aws_iotanalytics_", - ], - "service/iotevents" = [ - "aws_iotevents_", - ], - "service/kafka" = [ - "aws_msk_", - ], - "service/kinesis" = [ - # Catch aws_kinesis_XXX but not aws_kinesis_firehose_ - "aws_kinesis_([^f]|f[^i]|fi[^r]|fir[^e]|fire[^h]|fireh[^o]|fireho[^s]|firehos[^e]|firehose[^_])", - ], - "service/kinesisanalytics" = [ - "aws_kinesis_analytics_", - ], - "service/kinesisanalyticsv2" = [ - "aws_kinesisanalyticsv2_", - ], - "service/kms" = [ - "aws_kms_", - ], - "service/lakeformation" = [ - "aws_lakeformation_", - ], - "service/lambda" = [ - "aws_lambda_", - ], - "service/lexmodelbuildingservice" = [ - "aws_lex_", - ], - "service/licensemanager" = [ - "aws_licensemanager_", - ], - "service/lightsail" = [ - "aws_lightsail_", - ], - "service/machinelearning" = [ - "aws_machinelearning_", - ], - "service/macie" = [ - "aws_macie_", - ], - "service/macie2" = [ - "aws_macie2_", - ], - "service/marketplacecatalog" = [ - "aws_marketplace_catalog_", - ], - "service/mediaconnect" = [ - "aws_media_connect_", - ], - "service/mediaconvert" = [ - "aws_media_convert_", - ], - "service/medialive" = [ - "aws_media_live_", - ], - "service/mediapackage" = [ - "aws_media_package_", - ], - "service/mediastore" = [ - "aws_media_store_", - ], - "service/mediatailor" = [ - "aws_media_tailor_", - ], - "service/mobile" = [ - "aws_mobile_", - ], - "service/mq" = [ - "aws_mq_", - ], - "service/mwaa" = [ - "aws_mwaa_", - ], - "service/neptune" = [ - "aws_neptune_", - ], - "service/networkfirewall" = [ - "aws_networkfirewall_", - ], - "service/networkmanager" = [ - "aws_networkmanager_", - ], - "service/opsworks" = [ - "aws_opsworks_", - ], - "service/organizations" = [ - 
"aws_organizations_", - ], - "service/outposts" = [ - "aws_outposts_", - ], - "service/personalize" = [ - "aws_personalize_", - ], - "service/pinpoint" = [ - "aws_pinpoint_", - ], - "service/polly" = [ - "aws_polly_", - ], - "service/pricing" = [ - "aws_pricing_", - ], - "service/prometheusservice" = [ - "aws_prometheus_", - ], - "service/qldb" = [ - "aws_qldb_", - ], - "service/quicksight" = [ - "aws_quicksight_", - ], - "service/ram" = [ - "aws_ram_", - ], - "service/rds" = [ - "aws_db_", - "aws_rds_", - ], - "service/redshift" = [ - "aws_redshift_", - ], - "service/resourcegroups" = [ - "aws_resourcegroups_", - ], - "service/resourcegroupstaggingapi" = [ - "aws_resourcegroupstaggingapi_", - ], - "service/robomaker" = [ - "aws_robomaker_", - ], - "service/route53" = [ - # Catch aws_route53_XXX but not aws_route53_domains_ or aws_route53_resolver_ - "aws_route53_([^d]|d[^o]|do[^m]|dom[^a]|doma[^i]|domai[^n]|domain[^s]|domains[^_]|[^r]|r[^e]|re[^s]|res[^o]|reso[^l]|resol[^v]|resolv[^e]|resolve[^r]|resolver[^_])", - ], - "service/route53domains" = [ - "aws_route53domains_", - ], - "service/route53resolver" = [ - "aws_route53_resolver_", - ], - "service/s3" = [ - "aws_canonical_user_id", - "aws_s3_bucket", - "aws_s3_object", - ], - "service/s3control" = [ - "aws_s3_account_", - "aws_s3control_", - ], - "service/s3outposts" = [ - "aws_s3outposts_", - ], - "service/sagemaker" = [ - "aws_sagemaker_", - ], - "service/secretsmanager" = [ - "aws_secretsmanager_", - ], - "service/securityhub" = [ - "aws_securityhub_", - ], - "service/serverlessapplicationrepository" = [ - "aws_serverlessapplicationrepository_", - ], - "service/servicecatalog" = [ - "aws_servicecatalog_", - ], - "service/servicediscovery" = [ - "aws_service_discovery_", - ], - "service/servicequotas" = [ - "aws_servicequotas_", - ], - "service/ses" = [ - "aws_ses_", - ], - "service/sfn" = [ - "aws_sfn_", - ], - "service/shield" = [ - "aws_shield_", - ], - "service/signer" = [ - "aws_signer_", - ], - 
"service/simpledb" = [ - "aws_simpledb_", - ], - "service/snowball" = [ - "aws_snowball_", - ], - "service/sns" = [ - "aws_sns_", - ], - "service/sqs" = [ - "aws_sqs_", - ], - "service/ssm" = [ - "aws_ssm_", - ], - "service/ssoadmin" = [ - "aws_ssoadmin_", - ], - "service/storagegateway" = [ - "aws_storagegateway_", - ], - "service/sts" = [ - "aws_caller_identity", - ], - "service/swf" = [ - "aws_swf_", - ], - "service/synthetics" = [ - "aws_synthetics_", - ], - "service/timestreamwrite" = [ - "aws_timestreamwrite_", - ], - "service/transfer" = [ - "aws_transfer_", - ], - "service/waf" = [ - "aws_waf_", - "aws_wafregional_", - ], - "service/wafv2" = [ - "aws_wafv2_", - ], - "service/workdocs" = [ - "aws_workdocs_", - ], - "service/worklink" = [ - "aws_worklink_", - ], - "service/workmail" = [ - "aws_workmail_", - ], - "service/workspaces" = [ - "aws_workspaces_", - ], - "service/xray" = [ - "aws_xray_", - ], - } -} - -behavior "pull_request_path_labeler" "service_labels" { - label_map = { - # label provider related changes - "provider" = [ - "*.md", - ".github/**/*", - ".gitignore", - ".go-version", - ".hashibot.hcl", - "aws/auth_helpers.go", - "aws/awserr.go", - "aws/config.go", - "aws/*_aws_arn*", - "aws/*_aws_ip_ranges*", - "aws/*_aws_partition*", - "aws/*_aws_region*", - "aws/internal/flatmap/*", - "aws/internal/keyvaluetags/*", - "aws/internal/naming/*", - "aws/provider.go", - "aws/utils.go", - "docs/*.md", - "docs/contributing/**/*", - "GNUmakefile", - "infrastructure/**/*", - "main.go", - "website/docs/index.html.markdown", - "website/**/arn*", - "website/**/ip_ranges*", - "website/**/partition*", - "website/**/region*" - ] - "dependencies" = [ - ".github/dependabot.yml", - ] - "documentation" = [ - "docs/**/*", - "website/**/*", - "*.md", - ] - "examples" = [ - "examples/**/*", - ] - "tests" = [ - "**/*_test.go", - "**/testdata/**/*", - "**/test-fixtures/**/*", - ".github/workflows/*", - ".gometalinter.json", - ".markdownlinkcheck.json", - 
".markdownlint.yml", - "staticcheck.conf" - ] - # label services - "service/accessanalyzer" = [ - "aws/internal/service/accessanalyzer/**/*", - "**/*_accessanalyzer_*", - "**/accessanalyzer_*" - ] - "service/acm" = [ - "aws/internal/service/acm/**/*", - "**/*_acm_*", - "**/acm_*" - ] - "service/acmpca" = [ - "aws/internal/service/acmpca/**/*", - "**/*_acmpca_*", - "**/acmpca_*" - ] - "service/alexaforbusiness" = [ - "aws/internal/service/alexaforbusiness/**/*", - "**/*_alexaforbusiness_*", - "**/alexaforbusiness_*" - ] - "service/amplify" = [ - "aws/internal/service/amplify/**/*", - "**/*_amplify_*", - "**/amplify_*" - ] - "service/apigateway" = [ - "aws/internal/service/apigateway/**/*", - "**/*_api_gateway_[^v][^2][^_]*", - "**/*_api_gateway_vpc_link*", - "**/api_gateway_[^v][^2][^_]*", - "**/api_gateway_vpc_link*" - ] - "service/apigatewayv2" = [ - "aws/internal/service/apigatewayv2/**/*", - "**/*_api_gateway_v2_*", - "**/*_apigatewayv2_*", - "**/api_gateway_v2_*", - "**/apigatewayv2_*" - ] - "service/appconfig" = [ - "aws/internal/service/appconfig/**/*", - "**/*_appconfig_*", - "**/appconfig_*" - ] - "service/applicationautoscaling" = [ - "aws/internal/service/applicationautoscaling/**/*", - "**/*_appautoscaling_*", - "**/appautoscaling_*" - ] - "service/applicationinsights" = [ - "aws/internal/service/applicationinsights/**/*", - "**/*_applicationinsights_*", - "**/applicationinsights_*" - ] - "service/appmesh" = [ - "aws/internal/service/appmesh/**/*", - "**/*_appmesh_*", - "**/appmesh_*" - ] - "service/apprunner" = [ - "aws/internal/service/apprunner/**/*", - "**/*_apprunner_*", - "**/apprunner_*" - ] - "service/appstream" = [ - "aws/internal/service/appstream/**/*", - "**/*_appstream_*", - "**/appstream_*" - ] - "service/appsync" = [ - "aws/internal/service/appsync/**/*", - "**/*_appsync_*", - "**/appsync_*" - ] - "service/athena" = [ - "aws/internal/service/athena/**/*", - "**/*_athena_*", - "**/athena_*" - ] - "service/auditmanager" = [ - 
"aws/internal/service/auditmanager/**/*", - "**/*_auditmanager_*", - "**/auditmanager_*" - ] - "service/autoscaling" = [ - "aws/internal/service/autoscaling/**/*", - "**/*_autoscaling_*", - "**/autoscaling_*", - "aws/*_aws_launch_configuration*", - "website/**/launch_configuration*" - ] - "service/autoscalingplans" = [ - "aws/internal/service/autoscalingplans/**/*", - "**/*_autoscalingplans_*", - "**/autoscalingplans_*" - ] - "service/backup" = [ - "aws/internal/service/backup/**/*", - "**/*backup_*", - "**/backup_*" - ] - "service/batch" = [ - "aws/internal/service/batch/**/*", - "**/*_batch_*", - "**/batch_*" - ] - "service/budgets" = [ - "aws/internal/service/budgets/**/*", - "**/*_budgets_*", - "**/budgets_*" - ] - "service/cloud9" = [ - "aws/internal/service/cloud9/**/*", - "**/*_cloud9_*", - "**/cloud9_*" - ] - "service/clouddirectory" = [ - "aws/internal/service/clouddirectory/**/*", - "**/*_clouddirectory_*", - "**/clouddirectory_*" - ] - "service/cloudformation" = [ - "aws/internal/service/cloudformation/**/*", - "**/*_cloudformation_*", - "**/cloudformation_*" - ] - "service/cloudfront" = [ - "aws/internal/service/cloudfront/**/*", - "**/*_cloudfront_*", - "**/cloudfront_*" - ] - "service/cloudhsmv2" = [ - "aws/internal/service/cloudhsmv2/**/*", - "**/*_cloudhsm_v2_*", - "**/cloudhsm_v2_*" - ] - "service/cloudsearch" = [ - "aws/internal/service/cloudsearch/**/*", - "**/*_cloudsearch_*", - "**/cloudsearch_*" - ] - "service/cloudtrail" = [ - "aws/internal/service/cloudtrail/**/*", - "**/*_cloudtrail*", - "**/cloudtrail*" - ] - "service/cloudwatch" = [ - "aws/internal/service/cloudwatch/**/*", - "**/*_cloudwatch_dashboard*", - "**/*_cloudwatch_metric_alarm*", - "**/cloudwatch_dashboard*", - "**/cloudwatch_metric_alarm*" - ] - "service/cloudwatchevents" = [ - "aws/internal/service/cloudwatchevents/**/*", - "**/*_cloudwatch_event_*", - "**/cloudwatch_event_*" - ] - "service/cloudwatchlogs" = [ - "aws/internal/service/cloudwatchlogs/**/*", - 
"**/*_cloudwatch_log_*", - "**/cloudwatch_log_*", - "**/*_cloudwatch_query_definition*", - "**/cloudwatch_query_definition*" - ] - "service/codeartifact" = [ - "aws/internal/service/codeartifact/**/*", - "**/*_codeartifact_*", - "**/codeartifact_*" - ] - "service/codebuild" = [ - "aws/internal/service/codebuild/**/*", - "**/*_codebuild_*", - "**/codebuild_*" - ] - "service/codecommit" = [ - "aws/internal/service/codecommit/**/*", - "**/*_codecommit_*", - "**/codecommit_*" - ] - "service/codedeploy" = [ - "aws/internal/service/codedeploy/**/*", - "**/*_codedeploy_*", - "**/codedeploy_*" - ] - "service/codepipeline" = [ - "aws/internal/service/codepipeline/**/*", - "**/*_codepipeline_*", - "**/codepipeline_*" - ] - "service/codestar" = [ - "aws/internal/service/codestar/**/*", - "**/*_codestar_*", - "**/codestar_*" - ] - "service/codestarconnections" = [ - "aws/internal/service/codestarconnections/**/*", - "**/*_codestarconnections_*", - "**/codestarconnections_*" - ] - "service/codestarnotifications" = [ - "aws/internal/service/codestarnotifications/**/*", - "**/*_codestarnotifications_*", - "**/codestarnotifications_*" - ] - "service/cognito" = [ - "aws/internal/service/cognitoidentity/**/*", - "aws/internal/service/cognitoidentityprovider/**/*", - "**/*_cognito_*", - "**/cognito_*" - ] - "service/comprehend" = [ - "aws/internal/service/comprehend/**/*", - "**/*_comprehend_*", - "**/comprehend_*" - ] - "service/configservice" = [ - "aws/internal/service/configservice/**/*", - "aws/*_aws_config_*", - "website/**/config_*" - ] - "service/connect" = [ - "aws/internal/service/connect/**/*", - "aws/*_aws_connect_*", - "website/**/connect_*" - ] - "service/costandusagereportservice" = [ - "aws/internal/service/costandusagereportservice/**/*", - "aws/*_aws_cur_*", - "website/**/cur_*" - ] - "service/databasemigrationservice" = [ - "aws/internal/service/databasemigrationservice/**/*", - "**/*_dms_*", - "**/dms_*" - ] - "service/dataexchange" = [ - 
"aws/internal/service/dataexchange/**/*", - "**/*_dataexchange_*", - "**/dataexchange_*", - ] - "service/datapipeline" = [ - "aws/internal/service/datapipeline/**/*", - "**/*_datapipeline_*", - "**/datapipeline_*", - ] - "service/datasync" = [ - "aws/internal/service/datasync/**/*", - "**/*_datasync_*", - "**/datasync_*", - ] - "service/dax" = [ - "aws/internal/service/dax/**/*", - "**/*_dax_*", - "**/dax_*" - ] - "service/detective" = [ - "aws/internal/service/detective/**/*", - "**/*_detective_*", - "**/detective_*" - ] - "service/devicefarm" = [ - "aws/internal/service/devicefarm/**/*", - "**/*_devicefarm_*", - "**/devicefarm_*" - ] - "service/directconnect" = [ - "aws/internal/service/directconnect/**/*", - "**/*_dx_*", - "**/dx_*" - ] - "service/directoryservice" = [ - "aws/internal/service/directoryservice/**/*", - "**/*_directory_service_*", - "**/directory_service_*" - ] - "service/dlm" = [ - "aws/internal/service/dlm/**/*", - "**/*_dlm_*", - "**/dlm_*" - ] - "service/docdb" = [ - "aws/internal/service/docdb/**/*", - "**/*_docdb_*", - "**/docdb_*" - ] - "service/dynamodb" = [ - "aws/internal/service/dynamodb/**/*", - "**/*_dynamodb_*", - "**/dynamodb_*" - ] - # Special casing this one because the files aren't _ec2_ - "service/ec2" = [ - "aws/internal/service/ec2/**/*", - "**/*_ec2_*", - "**/ec2_*", - "aws/*_aws_ami*", - "aws/*_aws_availability_zone*", - "aws/*_aws_customer_gateway*", - "aws/*_aws_default_network_acl*", - "aws/*_aws_default_route_table*", - "aws/*_aws_default_security_group*", - "aws/*_aws_default_subnet*", - "aws/*_aws_default_vpc*", - "aws/*_aws_ebs_*", - "aws/*_aws_egress_only_internet_gateway*", - "aws/*_aws_eip*", - "aws/*_aws_flow_log*", - "aws/*_aws_instance*", - "aws/*_aws_internet_gateway*", - "aws/*_aws_key_pair*", - "aws/*_aws_launch_template*", - "aws/*_aws_main_route_table_association*", - "aws/*_aws_nat_gateway*", - "aws/*_aws_network_acl*", - "aws/*_aws_network_interface*", - "aws/*_aws_placement_group*", - 
"aws/*_aws_prefix_list*", - "aws/*_aws_route_table*", - "aws/*_aws_route.*", - "aws/*_aws_security_group*", - "aws/*_aws_snapshot_create_volume_permission*", - "aws/*_aws_spot*", - "aws/*_aws_subnet*", - "aws/*_aws_vpc*", - "aws/*_aws_vpn*", - "aws/*_aws_volume_attachment*", - "website/**/availability_zone*", - "website/**/customer_gateway*", - "website/**/default_network_acl*", - "website/**/default_route_table*", - "website/**/default_security_group*", - "website/**/default_subnet*", - "website/**/default_vpc*", - "website/**/ebs_*", - "website/**/egress_only_internet_gateway*", - "website/**/eip*", - "website/**/flow_log*", - "website/**/instance*", - "website/**/internet_gateway*", - "website/**/key_pair*", - "website/**/launch_template*", - "website/**/main_route_table_association*", - "website/**/nat_gateway*", - "website/**/network_acl*", - "website/**/network_interface*", - "website/**/placement_group*", - "website/**/prefix_list*", - "website/**/route_table*", - "website/**/route.*", - "website/**/security_group*", - "website/**/snapshot_create_volume_permission*", - "website/**/spot_*", - "website/**/subnet*", - "website/**/vpc*", - "website/**/vpn*", - "website/**/volume_attachment*" - ] - "service/ecr" = [ - "aws/internal/service/ecr/**/*", - "**/*_ecr_*", - "**/ecr_*" - ] - "service/ecrpublic" = [ - "aws/internal/service/ecrpublic/**/*", - "**/*_ecrpublic_*", - "**/ecrpublic_*" - ] - "service/ecs" = [ - "aws/internal/service/ecs/**/*", - "**/*_ecs_*", - "**/ecs_*" - ] - "service/efs" = [ - "aws/internal/service/efs/**/*", - "**/*_efs_*", - "**/efs_*" - ] - "service/eks" = [ - "aws/internal/service/eks/**/*", - "**/*_eks_*", - "**/eks_*" - ] - "service/elastic-transcoder" = [ - "aws/internal/service/elastictranscoder/**/*", - "**/*_elastictranscoder_*", - "**/elastictranscoder_*", - "**/*_elastic_transcoder_*", - "**/elastic_transcoder_*" - ] - "service/elasticache" = [ - "aws/internal/service/elasticache/**/*", - "**/*_elasticache_*", - 
"**/elasticache_*" - ] - "service/elasticbeanstalk" = [ - "aws/internal/service/elasticbeanstalk/**/*", - "**/*_elastic_beanstalk_*", - "**/elastic_beanstalk_*" - ] - "service/elasticsearch" = [ - "aws/internal/service/elasticsearchservice/**/*", - "**/*_elasticsearch_*", - "**/elasticsearch_*", - "**/*_elasticsearchservice*" - ] - "service/elb" = [ - "aws/internal/service/elb/**/*", - "aws/*_aws_app_cookie_stickiness_policy*", - "aws/*_aws_elb*", - "aws/*_aws_lb_cookie_stickiness_policy*", - "aws/*_aws_lb_ssl_negotiation_policy*", - "aws/*_aws_load_balancer*", - "aws/*_aws_proxy_protocol_policy*", - "website/**/app_cookie_stickiness_policy*", - "website/**/elb*", - "website/**/lb_cookie_stickiness_policy*", - "website/**/lb_ssl_negotiation_policy*", - "website/**/load_balancer*", - "website/**/proxy_protocol_policy*" - ] - "service/elbv2" = [ - "aws/internal/service/elbv2/**/*", - "aws/*_lb.*", - "aws/*_lb_listener*", - "aws/*_lb_target_group*", - "website/**/lb.*", - "website/**/lb_listener*", - "website/**/lb_target_group*" - ] - "service/emr" = [ - "aws/internal/service/emr/**/*", - "**/*_emr_*", - "**/emr_*" - ] - "service/emrcontainers" = [ - "aws/internal/service/emrcontainers/**/*", - "**/*_emrcontainers_*", - "**/emrcontainers_*" - ] - "service/eventbridge" = [ - # EventBridge is rebranded CloudWatch Events - "aws/internal/service/cloudwatchevents/**/*", - "**/*_cloudwatch_event_*", - "**/cloudwatch_event_*" - ] - "service/firehose" = [ - "aws/internal/service/firehose/**/*", - "**/*_firehose_*", - "**/firehose_*" - ] - "service/fms" = [ - "aws/internal/service/fms/**/*", - "**/*_fms_*", - "**/fms_*" - ] - "service/fsx" = [ - "aws/internal/service/fsx/**/*", - "**/*_fsx_*", - "**/fsx_*" - ] - "service/gamelift" = [ - "aws/internal/service/gamelift/**/*", - "**/*_gamelift_*", - "**/gamelift_*" - ] - "service/glacier" = [ - "aws/internal/service/glacier/**/*", - "**/*_glacier_*", - "**/glacier_*" - ] - "service/globalaccelerator" = [ - 
"aws/internal/service/globalaccelerator/**/*", - "**/*_globalaccelerator_*", - "**/globalaccelerator_*" - ] - "service/glue" = [ - "aws/internal/service/glue/**/*", - "**/*_glue_*", - "**/glue_*" - ] - "service/greengrass" = [ - "aws/internal/service/greengrass/**/*", - "**/*_greengrass_*", - "**/greengrass_*" - ] - "service/guardduty" = [ - "aws/internal/service/guardduty/**/*", - "**/*_guardduty_*", - "**/guardduty_*" - ] - "service/iam" = [ - "aws/internal/service/iam/**/*", - "**/*_iam_*", - "**/iam_*" - ] - "service/identitystore" = [ - "aws/internal/service/identitystore/**/*", - "**/*_identitystore_*", - "**/identitystore_*" - ] - "service/imagebuilder" = [ - "aws/internal/service/imagebuilder/**/*", - "**/*_imagebuilder_*", - "**/imagebuilder_*" - ] - "service/inspector" = [ - "aws/internal/service/inspector/**/*", - "**/*_inspector_*", - "**/inspector_*" - ] - "service/iot" = [ - "aws/internal/service/iot/**/*", - "**/*_iot_*", - "**/iot_*" - ] - "service/iotanalytics" = [ - "aws/internal/service/iotanalytics/**/*", - "**/*_iotanalytics_*", - "**/iotanalytics_*" - ] - "service/iotevents" = [ - "aws/internal/service/iotevents/**/*", - "**/*_iotevents_*", - "**/iotevents_*" - ] - "service/kafka" = [ - "aws/internal/service/kafka/**/*", - "**/*_msk_*", - "**/msk_*", - ] - "service/kinesis" = [ - "aws/internal/service/kinesis/**/*", - "aws/*_aws_kinesis_stream*", - "website/kinesis_stream*" - ] - "service/kinesisanalytics" = [ - "aws/internal/service/kinesisanalytics/**/*", - "**/*_kinesis_analytics_*", - "**/kinesis_analytics_*" - ] - "service/kinesisanalyticsv2" = [ - "aws/internal/service/kinesisanalyticsv2/**/*", - "**/*_kinesisanalyticsv2_*", - "**/kinesisanalyticsv2_*" - ] - "service/kms" = [ - "aws/internal/service/kms/**/*", - "**/*_kms_*", - "**/kms_*" - ] - "service/lakeformation" = [ - "aws/internal/service/lakeformation/**/*", - "**/*_lakeformation_*", - "**/lakeformation_*" - ] - "service/lambda" = [ - "aws/internal/service/lambda/**/*", - 
"**/*_lambda_*", - "**/lambda_*" - ] - "service/lexmodelbuildingservice" = [ - "aws/internal/service/lexmodelbuildingservice/**/*", - "**/*_lex_*", - "**/lex_*" - ] - "service/licensemanager" = [ - "aws/internal/service/licensemanager/**/*", - "**/*_licensemanager_*", - "**/licensemanager_*" - ] - "service/lightsail" = [ - "aws/internal/service/lightsail/**/*", - "**/*_lightsail_*", - "**/lightsail_*" - ] - "service/machinelearning" = [ - "aws/internal/service/machinelearning/**/*", - "**/*_machinelearning_*", - "**/machinelearning_*" - ] - "service/macie" = [ - "aws/internal/service/macie/**/*", - "**/*_macie_*", - "**/macie_*" - ] - "service/macie2" = [ - "aws/internal/service/macie2/**/*", - "**/*_macie2_*", - "**/macie2_*" - ] - "service/marketplacecatalog" = [ - "aws/internal/service/marketplacecatalog/**/*", - "**/*_marketplace_catalog_*", - "**/marketplace_catalog_*" - ] - "service/mediaconnect" = [ - "aws/internal/service/mediaconnect/**/*", - "**/*_media_connect_*", - "**/media_connect_*" - ] - "service/mediaconvert" = [ - "aws/internal/service/mediaconvert/**/*", - "**/*_media_convert_*", - "**/media_convert_*" - ] - "service/medialive" = [ - "aws/internal/service/medialive/**/*", - "**/*_media_live_*", - "**/media_live_*" - ] - "service/mediapackage" = [ - "aws/internal/service/mediapackage/**/*", - "**/*_media_package_*", - "**/media_package_*" - ] - "service/mediastore" = [ - "aws/internal/service/mediastore/**/*", - "**/*_media_store_*", - "**/media_store_*" - ] - "service/mediatailor" = [ - "aws/internal/service/mediatailor/**/*", - "**/*_media_tailor_*", - "**/media_tailor_*", - ] - "service/mobile" = [ - "aws/internal/service/mobile/**/*", - "**/*_mobile_*", - "**/mobile_*" - ], - "service/mq" = [ - "aws/internal/service/mq/**/*", - "**/*_mq_*", - "**/mq_*" - ] - "service/mwaa" = [ - "aws/internal/service/mwaa/**/*", - "**/*_mwaa_*", - "**/mwaa_*" - ] - "service/neptune" = [ - "aws/internal/service/neptune/**/*", - "**/*_neptune_*", - 
"**/neptune_*" - ] - "service/networkfirewall" = [ - "aws/internal/service/networkfirewall/**/*", - "**/*_networkfirewall_*", - "**/networkfirewall_*", - ] - "service/networkmanager" = [ - "aws/internal/service/networkmanager/**/*", - "**/*_networkmanager_*", - "**/networkmanager_*" - ] - "service/opsworks" = [ - "aws/internal/service/opsworks/**/*", - "**/*_opsworks_*", - "**/opsworks_*" - ] - "service/organizations" = [ - "aws/internal/service/organizations/**/*", - "**/*_organizations_*", - "**/organizations_*" - ] - "service/outposts" = [ - "aws/internal/service/outposts/**/*", - "**/*_outposts_*", - "**/outposts_*" - ] - "service/pinpoint" = [ - "aws/internal/service/pinpoint/**/*", - "**/*_pinpoint_*", - "**/pinpoint_*" - ] - "service/polly" = [ - "aws/internal/service/polly/**/*", - "**/*_polly_*", - "**/polly_*" - ] - "service/pricing" = [ - "aws/internal/service/pricing/**/*", - "**/*_pricing_*", - "**/pricing_*" - ] - "service/prometheusservice" = [ - "aws/internal/service/prometheus/**/*", - "**/*_prometheus_*", - "**/prometheus_*", - ] - "service/qldb" = [ - "aws/internal/service/qldb/**/*", - "**/*_qldb_*", - "**/qldb_*" - ] - "service/quicksight" = [ - "aws/internal/service/quicksight/**/*", - "**/*_quicksight_*", - "**/quicksight_*" - ] - "service/ram" = [ - "aws/internal/service/ram/**/*", - "**/*_ram_*", - "**/ram_*" - ] - "service/rds" = [ - "aws/internal/service/rds/**/*", - "aws/*_aws_db_*", - "aws/*_aws_rds_*", - "website/**/db_*", - "website/**/rds_*" - ] - "service/redshift" = [ - "aws/internal/service/redshift/**/*", - "**/*_redshift_*", - "**/redshift_*" - ] - "service/resourcegroups" = [ - "aws/internal/service/resourcegroups/**/*", - "**/*_resourcegroups_*", - "**/resourcegroups_*" - ] - "service/resourcegroupstaggingapi" = [ - "aws/internal/service/resourcegroupstaggingapi/**/*", - "**/*_resourcegroupstaggingapi_*", - "**/resourcegroupstaggingapi_*" - ] - "service/robomaker" = [ - "aws/internal/service/robomaker/**/*", - 
"**/*_robomaker_*", - "**/robomaker_*", - ] - "service/route53" = [ - "aws/internal/service/route53/**/*", - "**/*_route53_delegation_set*", - "**/*_route53_health_check*", - "**/*_route53_query_log*", - "**/*_route53_record*", - "**/*_route53_vpc_association_authorization*", - "**/*_route53_zone*", - "**/route53_delegation_set*", - "**/route53_health_check*", - "**/route53_query_log*", - "**/route53_record*", - "**/route53_vpc_association_authorization*", - "**/route53_zone*" - ] - "service/route53domains" = [ - "aws/internal/service/route53domains/**/*", - "**/*_route53domains_*", - "**/route53domains_*" - ] - "service/route53resolver" = [ - "aws/internal/service/route53resolver/**/*", - "**/*_route53_resolver_*", - "**/route53_resolver_*" - ] - "service/s3" = [ - "aws/internal/service/s3/**/*", - "**/*_s3_bucket*", - "**/s3_bucket*", - "**/*_s3_object*", - "**/s3_object*", - "aws/*_aws_canonical_user_id*", - "website/**/canonical_user_id*" - ] - "service/s3control" = [ - "aws/internal/service/s3control/**/*", - "**/*_s3_account_*", - "**/s3_account_*", - "**/*_s3control_*", - "**/s3control_*" - ] - "service/s3outposts" = [ - "aws/internal/service/s3outposts/**/*", - "**/*_s3outposts_*", - "**/s3outposts_*" - ] - "service/sagemaker" = [ - "aws/internal/service/sagemaker/**/*", - "**/*_sagemaker_*", - "**/sagemaker_*" - ] - "service/secretsmanager" = [ - "aws/internal/service/secretsmanager/**/*", - "**/*_secretsmanager_*", - "**/secretsmanager_*" - ] - "service/securityhub" = [ - "aws/internal/service/securityhub/**/*", - "**/*_securityhub_*", - "**/securityhub_*" - ] - "service/serverlessapplicationrepository" = [ - "aws/internal/service/serverlessapplicationrepository/**/*", - "**/*_serverlessapplicationrepository_*", - "**/serverlessapplicationrepository_*" - ] - "service/servicecatalog" = [ - "aws/internal/service/servicecatalog/**/*", - "**/*_servicecatalog_*", - "**/servicecatalog_*" - ] - "service/servicediscovery" = [ - 
"aws/internal/service/servicediscovery/**/*", - "**/*_service_discovery_*", - "**/service_discovery_*" - ] - "service/servicequotas" = [ - "aws/internal/service/servicequotas/**/*", - "**/*_servicequotas_*", - "**/servicequotas_*" - ] - "service/ses" = [ - "aws/internal/service/ses/**/*", - "**/*_ses_*", - "**/ses_*" - ] - "service/sfn" = [ - "aws/internal/service/sfn/**/*", - "**/*_sfn_*", - "**/sfn_*" - ] - "service/shield" = [ - "aws/internal/service/shield/**/*", - "**/*_shield_*", - "**/shield_*", - ], - "service/signer" = [ - "**/*_signer_*", - "**/signer_*" - ] - "service/simpledb" = [ - "aws/internal/service/simpledb/**/*", - "**/*_simpledb_*", - "**/simpledb_*" - ] - "service/snowball" = [ - "aws/internal/service/snowball/**/*", - "**/*_snowball_*", - "**/snowball_*" - ] - "service/sns" = [ - "aws/internal/service/sns/**/*", - "**/*_sns_*", - "**/sns_*" - ] - "service/sqs" = [ - "aws/internal/service/sqs/**/*", - "**/*_sqs_*", - "**/sqs_*" - ] - "service/ssm" = [ - "aws/internal/service/ssm/**/*", - "**/*_ssm_*", - "**/ssm_*" - ] - "service/ssoadmin" = [ - "aws/internal/service/ssoadmin/**/*", - "**/*_ssoadmin_*", - "**/ssoadmin_*" - ] - "service/storagegateway" = [ - "aws/internal/service/storagegateway/**/*", - "**/*_storagegateway_*", - "**/storagegateway_*" - ] - "service/sts" = [ - "aws/internal/service/sts/**/*", - "aws/*_aws_caller_identity*", - "website/**/caller_identity*" - ] - "service/swf" = [ - "aws/internal/service/swf/**/*", - "**/*_swf_*", - "**/swf_*" - ] - "service/synthetics" = [ - "aws/internal/service/synthetics/**/*", - "**/*_synthetics_*", - "**/synthetics_*" - ] - "service/timestreamwrite" = [ - "aws/internal/service/timestreamwrite/**/*", - "**/*_timestreamwrite_*", - "**/timestreamwrite_*" - ] - "service/transfer" = [ - "aws/internal/service/transfer/**/*", - "**/*_transfer_*", - "**/transfer_*" - ] - "service/waf" = [ - "aws/internal/service/waf/**/*", - "aws/internal/service/wafregional/**/*", - "**/*_waf_*", - "**/waf_*", - 
"**/*_wafregional_*", - "**/wafregional_*" - ] - "service/wafv2" = [ - "aws/internal/service/wafv2/**/*", - "**/*_wafv2_*", - "**/wafv2_*", - ] - "service/workdocs" = [ - "aws/internal/service/workdocs/**/*", - "**/*_workdocs_*", - "**/workdocs_*" - ] - "service/worklink" = [ - "aws/internal/service/worklink/**/*", - "**/*_worklink_*", - "**/worklink_*" - ] - "service/workmail" = [ - "aws/internal/service/workmail/**/*", - "**/*_workmail_*", - "**/workmail_*" - ] - "service/workspaces" = [ - "aws/internal/service/workspaces/**/*", - "**/*_workspaces_*", - "**/workspaces_*" - ] - "service/xray" = [ - "aws/internal/service/xray/**/*", - "**/*_xray_*", - "**/xray_*" - ] - } -} - -behavior "remove_labels_on_reply" "remove_stale" { - labels = ["waiting-response", "stale"] - only_non_maintainers = true -} - behavior "pull_request_size_labeler" "size" { label_prefix = "size/" label_map = { diff --git a/CHANGELOG.md b/CHANGELOG.md index 861ed880e64..5cebc142d93 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,83 @@ +## 3.44.0 (June 03, 2021) + +FEATURES: + +* **New Resource:** `aws_amplify_branch` ([#11937](https://github.com/hashicorp/terraform-provider-aws/issues/11937)) +* **New Resource:** `aws_amplify_domain_association` ([#11938](https://github.com/hashicorp/terraform-provider-aws/issues/11938)) +* **New Resource:** `aws_amplify_webhook` ([#11939](https://github.com/hashicorp/terraform-provider-aws/issues/11939)) +* **New Resource:** `aws_servicecatalog_principal_portfolio_association` ([#19470](https://github.com/hashicorp/terraform-provider-aws/issues/19470)) + +ENHANCEMENTS: + +* data-source/aws_launch_configuration: Add `throughput` attribute to `ebs_block_device` and `root_block_device` configuration blocks to support GP3 volumes ([#19632](https://github.com/hashicorp/terraform-provider-aws/issues/19632)) +* resource/aws_acmpca_certificate_authority: Add `s3_object_acl` argument to `revocation_configuration.crl_configuration` configuration block 
([#19578](https://github.com/hashicorp/terraform-provider-aws/issues/19578)) +* resource/aws_cloudwatch_log_metric_filter: Add `dimensions` argument to `metric_transformation` configuration block ([#19625](https://github.com/hashicorp/terraform-provider-aws/issues/19625)) +* resource/aws_cloudwatch_metric_alarm: Add plan time validation to `metric_query.metric.stat`. ([#19571](https://github.com/hashicorp/terraform-provider-aws/issues/19571)) +* resource/aws_devicefarm_project: Add `default_job_timeout_minutes` and `tags` argument ([#19574](https://github.com/hashicorp/terraform-provider-aws/issues/19574)) +* resource/aws_devicefarm_project: Add plan time validation for `name` ([#19574](https://github.com/hashicorp/terraform-provider-aws/issues/19574)) +* resource/aws_fsx_lustre_filesystem: Allow updating `storage_capacity`. ([#19568](https://github.com/hashicorp/terraform-provider-aws/issues/19568)) +* resource/aws_launch_configuration: Add `throughput` argument to `ebs_block_device` and `root_block_device` configuration blocks to support GP3 volumes ([#19632](https://github.com/hashicorp/terraform-provider-aws/issues/19632)) + +BUG FIXES: + +* resource/aws_amplify_app: Mark the `enable_performance_mode` argument in the `auto_branch_creation_config` configuration block as `ForceNew` ([#11937](https://github.com/hashicorp/terraform-provider-aws/issues/11937)) +* resource/aws_cloudwatch_event_api_destination: Fix crash on resource update ([#19654](https://github.com/hashicorp/terraform-provider-aws/issues/19654)) +* resource/aws_elasticache_cluster: Fix provider-level `default_tags` support for resource ([#19615](https://github.com/hashicorp/terraform-provider-aws/issues/19615)) +* resource/aws_iam_access_key: Fix status not defaulting to Active ([#19606](https://github.com/hashicorp/terraform-provider-aws/issues/19606)) + +## 3.43.0 (June 01, 2021) + +FEATURES: + +* **New Data Source:** `aws_cloudwatch_event_connection` 
([#18905](https://github.com/hashicorp/terraform-provider-aws/issues/18905)) +* **New Resource:** `aws_amplify_app` ([#15966](https://github.com/hashicorp/terraform-provider-aws/issues/15966)) +* **New Resource:** `aws_amplify_backend_environment` ([#11936](https://github.com/hashicorp/terraform-provider-aws/issues/11936)) +* **New Resource:** `aws_cloudwatch_event_api_destination` ([#18905](https://github.com/hashicorp/terraform-provider-aws/issues/18905)) +* **New Resource:** `aws_cloudwatch_event_connection` ([#18905](https://github.com/hashicorp/terraform-provider-aws/issues/18905)) +* **New Resource:** `aws_schemas_discoverer` ([#19100](https://github.com/hashicorp/terraform-provider-aws/issues/19100)) +* **New Resource:** `aws_schemas_registry` ([#19100](https://github.com/hashicorp/terraform-provider-aws/issues/19100)) +* **New Resource:** `aws_schemas_schema` ([#19100](https://github.com/hashicorp/terraform-provider-aws/issues/19100)) +* **New Resource:** `aws_servicecatalog_budget_resource_association` ([#19452](https://github.com/hashicorp/terraform-provider-aws/issues/19452)) +* **New Resource:** `aws_servicecatalog_provisioning_artifact` ([#19316](https://github.com/hashicorp/terraform-provider-aws/issues/19316)) +* **New Resource:** `aws_servicecatalog_tag_option_resource_association` ([#19448](https://github.com/hashicorp/terraform-provider-aws/issues/19448)) + +ENHANCEMENTS: + +* data-source/aws_msk_cluster: Add `bootstrap_brokers_sasl_iam` attribute ([#19404](https://github.com/hashicorp/terraform-provider-aws/issues/19404)) +* resource/aws_cloudfront_distribution: Add `connection_attempts`, `connection_timeout`, and `origin_shield`. 
([#16049](https://github.com/hashicorp/terraform-provider-aws/issues/16049)) +* resource/aws_cloudtrail: Add `AWS::DynamoDB::Table` as an option for `event_selector`.`data_resource`.`type` ([#19559](https://github.com/hashicorp/terraform-provider-aws/issues/19559)) +* resource/aws_ec2_capacity_reservation: Add `outpost_arn` argument ([#19535](https://github.com/hashicorp/terraform-provider-aws/issues/19535)) +* resource/aws_ecs_service: Add support for ECS Anywhere with the `launch_type` `EXTERNAL` ([#19557](https://github.com/hashicorp/terraform-provider-aws/issues/19557)) +* resource/aws_eks_node_group: Add `taint` argument ([#19482](https://github.com/hashicorp/terraform-provider-aws/issues/19482)) +* resource/aws_elasticache_parameter_group: Add `tags` argument and `arn` and `tags_all` attributes ([#19551](https://github.com/hashicorp/terraform-provider-aws/issues/19551)) +* resource/aws_lambda_event_source_mapping: Add `function_response_types` argument to support AWS Lambda checkpointing ([#19425](https://github.com/hashicorp/terraform-provider-aws/issues/19425)) +* resource/aws_lambda_event_source_mapping: Add `queues` argument to support Amazon MQ for Apache ActiveMQ event sources ([#19425](https://github.com/hashicorp/terraform-provider-aws/issues/19425)) +* resource/aws_lambda_event_source_mapping: Add `self_managed_event_source` and `source_access_configuration` arguments to support self-managed Apache Kafka event sources ([#19425](https://github.com/hashicorp/terraform-provider-aws/issues/19425)) +* resource/aws_lambda_event_source_mapping: Add `tumbling_window_in_seconds` argument to support AWS Lambda streaming analytics calculations ([#19425](https://github.com/hashicorp/terraform-provider-aws/issues/19425)) +* resource/aws_msk_cluster: Add `bootstrap_brokers_sasl_iam` attribute ([#19404](https://github.com/hashicorp/terraform-provider-aws/issues/19404)) +* resource/aws_msk_cluster: Add `iam` argument to `client_authentication.sasl` configuration 
block ([#19404](https://github.com/hashicorp/terraform-provider-aws/issues/19404)) +* resource/aws_msk_configuration: `kafka_versions` argument is optional ([#17571](https://github.com/hashicorp/terraform-provider-aws/issues/17571)) +* resource/aws_sns_topic: Add `firehose_success_feedback_role_arn`, `firehose_success_feedback_sample_rate` and `firehose_failure_feedback_role_arn` arguments. ([#19528](https://github.com/hashicorp/terraform-provider-aws/issues/19528)) +* resource/aws_sns_topic: Add `owner` attribute. ([#19528](https://github.com/hashicorp/terraform-provider-aws/issues/19528)) +* resource/aws_sns_topic: Add plan time validation for `application_success_feedback_role_arn`, `application_failure_feedback_role_arn`, `http_success_feedback_role_arn`, `http_failure_feedback_role_arn`, `lambda_success_feedback_role_arn`, `lambda_failure_feedback_role_arn`, `sqs_success_feedback_role_arn`, `sqs_failure_feedback_role_arn`. ([#19528](https://github.com/hashicorp/terraform-provider-aws/issues/19528)) + +BUG FIXES: + +* data-source/aws_launch_template: Add `interface_type` to `network_interfaces` attribute ([#19492](https://github.com/hashicorp/terraform-provider-aws/issues/19492)) +* data-source/aws_mq_broker: Correct type for `logs.audit` attribute ([#19502](https://github.com/hashicorp/terraform-provider-aws/issues/19502)) +* resource/aws_apprunner_service: Correctly configure `authentication_configuration`, `code_configuration`, and `image_configuration` nested arguments in API requests ([#19471](https://github.com/hashicorp/terraform-provider-aws/issues/19471)) +* resource/aws_apprunner_service: Handle asynchronous IAM eventual consistency error on creation ([#19483](https://github.com/hashicorp/terraform-provider-aws/issues/19483)) +* resource/aws_apprunner_service: Suppress `instance_configuration` `cpu` and `memory` differences ([#19483](https://github.com/hashicorp/terraform-provider-aws/issues/19483)) +* resource/aws_batch_job_definition: Don't crash 
when setting `timeout.attempt_duration_seconds` to `null` ([#19505](https://github.com/hashicorp/terraform-provider-aws/issues/19505)) +* resource/aws_cloudformation_stack: Avoid conflicts with `on_failure` and `disable_rollback` ([#10539](https://github.com/hashicorp/terraform-provider-aws/issues/10539)) +* resource/aws_cloudwatch_event_api_destination: Reduce the maximum allowed value for the `invocation_rate_limit_per_second` argument to `300` ([#19594](https://github.com/hashicorp/terraform-provider-aws/issues/19594)) +* resource/aws_ec2_managed_prefix_list: Fix crash with multiple description-only updates ([#19517](https://github.com/hashicorp/terraform-provider-aws/issues/19517)) +* resource/aws_eks_addon: Use `service_account_role_arn`, if set, on updates ([#19454](https://github.com/hashicorp/terraform-provider-aws/issues/19454)) +* resource/aws_glue_connection: `connection_properties` are optional ([#19375](https://github.com/hashicorp/terraform-provider-aws/issues/19375)) +* resource/aws_lb_listener_rule: Allow blank string for `action.redirect.query` nested argument ([#19496](https://github.com/hashicorp/terraform-provider-aws/issues/19496)) +* resource/aws_synthetics_canary: Change minimum `timeout_in_seconds` in `run_config` from `60` to `3` ([#19515](https://github.com/hashicorp/terraform-provider-aws/issues/19515)) +* resource/aws_vpn_connection: Allow `local_ipv4_network_cidr`, `remote_ipv4_network_cidr`, `local_ipv6_network_cidr`, and `remote_ipv6_network_cidr` to be CIDRs of any size ([#17573](https://github.com/hashicorp/terraform-provider-aws/issues/17573)) + ## 3.42.0 (May 20, 2021) FEATURES: diff --git a/aws/cloudfront_distribution_configuration_structure.go b/aws/cloudfront_distribution_configuration_structure.go index 6ecfeba9d4c..bace2eca835 100644 --- a/aws/cloudfront_distribution_configuration_structure.go +++ b/aws/cloudfront_distribution_configuration_structure.go @@ -699,6 +699,13 @@ func expandOrigin(m map[string]interface{}) 
*cloudfront.Origin { Id: aws.String(m["origin_id"].(string)), DomainName: aws.String(m["domain_name"].(string)), } + + if v, ok := m["connection_attempts"]; ok { + origin.ConnectionAttempts = aws.Int64(int64(v.(int))) + } + if v, ok := m["connection_timeout"]; ok { + origin.ConnectionTimeout = aws.Int64(int64(v.(int))) + } if v, ok := m["custom_header"]; ok { origin.CustomHeaders = expandCustomHeaders(v.(*schema.Set)) } @@ -710,6 +717,13 @@ func expandOrigin(m map[string]interface{}) *cloudfront.Origin { if v, ok := m["origin_path"]; ok { origin.OriginPath = aws.String(v.(string)) } + + if v, ok := m["origin_shield"]; ok { + if s := v.([]interface{}); len(s) > 0 { + origin.OriginShield = expandOriginShield(s[0].(map[string]interface{})) + } + } + if v, ok := m["s3_origin_config"]; ok { if s := v.([]interface{}); len(s) > 0 { origin.S3OriginConfig = expandS3OriginConfig(s[0].(map[string]interface{})) @@ -731,6 +745,12 @@ func flattenOrigin(or *cloudfront.Origin) map[string]interface{} { m := make(map[string]interface{}) m["origin_id"] = aws.StringValue(or.Id) m["domain_name"] = aws.StringValue(or.DomainName) + if or.ConnectionAttempts != nil { + m["connection_attempts"] = int(aws.Int64Value(or.ConnectionAttempts)) + } + if or.ConnectionTimeout != nil { + m["connection_timeout"] = int(aws.Int64Value(or.ConnectionTimeout)) + } if or.CustomHeaders != nil { m["custom_header"] = flattenCustomHeaders(or.CustomHeaders) } @@ -740,6 +760,9 @@ func flattenOrigin(or *cloudfront.Origin) map[string]interface{} { if or.OriginPath != nil { m["origin_path"] = aws.StringValue(or.OriginPath) } + if or.OriginShield != nil && aws.BoolValue(or.OriginShield.Enabled) { + m["origin_shield"] = []interface{}{flattenOriginShield(or.OriginShield)} + } if or.S3OriginConfig != nil && aws.StringValue(or.S3OriginConfig.OriginAccessIdentity) != "" { m["s3_origin_config"] = []interface{}{flattenS3OriginConfig(or.S3OriginConfig)} } @@ -851,6 +874,12 @@ func originHash(v interface{}) int { m := 
v.(map[string]interface{}) buf.WriteString(fmt.Sprintf("%s-", m["origin_id"].(string))) buf.WriteString(fmt.Sprintf("%s-", m["domain_name"].(string))) + if v, ok := m["connection_attempts"]; ok { + buf.WriteString(fmt.Sprintf("%d-", v.(int))) + } + if v, ok := m["connection_timeout"]; ok { + buf.WriteString(fmt.Sprintf("%d-", v.(int))) + } if v, ok := m["custom_header"]; ok { buf.WriteString(fmt.Sprintf("%d-", customHeadersHash(v.(*schema.Set)))) } @@ -862,6 +891,13 @@ func originHash(v interface{}) int { if v, ok := m["origin_path"]; ok { buf.WriteString(fmt.Sprintf("%s-", v.(string))) } + + if v, ok := m["origin_shield"]; ok { + if s := v.([]interface{}); len(s) > 0 && s[0] != nil { + buf.WriteString(fmt.Sprintf("%d-", originShieldHash((s[0].(map[string]interface{}))))) + } + } + if v, ok := m["s3_origin_config"]; ok { if s := v.([]interface{}); len(s) > 0 && s[0] != nil { buf.WriteString(fmt.Sprintf("%d-", s3OriginConfigHash((s[0].(map[string]interface{}))))) @@ -1026,12 +1062,26 @@ func expandS3OriginConfig(m map[string]interface{}) *cloudfront.S3OriginConfig { } } +func expandOriginShield(m map[string]interface{}) *cloudfront.OriginShield { + return &cloudfront.OriginShield{ + Enabled: aws.Bool(m["enabled"].(bool)), + OriginShieldRegion: aws.String(m["origin_shield_region"].(string)), + } +} + func flattenS3OriginConfig(s3o *cloudfront.S3OriginConfig) map[string]interface{} { return map[string]interface{}{ "origin_access_identity": aws.StringValue(s3o.OriginAccessIdentity), } } +func flattenOriginShield(o *cloudfront.OriginShield) map[string]interface{} { + return map[string]interface{}{ + "origin_shield_region": aws.StringValue(o.OriginShieldRegion), + "enabled": aws.BoolValue(o.Enabled), + } +} + // Assemble the hash for the aws_cloudfront_distribution s3_origin_config // TypeSet attribute. 
func s3OriginConfigHash(v interface{}) int { @@ -1041,6 +1091,14 @@ func s3OriginConfigHash(v interface{}) int { return hashcode.String(buf.String()) } +func originShieldHash(v interface{}) int { + var buf bytes.Buffer + m := v.(map[string]interface{}) + buf.WriteString(fmt.Sprintf("%t-", m["enabled"].(bool))) + buf.WriteString(fmt.Sprintf("%s-", m["origin_shield_region"].(string))) + return hashcode.String(buf.String()) +} + func expandCustomErrorResponses(s *schema.Set) *cloudfront.CustomErrorResponses { qty := 0 items := []*cloudfront.CustomErrorResponse{} diff --git a/aws/cloudfront_distribution_configuration_structure_test.go b/aws/cloudfront_distribution_configuration_structure_test.go index 9ec4495829e..b1ffdc2e859 100644 --- a/aws/cloudfront_distribution_configuration_structure_test.go +++ b/aws/cloudfront_distribution_configuration_structure_test.go @@ -136,6 +136,13 @@ func customOriginSslProtocolsConf() *schema.Set { return schema.NewSet(schema.HashString, []interface{}{"SSLv3", "TLSv1", "TLSv1.1", "TLSv1.2"}) } +func originShield() map[string]interface{} { + return map[string]interface{}{ + "enabled": true, + "origin_shield_region": "testRegion", + } +} + func s3OriginConf() map[string]interface{} { return map[string]interface{}{ "origin_access_identity": "origin-access-identity/cloudfront/E127EXAMPLE51Z", @@ -151,6 +158,7 @@ func originWithCustomConf() map[string]interface{} { "custom_header": originCustomHeadersConf(), } } + func originWithS3Conf() map[string]interface{} { return map[string]interface{}{ "origin_id": "S3Origin", @@ -816,6 +824,27 @@ func TestCloudFrontStructure_flattenCustomOriginConfigSSL(t *testing.T) { } } +func TestCloudFrontStructure_expandOriginShield(t *testing.T) { + data := originShield() + o := expandOriginShield(data) + if *o.Enabled != true { + t.Fatalf("Expected Enabled to be true, got %v", *o.Enabled) + } + if *o.OriginShieldRegion != "testRegion" { + t.Fatalf("Expected OriginShieldRegion to be testRegion, got %v", 
*o.OriginShieldRegion) + } +} + +func TestCloudFrontStructure_flattenOriginShield(t *testing.T) { + in := originShield() + o := expandOriginShield(in) + out := flattenOriginShield(o) + + if !reflect.DeepEqual(in, out) { + t.Fatalf("Expected out to be %v, got %v", in, out) + } +} + func TestCloudFrontStructure_expandS3OriginConfig(t *testing.T) { data := s3OriginConf() s3o := expandS3OriginConfig(data) diff --git a/aws/config.go b/aws/config.go index 372e3401250..f436c240284 100644 --- a/aws/config.go +++ b/aws/config.go @@ -28,6 +28,7 @@ import ( "github.com/aws/aws-sdk-go/service/backup" "github.com/aws/aws-sdk-go/service/batch" "github.com/aws/aws-sdk-go/service/budgets" + "github.com/aws/aws-sdk-go/service/chime" "github.com/aws/aws-sdk-go/service/cloud9" "github.com/aws/aws-sdk-go/service/cloudformation" "github.com/aws/aws-sdk-go/service/cloudfront" @@ -103,6 +104,7 @@ import ( "github.com/aws/aws-sdk-go/service/lexmodelbuildingservice" "github.com/aws/aws-sdk-go/service/licensemanager" "github.com/aws/aws-sdk-go/service/lightsail" + "github.com/aws/aws-sdk-go/service/locationservice" "github.com/aws/aws-sdk-go/service/macie" "github.com/aws/aws-sdk-go/service/macie2" "github.com/aws/aws-sdk-go/service/managedblockchain" @@ -139,6 +141,7 @@ import ( "github.com/aws/aws-sdk-go/service/s3control" "github.com/aws/aws-sdk-go/service/s3outposts" "github.com/aws/aws-sdk-go/service/sagemaker" + "github.com/aws/aws-sdk-go/service/schemas" "github.com/aws/aws-sdk-go/service/secretsmanager" "github.com/aws/aws-sdk-go/service/securityhub" "github.com/aws/aws-sdk-go/service/serverlessapplicationrepository" @@ -233,6 +236,7 @@ type AWSClient struct { batchconn *batch.Batch budgetconn *budgets.Budgets cfconn *cloudformation.CloudFormation + chimeconn *chime.Chime cloud9conn *cloud9.Cloud9 cloudfrontconn *cloudfront.CloudFront cloudhsmv2conn *cloudhsmv2.CloudHSMV2 @@ -310,6 +314,7 @@ type AWSClient struct { lexmodelconn *lexmodelbuildingservice.LexModelBuildingService 
licensemanagerconn *licensemanager.LicenseManager lightsailconn *lightsail.Lightsail + locationconn *locationservice.LocationService macieconn *macie.Macie macie2conn *macie2.Macie2 managedblockchainconn *managedblockchain.ManagedBlockchain @@ -352,6 +357,7 @@ type AWSClient struct { s3outpostsconn *s3outposts.S3Outposts sagemakerconn *sagemaker.SageMaker scconn *servicecatalog.ServiceCatalog + schemasconn *schemas.Schemas sdconn *servicediscovery.ServiceDiscovery secretsmanagerconn *secretsmanager.SecretsManager securityhubconn *securityhub.SecurityHub @@ -481,6 +487,7 @@ func (c *Config) Client() (interface{}, error) { batchconn: batch.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["batch"])})), budgetconn: budgets.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["budgets"])})), cfconn: cloudformation.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["cloudformation"])})), + chimeconn: chime.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["chime"])})), cloud9conn: cloud9.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["cloud9"])})), cloudfrontconn: cloudfront.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["cloudfront"])})), cloudhsmv2conn: cloudhsmv2.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["cloudhsm"])})), @@ -557,6 +564,7 @@ func (c *Config) Client() (interface{}, error) { lexmodelconn: lexmodelbuildingservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["lexmodels"])})), licensemanagerconn: licensemanager.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["licensemanager"])})), lightsailconn: lightsail.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["lightsail"])})), + locationconn: locationservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["location"])})), macieconn: macie.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["macie"])})), macie2conn: macie2.New(sess.Copy(&aws.Config{Endpoint: 
aws.String(c.Endpoints["macie2"])})), managedblockchainconn: managedblockchain.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["managedblockchain"])})), @@ -595,6 +603,7 @@ func (c *Config) Client() (interface{}, error) { s3outpostsconn: s3outposts.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["s3outposts"])})), sagemakerconn: sagemaker.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["sagemaker"])})), scconn: servicecatalog.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["servicecatalog"])})), + schemasconn: schemas.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["schemas"])})), sdconn: servicediscovery.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["servicediscovery"])})), secretsmanagerconn: secretsmanager.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["secretsmanager"])})), securityhubconn: securityhub.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["securityhub"])})), diff --git a/aws/data_source_aws_cloudwatch_event_connection.go b/aws/data_source_aws_cloudwatch_event_connection.go new file mode 100644 index 00000000000..4c46028e192 --- /dev/null +++ b/aws/data_source_aws_cloudwatch_event_connection.go @@ -0,0 +1,62 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + events "github.com/aws/aws-sdk-go/service/cloudwatchevents" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" +) + +func dataSourceAwsCloudwatchEventConnection() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsCloudwatchEventConnectionRead, + + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + }, + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "authorization_type": { + Type: schema.TypeString, + Computed: true, + }, + "secret_arn": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func dataSourceAwsCloudwatchEventConnectionRead(d *schema.ResourceData, meta interface{}) 
error { + d.SetId(d.Get("name").(string)) + + conn := meta.(*AWSClient).cloudwatcheventsconn + + input := &events.DescribeConnectionInput{ + Name: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Reading CloudWatchEvent connection (%s)", d.Id()) + output, err := conn.DescribeConnection(input) + if err != nil { + return fmt.Errorf("error getting CloudWatchEvent connection (%s): %w", d.Id(), err) + } + + if output == nil { + return fmt.Errorf("error getting CloudWatchEvent connection (%s): empty response", d.Id()) + } + + log.Printf("[DEBUG] Found CloudWatchEvent connection: %#v", *output) + d.Set("arn", output.ConnectionArn) + d.Set("secret_arn", output.SecretArn) + d.Set("name", output.Name) + d.Set("authorization_type", output.AuthorizationType) + return nil +} diff --git a/aws/data_source_aws_cloudwatch_event_connection_test.go b/aws/data_source_aws_cloudwatch_event_connection_test.go new file mode 100644 index 00000000000..c148d19bc25 --- /dev/null +++ b/aws/data_source_aws_cloudwatch_event_connection_test.go @@ -0,0 +1,52 @@ +package aws + +import ( + "testing" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccAWSDataSourceCloudwatch_Event_Connection_basic(t *testing.T) { + dataSourceName := "data.aws_cloudwatch_event_connection.test" + resourceName := "aws_cloudwatch_event_connection.api_key" + + name := acctest.RandomWithPrefix("tf-acc-test") + authorizationType := "API_KEY" + description := acctest.RandomWithPrefix("tf-acc-test") + key := acctest.RandomWithPrefix("tf-acc-test") + value := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ErrorCheck: testAccErrorCheck(t), + Providers: testAccProviders, + Steps: []resource.TestStep{ + { + Config: testAccAWSCloudwatch_Event_ConnectionDataConfig( + name, + description, + authorizationType, + key, + value, + ), + Check: 
resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttrPair(dataSourceName, "arn", resourceName, "arn"), + resource.TestCheckResourceAttrPair(dataSourceName, "secret_arn", resourceName, "secret_arn"), + resource.TestCheckResourceAttrPair(dataSourceName, "name", resourceName, "name"), + resource.TestCheckResourceAttrPair(dataSourceName, "authorization_type", resourceName, "authorization_type"), + ), + }, + }, + }) +} + +func testAccAWSCloudwatch_Event_ConnectionDataConfig(name, description, authorizationType, key, value string) string { + return composeConfig( + testAccAWSCloudWatchEventConnectionConfig_apiKey(name, description, authorizationType, key, value), + ` +data "aws_cloudwatch_event_connection" "test" { + name = aws_cloudwatch_event_connection.api_key.name +} +`) +} diff --git a/aws/data_source_aws_default_tags.go b/aws/data_source_aws_default_tags.go new file mode 100644 index 00000000000..750359f41ed --- /dev/null +++ b/aws/data_source_aws_default_tags.go @@ -0,0 +1,36 @@ +package aws + +import ( + "fmt" + + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" +) + +func dataSourceAwsDefaultTags() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsDefaultTagsRead, + + Schema: map[string]*schema.Schema{ + "tags": tagsSchemaComputed(), + }, + } +} + +func dataSourceAwsDefaultTagsRead(d *schema.ResourceData, meta interface{}) error { + defaultTagsConfig := meta.(*AWSClient).DefaultTagsConfig + ignoreTagsConfig := meta.(*AWSClient).IgnoreTagsConfig + + d.SetId(meta.(*AWSClient).partition) + + tags := defaultTagsConfig.GetTags() + + if tags != nil { + if err := d.Set("tags", tags.IgnoreAws().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { + return fmt.Errorf("error setting tags: %w", err) + } + } else { + d.Set("tags", nil) + } + + return nil +} diff --git a/aws/data_source_aws_default_tags_test.go b/aws/data_source_aws_default_tags_test.go new file mode 100644 index 00000000000..f7d1a0625b3 --- /dev/null +++ 
b/aws/data_source_aws_default_tags_test.go @@ -0,0 +1,122 @@ +package aws + +import ( + "testing" + + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" +) + +func TestAccAWSDefaultTagsDataSource_basic(t *testing.T) { + var providers []*schema.Provider + + dataSourceName := "data.aws_default_tags.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ErrorCheck: testAccErrorCheck(t, ec2.EndpointsID), + ProviderFactories: testAccProviderFactoriesInternal(&providers), + CheckDestroy: nil, + Steps: []resource.TestStep{ + { + Config: composeConfig( + testAccAWSProviderConfigDefaultTags_Tags1("first", "value"), + testAccAWSDefaultTagsDataSource(), + ), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr(dataSourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(dataSourceName, "tags.first", "value"), + ), + }, + }, + }) +} + +func TestAccAWSDefaultTagsDataSource_empty(t *testing.T) { + var providers []*schema.Provider + + dataSourceName := "data.aws_default_tags.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ErrorCheck: testAccErrorCheck(t, ec2.EndpointsID), + ProviderFactories: testAccProviderFactoriesInternal(&providers), + CheckDestroy: nil, + Steps: []resource.TestStep{ + { + Config: composeConfig( + testAccAWSProviderConfigDefaultTags_Tags0(), + testAccAWSDefaultTagsDataSource(), + ), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr(dataSourceName, "tags.%", "0"), + ), + }, + }, + }) +} + +func TestAccAWSDefaultTagsDataSource_multiple(t *testing.T) { + var providers []*schema.Provider + + dataSourceName := "data.aws_default_tags.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ErrorCheck: testAccErrorCheck(t, ec2.EndpointsID), + 
ProviderFactories: testAccProviderFactoriesInternal(&providers), + CheckDestroy: nil, + Steps: []resource.TestStep{ + { + Config: composeConfig( + testAccAWSProviderConfigDefaultTags_Tags2("nuera", "hijo", "escalofrios", "calambres"), + testAccAWSDefaultTagsDataSource(), + ), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr(dataSourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(dataSourceName, "tags.nuera", "hijo"), + resource.TestCheckResourceAttr(dataSourceName, "tags.escalofrios", "calambres"), + ), + }, + }, + }) +} + +func TestAccAWSDefaultTagsDataSource_ignore(t *testing.T) { + var providers []*schema.Provider + + dataSourceName := "data.aws_default_tags.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ErrorCheck: testAccErrorCheck(t, ec2.EndpointsID), + ProviderFactories: testAccProviderFactoriesInternal(&providers), + CheckDestroy: nil, + Steps: []resource.TestStep{ + { + Config: composeConfig( + testAccAWSProviderConfigDefaultTags_Tags1("Tabac", "Louis Chiron"), + testAccAWSDefaultTagsDataSource(), + ), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr(dataSourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(dataSourceName, "tags.Tabac", "Louis Chiron"), + ), + }, + { + Config: composeConfig( + testAccProviderConfigDefaultAndIgnoreTagsKeys1("Tabac", "Louis Chiron"), + testAccAWSDefaultTagsDataSource(), + ), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr(dataSourceName, "tags.%", "0"), + ), + }, + }, + }) +} + +func testAccAWSDefaultTagsDataSource() string { + return `data "aws_default_tags" "test" {}` +} diff --git a/aws/data_source_aws_launch_configuration.go b/aws/data_source_aws_launch_configuration.go index 76b673b4eb5..88acbc0a0d3 100644 --- a/aws/data_source_aws_launch_configuration.go +++ b/aws/data_source_aws_launch_configuration.go @@ -105,7 +105,7 @@ func 
dataSourceAwsLaunchConfiguration() *schema.Resource { Computed: true, }, - "no_device": { + "encrypted": { Type: schema.TypeBool, Computed: true, }, @@ -115,11 +115,21 @@ func dataSourceAwsLaunchConfiguration() *schema.Resource { Computed: true, }, + "no_device": { + Type: schema.TypeBool, + Computed: true, + }, + "snapshot_id": { Type: schema.TypeString, Computed: true, }, + "throughput": { + Type: schema.TypeInt, + Computed: true, + }, + "volume_size": { Type: schema.TypeInt, Computed: true, @@ -129,11 +139,6 @@ func dataSourceAwsLaunchConfiguration() *schema.Resource { Type: schema.TypeString, Computed: true, }, - - "encrypted": { - Type: schema.TypeBool, - Computed: true, - }, }, }, }, @@ -197,6 +202,11 @@ func dataSourceAwsLaunchConfiguration() *schema.Resource { Computed: true, }, + "throughput": { + Type: schema.TypeInt, + Computed: true, + }, + "volume_size": { Type: schema.TypeInt, Computed: true, diff --git a/aws/data_source_aws_launch_template.go b/aws/data_source_aws_launch_template.go index 10b79878541..a4251784ce4 100644 --- a/aws/data_source_aws_launch_template.go +++ b/aws/data_source_aws_launch_template.go @@ -315,6 +315,10 @@ func dataSourceAwsLaunchTemplate() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "interface_type": { + Type: schema.TypeString, + Computed: true, + }, }, }, }, diff --git a/aws/data_source_aws_launch_template_test.go b/aws/data_source_aws_launch_template_test.go index 84919d90e73..b98277cdfd8 100644 --- a/aws/data_source_aws_launch_template_test.go +++ b/aws/data_source_aws_launch_template_test.go @@ -303,7 +303,7 @@ func TestAccAWSLaunchTemplateDataSource_NonExistent(t *testing.T) { func testAccAWSLaunchTemplateDataSourceConfig_Basic(rName string) string { return fmt.Sprintf(` resource "aws_launch_template" "test" { - name = %q + name = %[1]q } data "aws_launch_template" "test" { @@ -315,7 +315,7 @@ data "aws_launch_template" "test" { func testAccAWSLaunchTemplateDataSourceConfig_BasicId(rName string)
string { return fmt.Sprintf(` resource "aws_launch_template" "test" { - name = %q + name = %[1]q } data "aws_launch_template" "test" { diff --git a/aws/data_source_aws_mq_broker.go b/aws/data_source_aws_mq_broker.go index 7c9ad4d3a41..d7d8b9c4c60 100644 --- a/aws/data_source_aws_mq_broker.go +++ b/aws/data_source_aws_mq_broker.go @@ -6,6 +6,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/mq" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/experimental/nullable" "github.com/terraform-providers/terraform-provider-aws/aws/internal/keyvaluetags" ) @@ -162,15 +163,7 @@ func dataSourceAwsMqBroker() *schema.Resource { }, "logs": { Type: schema.TypeList, - Optional: true, - MaxItems: 1, - // Ignore missing configuration block - DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { - if old == "1" && new == "0" { - return true - } - return false - }, + Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "general": { @@ -178,7 +171,7 @@ func dataSourceAwsMqBroker() *schema.Resource { Computed: true, }, "audit": { - Type: schema.TypeBool, + Type: nullable.TypeNullableBool, Computed: true, }, }, diff --git a/aws/data_source_aws_msk_cluster.go b/aws/data_source_aws_msk_cluster.go index c46c155e6a2..d6cd65c89f8 100644 --- a/aws/data_source_aws_msk_cluster.go +++ b/aws/data_source_aws_msk_cluster.go @@ -23,6 +23,10 @@ func dataSourceAwsMskCluster() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "bootstrap_brokers_sasl_iam": { + Type: schema.TypeString, + Computed: true, + }, "bootstrap_brokers_sasl_scram": { Type: schema.TypeString, Computed: true, @@ -104,6 +108,7 @@ func dataSourceAwsMskClusterRead(d *schema.ResourceData, meta interface{}) error d.Set("arn", cluster.ClusterArn) d.Set("bootstrap_brokers", sortMskClusterEndpoints(aws.StringValue(bootstrapBrokersOutput.BootstrapBrokerString))) + 
d.Set("bootstrap_brokers_sasl_iam", sortMskClusterEndpoints(aws.StringValue(bootstrapBrokersOutput.BootstrapBrokerStringSaslIam))) d.Set("bootstrap_brokers_sasl_scram", sortMskClusterEndpoints(aws.StringValue(bootstrapBrokersOutput.BootstrapBrokerStringSaslScram))) d.Set("bootstrap_brokers_tls", sortMskClusterEndpoints(aws.StringValue(bootstrapBrokersOutput.BootstrapBrokerStringTls))) d.Set("cluster_name", cluster.ClusterName) diff --git a/aws/data_source_aws_msk_cluster_test.go b/aws/data_source_aws_msk_cluster_test.go index 4ddfe159bc2..594f721bc17 100644 --- a/aws/data_source_aws_msk_cluster_test.go +++ b/aws/data_source_aws_msk_cluster_test.go @@ -39,7 +39,7 @@ func TestAccAWSMskClusterDataSource_Name(t *testing.T) { } func testAccMskClusterDataSourceConfigName(rName string) string { - return testAccMskClusterBaseConfig() + fmt.Sprintf(` + return composeConfig(testAccMskClusterBaseConfig(rName), fmt.Sprintf(` resource "aws_msk_cluster" "test" { cluster_name = %[1]q kafka_version = "2.2.1" @@ -60,5 +60,5 @@ resource "aws_msk_cluster" "test" { data "aws_msk_cluster" "test" { cluster_name = aws_msk_cluster.test.cluster_name } -`, rName) +`, rName)) } diff --git a/aws/data_source_aws_servicecatalog_constraint.go b/aws/data_source_aws_servicecatalog_constraint.go new file mode 100644 index 00000000000..a8b29de52c5 --- /dev/null +++ b/aws/data_source_aws_servicecatalog_constraint.go @@ -0,0 +1,90 @@ +package aws + +import ( + "fmt" + + "github.com/aws/aws-sdk-go/aws" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + tfservicecatalog "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/servicecatalog" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/servicecatalog/waiter" +) + +func dataSourceAwsServiceCatalogConstraint() *schema.Resource { + return &schema.Resource{ + Read: dataSourceAwsServiceCatalogConstraintRead, + + Schema: 
map[string]*schema.Schema{ + "accept_language": { + Type: schema.TypeString, + Optional: true, + Default: tfservicecatalog.AcceptLanguageEnglish, + ValidateFunc: validation.StringInSlice(tfservicecatalog.AcceptLanguage_Values(), false), + }, + "description": { + Type: schema.TypeString, + Optional: true, + Computed: true, + }, + "id": { + Type: schema.TypeString, + Required: true, + }, + "owner": { + Type: schema.TypeString, + Computed: true, + }, + "parameters": { + Type: schema.TypeString, + Computed: true, + }, + "portfolio_id": { + Type: schema.TypeString, + Computed: true, + }, + "product_id": { + Type: schema.TypeString, + Computed: true, + }, + "status": { + Type: schema.TypeString, + Computed: true, + }, + "type": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func dataSourceAwsServiceCatalogConstraintRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).scconn + + output, err := waiter.ConstraintReady(conn, d.Get("accept_language").(string), d.Get("id").(string)) + + if err != nil { + return fmt.Errorf("error describing Service Catalog Constraint: %w", err) + } + + if output == nil || output.ConstraintDetail == nil { + return fmt.Errorf("error getting Service Catalog Constraint: empty response") + } + + d.Set("accept_language", d.Get("accept_language").(string)) + + d.Set("parameters", output.ConstraintParameters) + d.Set("status", output.Status) + + detail := output.ConstraintDetail + + d.Set("description", detail.Description) + d.Set("owner", detail.Owner) + d.Set("portfolio_id", detail.PortfolioId) + d.Set("product_id", detail.ProductId) + d.Set("type", detail.Type) + + d.SetId(aws.StringValue(detail.ConstraintId)) + + return nil +} diff --git a/aws/data_source_aws_servicecatalog_constraint_test.go b/aws/data_source_aws_servicecatalog_constraint_test.go new file mode 100644 index 00000000000..132f6cbf858 --- /dev/null +++ b/aws/data_source_aws_servicecatalog_constraint_test.go @@ -0,0 +1,46 @@ +package 
aws + +import ( + "testing" + + "github.com/aws/aws-sdk-go/service/servicecatalog" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func TestAccAWSServiceCatalogConstraintDataSource_basic(t *testing.T) { + resourceName := "aws_servicecatalog_constraint.test" + dataSourceName := "data.aws_servicecatalog_constraint.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ErrorCheck: testAccErrorCheck(t, servicecatalog.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsServiceCatalogConstraintDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSServiceCatalogConstraintDataSourceConfig_basic(rName, rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsServiceCatalogConstraintExists(resourceName), + resource.TestCheckResourceAttrPair(resourceName, "accept_language", dataSourceName, "accept_language"), + resource.TestCheckResourceAttrPair(resourceName, "description", dataSourceName, "description"), + resource.TestCheckResourceAttrPair(resourceName, "owner", dataSourceName, "owner"), + resource.TestCheckResourceAttrPair(resourceName, "parameters", dataSourceName, "parameters"), + resource.TestCheckResourceAttrPair(resourceName, "portfolio_id", dataSourceName, "portfolio_id"), + resource.TestCheckResourceAttrPair(resourceName, "product_id", dataSourceName, "product_id"), + resource.TestCheckResourceAttrPair(resourceName, "status", dataSourceName, "status"), + resource.TestCheckResourceAttrPair(resourceName, "type", dataSourceName, "type"), + ), + }, + }, + }) +} + +func testAccAWSServiceCatalogConstraintDataSourceConfig_basic(rName, description string) string { + return composeConfig(testAccAWSServiceCatalogConstraintConfig_basic(rName, description), ` +data "aws_servicecatalog_constraint" "test" { + id = aws_servicecatalog_constraint.test.id +} +`) +} 
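Several hunks above (`expandOriginShield`/`flattenOriginShield`, the constraint data source read) follow the provider's expand/flatten convention: an `expand` helper turns a Terraform config map into an API struct with pointer fields, and a `flatten` helper converts the struct back for `d.Set`. A minimal standalone sketch of that round trip, using a simplified struct in place of `cloudfront.OriginShield` and plain pointers in place of the AWS SDK's `aws.Bool`/`aws.String` helpers:

```go
package main

import "fmt"

// originShield is a simplified stand-in for cloudfront.OriginShield;
// the real provider uses pointer fields populated via aws.Bool/aws.String.
type originShield struct {
	Enabled            *bool
	OriginShieldRegion *string
}

// expandOriginShield converts a Terraform config map into the API struct,
// mirroring the expand side of the provider's convention.
func expandOriginShield(m map[string]interface{}) *originShield {
	enabled := m["enabled"].(bool)
	region := m["origin_shield_region"].(string)
	return &originShield{Enabled: &enabled, OriginShieldRegion: &region}
}

// flattenOriginShield converts the API struct back into a config map
// suitable for d.Set, mirroring the flatten side.
func flattenOriginShield(o *originShield) map[string]interface{} {
	return map[string]interface{}{
		"enabled":              *o.Enabled,
		"origin_shield_region": *o.OriginShieldRegion,
	}
}

func main() {
	in := map[string]interface{}{"enabled": true, "origin_shield_region": "us-east-1"}
	out := flattenOriginShield(expandOriginShield(in))
	fmt.Println(out["enabled"], out["origin_shield_region"])
}
```

The `TestCloudFrontStructure_flattenOriginShield` acceptance of `reflect.DeepEqual(in, out)` in the diff relies on exactly this property: flatten is the inverse of expand.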
diff --git a/aws/internal/keyvaluetags/generators/listtags/main.go b/aws/internal/keyvaluetags/generators/listtags/main.go index f7e35a5cd60..393ba549ef2 100644 --- a/aws/internal/keyvaluetags/generators/listtags/main.go +++ b/aws/internal/keyvaluetags/generators/listtags/main.go @@ -108,6 +108,7 @@ var serviceNames = []string{ "sagemaker", "securityhub", "servicediscovery", + "schemas", "sfn", "shield", "signer", diff --git a/aws/internal/keyvaluetags/generators/servicetags/main.go b/aws/internal/keyvaluetags/generators/servicetags/main.go index 39c30ad7baa..8775ca80700 100644 --- a/aws/internal/keyvaluetags/generators/servicetags/main.go +++ b/aws/internal/keyvaluetags/generators/servicetags/main.go @@ -144,6 +144,7 @@ var mapServiceNames = []string{ "pinpoint", "resourcegroups", "securityhub", + "schemas", "signer", "sqs", "synthetics", diff --git a/aws/internal/keyvaluetags/generators/updatetags/main.go b/aws/internal/keyvaluetags/generators/updatetags/main.go index d330cbdf48e..c302a3b7da1 100644 --- a/aws/internal/keyvaluetags/generators/updatetags/main.go +++ b/aws/internal/keyvaluetags/generators/updatetags/main.go @@ -115,6 +115,7 @@ var serviceNames = []string{ "secretsmanager", "securityhub", "servicediscovery", + "schemas", "sfn", "shield", "signer", diff --git a/aws/internal/keyvaluetags/list_tags_gen.go b/aws/internal/keyvaluetags/list_tags_gen.go index 4b2688e7c3f..0b527c833fa 100644 --- a/aws/internal/keyvaluetags/list_tags_gen.go +++ b/aws/internal/keyvaluetags/list_tags_gen.go @@ -93,6 +93,7 @@ import ( "github.com/aws/aws-sdk-go/service/route53" "github.com/aws/aws-sdk-go/service/route53resolver" "github.com/aws/aws-sdk-go/service/sagemaker" + "github.com/aws/aws-sdk-go/service/schemas" "github.com/aws/aws-sdk-go/service/securityhub" "github.com/aws/aws-sdk-go/service/servicediscovery" "github.com/aws/aws-sdk-go/service/sfn" @@ -1638,6 +1639,23 @@ func SagemakerListTags(conn *sagemaker.SageMaker, identifier string) (KeyValueTa return 
SagemakerKeyValueTags(output.Tags), nil } +// SchemasListTags lists schemas service tags. +// The identifier is typically the Amazon Resource Name (ARN), although +// it may also be a different identifier depending on the service. +func SchemasListTags(conn *schemas.Schemas, identifier string) (KeyValueTags, error) { + input := &schemas.ListTagsForResourceInput{ + ResourceArn: aws.String(identifier), + } + + output, err := conn.ListTagsForResource(input) + + if err != nil { + return New(nil), err + } + + return SchemasKeyValueTags(output.Tags), nil +} + // SecurityhubListTags lists securityhub service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. diff --git a/aws/internal/keyvaluetags/service_generation_customizations.go b/aws/internal/keyvaluetags/service_generation_customizations.go index 34819c2b764..f1f03359a5a 100644 --- a/aws/internal/keyvaluetags/service_generation_customizations.go +++ b/aws/internal/keyvaluetags/service_generation_customizations.go @@ -103,6 +103,7 @@ import ( "github.com/aws/aws-sdk-go/service/route53" "github.com/aws/aws-sdk-go/service/route53resolver" "github.com/aws/aws-sdk-go/service/sagemaker" + "github.com/aws/aws-sdk-go/service/schemas" "github.com/aws/aws-sdk-go/service/secretsmanager" "github.com/aws/aws-sdk-go/service/securityhub" "github.com/aws/aws-sdk-go/service/servicediscovery" @@ -333,6 +334,8 @@ func ServiceClientType(serviceName string) string { funcType = reflect.TypeOf(securityhub.New) case "servicediscovery": funcType = reflect.TypeOf(servicediscovery.New) + case "schemas": + funcType = reflect.TypeOf(schemas.New) case "sfn": funcType = reflect.TypeOf(sfn.New) case "shield": diff --git a/aws/internal/keyvaluetags/service_tags_gen.go b/aws/internal/keyvaluetags/service_tags_gen.go index 2abcf13632a..7a2dad640d0 100644 --- a/aws/internal/keyvaluetags/service_tags_gen.go +++ b/aws/internal/keyvaluetags/service_tags_gen.go 
@@ -447,6 +447,16 @@ func ResourcegroupsKeyValueTags(tags map[string]*string) KeyValueTags { return New(tags) } +// SchemasTags returns schemas service tags. +func (tags KeyValueTags) SchemasTags() map[string]*string { + return aws.StringMap(tags.Map()) +} + +// SchemasKeyValueTags creates KeyValueTags from schemas service tags. +func SchemasKeyValueTags(tags map[string]*string) KeyValueTags { + return New(tags) +} + // SecurityhubTags returns securityhub service tags. func (tags KeyValueTags) SecurityhubTags() map[string]*string { return aws.StringMap(tags.Map()) diff --git a/aws/internal/keyvaluetags/update_tags_gen.go b/aws/internal/keyvaluetags/update_tags_gen.go index 19f2c314012..6a2a9d37f19 100644 --- a/aws/internal/keyvaluetags/update_tags_gen.go +++ b/aws/internal/keyvaluetags/update_tags_gen.go @@ -101,6 +101,7 @@ import ( "github.com/aws/aws-sdk-go/service/route53" "github.com/aws/aws-sdk-go/service/route53resolver" "github.com/aws/aws-sdk-go/service/sagemaker" + "github.com/aws/aws-sdk-go/service/schemas" "github.com/aws/aws-sdk-go/service/secretsmanager" "github.com/aws/aws-sdk-go/service/securityhub" "github.com/aws/aws-sdk-go/service/servicediscovery" @@ -3545,6 +3546,42 @@ func SagemakerUpdateTags(conn *sagemaker.SageMaker, identifier string, oldTagsMa return nil } +// SchemasUpdateTags updates schemas service tags. +// The identifier is typically the Amazon Resource Name (ARN), although +// it may also be a different identifier depending on the service. 
+func SchemasUpdateTags(conn *schemas.Schemas, identifier string, oldTagsMap interface{}, newTagsMap interface{}) error { + oldTags := New(oldTagsMap) + newTags := New(newTagsMap) + + if removedTags := oldTags.Removed(newTags); len(removedTags) > 0 { + input := &schemas.UntagResourceInput{ + ResourceArn: aws.String(identifier), + TagKeys: aws.StringSlice(removedTags.IgnoreAws().Keys()), + } + + _, err := conn.UntagResource(input) + + if err != nil { + return fmt.Errorf("error untagging resource (%s): %w", identifier, err) + } + } + + if updatedTags := oldTags.Updated(newTags); len(updatedTags) > 0 { + input := &schemas.TagResourceInput{ + ResourceArn: aws.String(identifier), + Tags: updatedTags.IgnoreAws().SchemasTags(), + } + + _, err := conn.TagResource(input) + + if err != nil { + return fmt.Errorf("error tagging resource (%s): %w", identifier, err) + } + } + + return nil +} + // SecretsmanagerUpdateTags updates secretsmanager service tags. // The identifier is typically the Amazon Resource Name (ARN), although // it may also be a different identifier depending on the service. 
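`SchemasUpdateTags` above follows the generated two-phase tagging pattern: compute the keys removed and the pairs added or changed between the old and new tag maps, then call `UntagResource` and `TagResource` respectively. A self-contained sketch of that diff computation, using plain maps rather than the provider's `KeyValueTags` type:

```go
package main

import "fmt"

// tagDiff mirrors the Removed/Updated split used by the generated
// update-tags functions: "removed" keys exist only in oldTags (untag),
// "updated" pairs are new or changed in newTags (tag).
func tagDiff(oldTags, newTags map[string]string) (removed []string, updated map[string]string) {
	updated = make(map[string]string)
	for k := range oldTags {
		if _, ok := newTags[k]; !ok {
			removed = append(removed, k)
		}
	}
	for k, v := range newTags {
		if old, ok := oldTags[k]; !ok || old != v {
			updated[k] = v
		}
	}
	return removed, updated
}

func main() {
	removed, updated := tagDiff(
		map[string]string{"Env": "dev", "Team": "a"},
		map[string]string{"Env": "prod", "Owner": "b"},
	)
	fmt.Println(removed, updated) // [Team] map[Env:prod Owner:b]
}
```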
diff --git a/aws/internal/service/amplify/consts.go b/aws/internal/service/amplify/consts.go new file mode 100644 index 00000000000..3049f2bcfc2 --- /dev/null +++ b/aws/internal/service/amplify/consts.go @@ -0,0 +1,5 @@ +package amplify + +const ( + StageNone = "NONE" +) diff --git a/aws/internal/service/amplify/finder/finder.go b/aws/internal/service/amplify/finder/finder.go new file mode 100644 index 00000000000..9440c53a1f0 --- /dev/null +++ b/aws/internal/service/amplify/finder/finder.go @@ -0,0 +1,151 @@ +package finder + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/amplify" + "github.com/hashicorp/aws-sdk-go-base/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func AppByID(conn *amplify.Amplify, id string) (*amplify.App, error) { + input := &amplify.GetAppInput{ + AppId: aws.String(id), + } + + output, err := conn.GetApp(input) + + if tfawserr.ErrCodeEquals(err, amplify.ErrCodeNotFoundException) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || output.App == nil { + return nil, &resource.NotFoundError{ + Message: "Empty result", + LastRequest: input, + } + } + + return output.App, nil +} + +func BackendEnvironmentByAppIDAndEnvironmentName(conn *amplify.Amplify, appID, environmentName string) (*amplify.BackendEnvironment, error) { + input := &amplify.GetBackendEnvironmentInput{ + AppId: aws.String(appID), + EnvironmentName: aws.String(environmentName), + } + + output, err := conn.GetBackendEnvironment(input) + + if tfawserr.ErrCodeEquals(err, amplify.ErrCodeNotFoundException) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || output.BackendEnvironment == nil { + return nil, &resource.NotFoundError{ + Message: "Empty result", + LastRequest: input, + } + } + + return output.BackendEnvironment,
nil +} + +func BranchByAppIDAndBranchName(conn *amplify.Amplify, appID, branchName string) (*amplify.Branch, error) { + input := &amplify.GetBranchInput{ + AppId: aws.String(appID), + BranchName: aws.String(branchName), + } + + output, err := conn.GetBranch(input) + + if tfawserr.ErrCodeEquals(err, amplify.ErrCodeNotFoundException) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || output.Branch == nil { + return nil, &resource.NotFoundError{ + Message: "Empty result", + LastRequest: input, + } + } + + return output.Branch, nil +} + +func DomainAssociationByAppIDAndDomainName(conn *amplify.Amplify, appID, domainName string) (*amplify.DomainAssociation, error) { + input := &amplify.GetDomainAssociationInput{ + AppId: aws.String(appID), + DomainName: aws.String(domainName), + } + + output, err := conn.GetDomainAssociation(input) + + if tfawserr.ErrCodeEquals(err, amplify.ErrCodeNotFoundException) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || output.DomainAssociation == nil { + return nil, &resource.NotFoundError{ + Message: "Empty result", + LastRequest: input, + } + } + + return output.DomainAssociation, nil +} + +func WebhookByID(conn *amplify.Amplify, id string) (*amplify.Webhook, error) { + input := &amplify.GetWebhookInput{ + WebhookId: aws.String(id), + } + + output, err := conn.GetWebhook(input) + + if tfawserr.ErrCodeEquals(err, amplify.ErrCodeNotFoundException) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || output.Webhook == nil { + return nil, &resource.NotFoundError{ + Message: "Empty result", + LastRequest: input, + } + } + + return output.Webhook, nil +} diff --git a/aws/internal/service/amplify/id.go b/aws/internal/service/amplify/id.go new file
mode 100644 index 00000000000..20bb749cbb0 --- /dev/null +++ b/aws/internal/service/amplify/id.go @@ -0,0 +1,63 @@ +package amplify + +import ( + "fmt" + "strings" +) + +const backendEnvironmentResourceIDSeparator = "/" + +func BackendEnvironmentCreateResourceID(appID, environmentName string) string { + parts := []string{appID, environmentName} + id := strings.Join(parts, backendEnvironmentResourceIDSeparator) + + return id +} + +func BackendEnvironmentParseResourceID(id string) (string, string, error) { + parts := strings.Split(id, backendEnvironmentResourceIDSeparator) + + if len(parts) == 2 && parts[0] != "" && parts[1] != "" { + return parts[0], parts[1], nil + } + + return "", "", fmt.Errorf("unexpected format for ID (%[1]s), expected APPID%[2]sENVIRONMENTNAME", id, backendEnvironmentResourceIDSeparator) +} + +const branchResourceIDSeparator = "/" + +func BranchCreateResourceID(appID, branchName string) string { + parts := []string{appID, branchName} + id := strings.Join(parts, branchResourceIDSeparator) + + return id +} + +func BranchParseResourceID(id string) (string, string, error) { + parts := strings.Split(id, branchResourceIDSeparator) + + if len(parts) == 2 && parts[0] != "" && parts[1] != "" { + return parts[0], parts[1], nil + } + + return "", "", fmt.Errorf("unexpected format for ID (%[1]s), expected APPID%[2]sBRANCHNAME", id, branchResourceIDSeparator) +} + +const domainAssociationResourceIDSeparator = "/" + +func DomainAssociationCreateResourceID(appID, domainName string) string { + parts := []string{appID, domainName} + id := strings.Join(parts, domainAssociationResourceIDSeparator) + + return id +} + +func DomainAssociationParseResourceID(id string) (string, string, error) { + parts := strings.Split(id, domainAssociationResourceIDSeparator) + + if len(parts) == 2 && parts[0] != "" && parts[1] != "" { + return parts[0], parts[1], nil + } + + return "", "", fmt.Errorf("unexpected format for ID (%[1]s), expected APPID%[2]sDOMAINNAME", id, 
domainAssociationResourceIDSeparator) +} diff --git a/aws/internal/service/amplify/lister/list.go b/aws/internal/service/amplify/lister/list.go new file mode 100644 index 00000000000..3b7007c9979 --- /dev/null +++ b/aws/internal/service/amplify/lister/list.go @@ -0,0 +1,3 @@ +//go:generate go run ../../../generators/listpages/main.go -function=ListApps github.com/aws/aws-sdk-go/service/amplify + +package lister diff --git a/aws/internal/service/amplify/lister/list_pages_gen.go b/aws/internal/service/amplify/lister/list_pages_gen.go new file mode 100644 index 00000000000..30fa4b12930 --- /dev/null +++ b/aws/internal/service/amplify/lister/list_pages_gen.go @@ -0,0 +1,31 @@ +// Code generated by "aws/internal/generators/listpages/main.go -function=ListApps github.com/aws/aws-sdk-go/service/amplify"; DO NOT EDIT. + +package lister + +import ( + "context" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/amplify" +) + +func ListAppsPages(conn *amplify.Amplify, input *amplify.ListAppsInput, fn func(*amplify.ListAppsOutput, bool) bool) error { + return ListAppsPagesWithContext(context.Background(), conn, input, fn) +} + +func ListAppsPagesWithContext(ctx context.Context, conn *amplify.Amplify, input *amplify.ListAppsInput, fn func(*amplify.ListAppsOutput, bool) bool) error { + for { + output, err := conn.ListAppsWithContext(ctx, input) + if err != nil { + return err + } + + lastPage := aws.StringValue(output.NextToken) == "" + if !fn(output, lastPage) || lastPage { + break + } + + input.NextToken = output.NextToken + } + return nil +} diff --git a/aws/internal/service/amplify/waiter/status.go b/aws/internal/service/amplify/waiter/status.go new file mode 100644 index 00000000000..49be5ec89c8 --- /dev/null +++ b/aws/internal/service/amplify/waiter/status.go @@ -0,0 +1,25 @@ +package waiter + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/amplify" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + 
"github.com/terraform-providers/terraform-provider-aws/aws/internal/service/amplify/finder" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/tfresource" +) + +func DomainAssociationStatus(conn *amplify.Amplify, appID, domainName string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + domainAssociation, err := finder.DomainAssociationByAppIDAndDomainName(conn, appID, domainName) + + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return domainAssociation, aws.StringValue(domainAssociation.DomainStatus), nil + } +} diff --git a/aws/internal/service/amplify/waiter/waiter.go b/aws/internal/service/amplify/waiter/waiter.go new file mode 100644 index 00000000000..08abc2675ac --- /dev/null +++ b/aws/internal/service/amplify/waiter/waiter.go @@ -0,0 +1,58 @@ +package waiter + +import ( + "errors" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/amplify" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/tfresource" +) + +const ( + DomainAssociationCreatedTimeout = 5 * time.Minute + DomainAssociationVerifiedTimeout = 15 * time.Minute +) + +func DomainAssociationCreated(conn *amplify.Amplify, appID, domainName string) (*amplify.DomainAssociation, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{amplify.DomainStatusCreating, amplify.DomainStatusInProgress, amplify.DomainStatusRequestingCertificate}, + Target: []string{amplify.DomainStatusPendingVerification, amplify.DomainStatusPendingDeployment, amplify.DomainStatusAvailable}, + Refresh: DomainAssociationStatus(conn, appID, domainName), + Timeout: DomainAssociationCreatedTimeout, + } + + outputRaw, err := stateConf.WaitForState() + + if v, ok := outputRaw.(*amplify.DomainAssociation); ok { + if v != nil && aws.StringValue(v.DomainStatus) == amplify.DomainStatusFailed { + 
tfresource.SetLastError(err, errors.New(aws.StringValue(v.StatusReason))) + } + + return v, err + } + + return nil, err +} + +func DomainAssociationVerified(conn *amplify.Amplify, appID, domainName string) (*amplify.DomainAssociation, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{amplify.DomainStatusUpdating, amplify.DomainStatusInProgress, amplify.DomainStatusPendingVerification}, + Target: []string{amplify.DomainStatusPendingDeployment, amplify.DomainStatusAvailable}, + Refresh: DomainAssociationStatus(conn, appID, domainName), + Timeout: DomainAssociationVerifiedTimeout, + } + + outputRaw, err := stateConf.WaitForState() + + if v, ok := outputRaw.(*amplify.DomainAssociation); ok { + if v != nil && aws.StringValue(v.DomainStatus) == amplify.DomainStatusFailed { + tfresource.SetLastError(err, errors.New(aws.StringValue(v.StatusReason))) + } + + return v, err + } + + return nil, err +} diff --git a/aws/internal/service/cloudwatch/finder/finder.go b/aws/internal/service/cloudwatch/finder/finder.go index 1de5bf2c9d6..e78cf2ca913 100644 --- a/aws/internal/service/cloudwatch/finder/finder.go +++ b/aws/internal/service/cloudwatch/finder/finder.go @@ -24,3 +24,21 @@ func CompositeAlarmByName(ctx context.Context, conn *cloudwatch.CloudWatch, name return output.CompositeAlarms[0], nil } + +func MetricAlarmByName(conn *cloudwatch.CloudWatch, name string) (*cloudwatch.MetricAlarm, error) { + input := cloudwatch.DescribeAlarmsInput{ + AlarmNames: []*string{aws.String(name)}, + AlarmTypes: aws.StringSlice([]string{cloudwatch.AlarmTypeMetricAlarm}), + } + + output, err := conn.DescribeAlarms(&input) + if err != nil { + return nil, err + } + + if output == nil || len(output.MetricAlarms) != 1 { + return nil, nil + } + + return output.MetricAlarms[0], nil +} diff --git a/aws/internal/service/cloudwatchevents/finder/finder.go b/aws/internal/service/cloudwatchevents/finder/finder.go index 14b1829448c..408979e14e7 100644 --- 
a/aws/internal/service/cloudwatchevents/finder/finder.go +++ b/aws/internal/service/cloudwatchevents/finder/finder.go @@ -5,10 +5,40 @@ import ( "github.com/aws/aws-sdk-go/aws" events "github.com/aws/aws-sdk-go/service/cloudwatchevents" + "github.com/hashicorp/aws-sdk-go-base/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" tfevents "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/cloudwatchevents" "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/cloudwatchevents/lister" ) +func ConnectionByName(conn *events.CloudWatchEvents, name string) (*events.DescribeConnectionOutput, error) { + input := &events.DescribeConnectionInput{ + Name: aws.String(name), + } + + output, err := conn.DescribeConnection(input) + + if tfawserr.ErrCodeEquals(err, events.ErrCodeResourceNotFoundException) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil { + return nil, &resource.NotFoundError{ + Message: "Empty result", + LastRequest: input, + } + } + + return output, nil +} + func Rule(conn *events.CloudWatchEvents, eventBusName, ruleName string) (*events.DescribeRuleOutput, error) { input := events.DescribeRuleInput{ Name: aws.String(ruleName), diff --git a/aws/internal/service/cloudwatchevents/waiter/status.go b/aws/internal/service/cloudwatchevents/waiter/status.go new file mode 100644 index 00000000000..7201c0fee79 --- /dev/null +++ b/aws/internal/service/cloudwatchevents/waiter/status.go @@ -0,0 +1,25 @@ +package waiter + +import ( + "github.com/aws/aws-sdk-go/aws" + events "github.com/aws/aws-sdk-go/service/cloudwatchevents" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/cloudwatchevents/finder" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/tfresource" +) + +func ConnectionState(conn 
*events.CloudWatchEvents, name string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + output, err := finder.ConnectionByName(conn, name) + + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return output, aws.StringValue(output.ConnectionState), nil + } +} diff --git a/aws/internal/service/cloudwatchevents/waiter/waiter.go b/aws/internal/service/cloudwatchevents/waiter/waiter.go new file mode 100644 index 00000000000..834d746a5ea --- /dev/null +++ b/aws/internal/service/cloudwatchevents/waiter/waiter.go @@ -0,0 +1,65 @@ +package waiter + +import ( + "time" + + events "github.com/aws/aws-sdk-go/service/cloudwatchevents" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +const ( + ConnectionCreatedTimeout = 2 * time.Minute + ConnectionDeletedTimeout = 2 * time.Minute + ConnectionUpdatedTimeout = 2 * time.Minute +) + +func ConnectionCreated(conn *events.CloudWatchEvents, id string) (*events.DescribeConnectionOutput, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{events.ConnectionStateCreating, events.ConnectionStateAuthorizing}, + Target: []string{events.ConnectionStateAuthorized, events.ConnectionStateDeauthorized}, + Refresh: ConnectionState(conn, id), + Timeout: ConnectionCreatedTimeout, + } + + outputRaw, err := stateConf.WaitForState() + + if v, ok := outputRaw.(*events.DescribeConnectionOutput); ok { + return v, err + } + + return nil, err +} + +func ConnectionDeleted(conn *events.CloudWatchEvents, id string) (*events.DescribeConnectionOutput, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{events.ConnectionStateDeleting}, + Target: []string{}, + Refresh: ConnectionState(conn, id), + Timeout: ConnectionDeletedTimeout, + } + + outputRaw, err := stateConf.WaitForState() + + if v, ok := outputRaw.(*events.DescribeConnectionOutput); ok { + return v, err + } + + return nil, err +} + +func ConnectionUpdated(conn 
*events.CloudWatchEvents, id string) (*events.DescribeConnectionOutput, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{events.ConnectionStateUpdating, events.ConnectionStateAuthorizing, events.ConnectionStateDeauthorizing}, + Target: []string{events.ConnectionStateAuthorized, events.ConnectionStateDeauthorized}, + Refresh: ConnectionState(conn, id), + Timeout: ConnectionUpdatedTimeout, + } + + outputRaw, err := stateConf.WaitForState() + + if v, ok := outputRaw.(*events.DescribeConnectionOutput); ok { + return v, err + } + + return nil, err +} diff --git a/aws/internal/service/msk/waiter/waiter.go b/aws/internal/service/msk/waiter/waiter.go new file mode 100644 index 00000000000..55b8c8a6997 --- /dev/null +++ b/aws/internal/service/msk/waiter/waiter.go @@ -0,0 +1,11 @@ +package waiter + +import ( + "time" +) + +const ( + ClusterCreateTimeout = 120 * time.Minute + ClusterUpdateTimeout = 120 * time.Minute + ClusterDeleteTimeout = 120 * time.Minute +) diff --git a/aws/internal/service/schemas/finder/finder.go b/aws/internal/service/schemas/finder/finder.go new file mode 100644 index 00000000000..c2511a294e7 --- /dev/null +++ b/aws/internal/service/schemas/finder/finder.go @@ -0,0 +1,93 @@ +package finder + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/schemas" + "github.com/hashicorp/aws-sdk-go-base/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +func DiscovererByID(conn *schemas.Schemas, id string) (*schemas.DescribeDiscovererOutput, error) { + input := &schemas.DescribeDiscovererInput{ + DiscovererId: aws.String(id), + } + + output, err := conn.DescribeDiscoverer(input) + + if tfawserr.ErrCodeEquals(err, schemas.ErrCodeNotFoundException) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil { + return nil, &resource.NotFoundError{ + Message: "Empty result", + LastRequest: input, 
+ } + } + + return output, nil +} + +func RegistryByName(conn *schemas.Schemas, name string) (*schemas.DescribeRegistryOutput, error) { + input := &schemas.DescribeRegistryInput{ + RegistryName: aws.String(name), + } + + output, err := conn.DescribeRegistry(input) + + if tfawserr.ErrCodeEquals(err, schemas.ErrCodeNotFoundException) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil { + return nil, &resource.NotFoundError{ + Message: "Empty result", + LastRequest: input, + } + } + + return output, nil +} + +func SchemaByNameAndRegistryName(conn *schemas.Schemas, name, registryName string) (*schemas.DescribeSchemaOutput, error) { + input := &schemas.DescribeSchemaInput{ + RegistryName: aws.String(registryName), + SchemaName: aws.String(name), + } + + output, err := conn.DescribeSchema(input) + + if tfawserr.ErrCodeEquals(err, schemas.ErrCodeNotFoundException) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil { + return nil, &resource.NotFoundError{ + Message: "Empty result", + LastRequest: input, + } + } + + return output, nil +} diff --git a/aws/internal/service/schemas/id.go b/aws/internal/service/schemas/id.go new file mode 100644 index 00000000000..71d0435c119 --- /dev/null +++ b/aws/internal/service/schemas/id.go @@ -0,0 +1,25 @@ +package schemas + +import ( + "fmt" + "strings" +) + +const schemaResourceIDSeparator = "/" + +func SchemaCreateResourceID(schemaName, registryName string) string { + parts := []string{schemaName, registryName} + id := strings.Join(parts, schemaResourceIDSeparator) + + return id +} + +func SchemaParseResourceID(id string) (string, string, error) { + parts := strings.Split(id, schemaResourceIDSeparator) + + if len(parts) == 2 && parts[0] != "" && parts[1] != "" { + return parts[0], parts[1], nil + } + + return "", "",
fmt.Errorf("unexpected format for ID (%[1]s), expected SCHEMA_NAME%[2]sREGISTRY_NAME", id, schemaResourceIDSeparator) +} diff --git a/aws/internal/service/servicecatalog/enum.go b/aws/internal/service/servicecatalog/enum.go index d8b8db7bc8e..90f3ea3ccfe 100644 --- a/aws/internal/service/servicecatalog/enum.go +++ b/aws/internal/service/servicecatalog/enum.go @@ -3,9 +3,9 @@ package servicecatalog const ( // If AWS adds these to the API, we should use those and remove these. - ServiceCatalogAcceptLanguageEnglish = "en" - ServiceCatalogAcceptLanguageJapanese = "jp" - ServiceCatalogAcceptLanguageChinese = "zh" + AcceptLanguageEnglish = "en" + AcceptLanguageJapanese = "jp" + AcceptLanguageChinese = "zh" ConstraintTypeLaunch = "LAUNCH" ConstraintTypeNotification = "NOTIFICATION" @@ -16,9 +16,9 @@ const ( func AcceptLanguage_Values() []string { return []string{ - ServiceCatalogAcceptLanguageEnglish, - ServiceCatalogAcceptLanguageJapanese, - ServiceCatalogAcceptLanguageChinese, + AcceptLanguageEnglish, + AcceptLanguageJapanese, + AcceptLanguageChinese, } } diff --git a/aws/internal/service/servicecatalog/finder/finder.go b/aws/internal/service/servicecatalog/finder/finder.go index 252ddc8a86d..0cf7c7496b8 100644 --- a/aws/internal/service/servicecatalog/finder/finder.go +++ b/aws/internal/service/servicecatalog/finder/finder.go @@ -69,3 +69,94 @@ func ProductPortfolioAssociation(conn *servicecatalog.ServiceCatalog, acceptLang return result, err } + +func BudgetResourceAssociation(conn *servicecatalog.ServiceCatalog, budgetName, resourceID string) (*servicecatalog.BudgetDetail, error) { + input := &servicecatalog.ListBudgetsForResourceInput{ + ResourceId: aws.String(resourceID), + } + + var result *servicecatalog.BudgetDetail + + err := conn.ListBudgetsForResourcePages(input, func(page *servicecatalog.ListBudgetsForResourceOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, budget := range page.Budgets { + if budget == nil { + continue + } + 
+ if aws.StringValue(budget.BudgetName) == budgetName { + result = budget + return false + } + } + + return !lastPage + }) + + return result, err +} + +func TagOptionResourceAssociation(conn *servicecatalog.ServiceCatalog, tagOptionID, resourceID string) (*servicecatalog.ResourceDetail, error) { + input := &servicecatalog.ListResourcesForTagOptionInput{ + TagOptionId: aws.String(tagOptionID), + } + + var result *servicecatalog.ResourceDetail + + err := conn.ListResourcesForTagOptionPages(input, func(page *servicecatalog.ListResourcesForTagOptionOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, deet := range page.ResourceDetails { + if deet == nil { + continue + } + + if aws.StringValue(deet.Id) == resourceID { + result = deet + return false + } + } + + return !lastPage + }) + + return result, err +} + +func PrincipalPortfolioAssociation(conn *servicecatalog.ServiceCatalog, acceptLanguage, principalARN, portfolioID string) (*servicecatalog.Principal, error) { + input := &servicecatalog.ListPrincipalsForPortfolioInput{ + PortfolioId: aws.String(portfolioID), + } + + if acceptLanguage != "" { + input.AcceptLanguage = aws.String(acceptLanguage) + } + + var result *servicecatalog.Principal + + err := conn.ListPrincipalsForPortfolioPages(input, func(page *servicecatalog.ListPrincipalsForPortfolioOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, deet := range page.Principals { + if deet == nil { + continue + } + + if aws.StringValue(deet.PrincipalARN) == principalARN { + result = deet + return false + } + } + + return !lastPage + }) + + return result, err +} diff --git a/aws/internal/service/servicecatalog/id.go b/aws/internal/service/servicecatalog/id.go index 0c545813232..3eb0a6f8a41 100644 --- a/aws/internal/service/servicecatalog/id.go +++ b/aws/internal/service/servicecatalog/id.go @@ -32,3 +32,58 @@ func ProductPortfolioAssociationParseID(id string) (string, string, string, erro func 
ProductPortfolioAssociationCreateID(acceptLanguage, portfolioID, productID string) string { return strings.Join([]string{acceptLanguage, portfolioID, productID}, ":") } + +func BudgetResourceAssociationParseID(id string) (string, string, error) { + parts := strings.SplitN(id, ":", 2) + + if len(parts) != 2 || parts[0] == "" || parts[1] == "" { + return "", "", fmt.Errorf("unexpected format of ID (%s), budgetName:resourceID", id) + } + + return parts[0], parts[1], nil +} + +func BudgetResourceAssociationID(budgetName, resourceID string) string { + return strings.Join([]string{budgetName, resourceID}, ":") +} + +func TagOptionResourceAssociationParseID(id string) (string, string, error) { + parts := strings.SplitN(id, ":", 2) + + if len(parts) != 2 || parts[0] == "" || parts[1] == "" { + return "", "", fmt.Errorf("unexpected format of ID (%s), tagOptionID:resourceID", id) + } + + return parts[0], parts[1], nil +} + +func TagOptionResourceAssociationID(tagOptionID, resourceID string) string { + return strings.Join([]string{tagOptionID, resourceID}, ":") +} + +func ProvisioningArtifactID(artifactID, productID string) string { + return strings.Join([]string{artifactID, productID}, ":") +} + +func ProvisioningArtifactParseID(id string) (string, string, error) { + parts := strings.SplitN(id, ":", 2) + + if len(parts) != 2 || parts[0] == "" || parts[1] == "" { + return "", "", fmt.Errorf("unexpected format of ID (%s), expected artifactID:productID", id) + } + return parts[0], parts[1], nil +} + +func PrincipalPortfolioAssociationParseID(id string) (string, string, string, error) { + parts := strings.SplitN(id, ",", 3) + + if len(parts) != 3 || parts[0] == "" || parts[1] == "" || parts[2] == "" { + return "", "", "", fmt.Errorf("unexpected format of ID (%s), expected acceptLanguage,principalARN,portfolioID", id) + } + + return parts[0], parts[1], parts[2], nil +} + +func PrincipalPortfolioAssociationID(acceptLanguage, principalARN, portfolioID string) string { + return 
strings.Join([]string{acceptLanguage, principalARN, portfolioID}, ",") +} diff --git a/aws/internal/service/servicecatalog/waiter/status.go b/aws/internal/service/servicecatalog/waiter/status.go index fa895722c72..338e821b70b 100644 --- a/aws/internal/service/servicecatalog/waiter/status.go +++ b/aws/internal/service/servicecatalog/waiter/status.go @@ -226,3 +226,96 @@ func ServiceActionStatus(conn *servicecatalog.ServiceCatalog, acceptLanguage, id return output.ServiceActionDetail, servicecatalog.StatusAvailable, nil } } + +func BudgetResourceAssociationStatus(conn *servicecatalog.ServiceCatalog, budgetName, resourceID string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + output, err := finder.BudgetResourceAssociation(conn, budgetName, resourceID) + + if tfawserr.ErrCodeEquals(err, servicecatalog.ErrCodeResourceNotFoundException) { + return nil, StatusNotFound, &resource.NotFoundError{ + Message: fmt.Sprintf("budget resource association not found (%s): %s", tfservicecatalog.BudgetResourceAssociationID(budgetName, resourceID), err), + } + } + + if err != nil { + return nil, servicecatalog.StatusFailed, fmt.Errorf("error describing budget resource association: %w", err) + } + + if output == nil { + return nil, StatusNotFound, &resource.NotFoundError{ + Message: fmt.Sprintf("finding budget resource association (%s): empty response", tfservicecatalog.BudgetResourceAssociationID(budgetName, resourceID)), + } + } + + return output, servicecatalog.StatusAvailable, err + } +} + +func TagOptionResourceAssociationStatus(conn *servicecatalog.ServiceCatalog, tagOptionID, resourceID string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + output, err := finder.TagOptionResourceAssociation(conn, tagOptionID, resourceID) + + if tfawserr.ErrCodeEquals(err, servicecatalog.ErrCodeResourceNotFoundException) { + return nil, StatusNotFound, &resource.NotFoundError{ + Message: fmt.Sprintf("tag option resource 
association not found (%s): %s", tfservicecatalog.TagOptionResourceAssociationID(tagOptionID, resourceID), err), + } + } + + if err != nil { + return nil, servicecatalog.StatusFailed, fmt.Errorf("error describing tag option resource association: %w", err) + } + + if output == nil { + return nil, StatusNotFound, &resource.NotFoundError{ + Message: fmt.Sprintf("finding tag option resource association (%s): empty response", tfservicecatalog.TagOptionResourceAssociationID(tagOptionID, resourceID)), + } + } + + return output, servicecatalog.StatusAvailable, err + } +} + +func ProvisioningArtifactStatus(conn *servicecatalog.ServiceCatalog, id, productID string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + input := &servicecatalog.DescribeProvisioningArtifactInput{ + ProvisioningArtifactId: aws.String(id), + ProductId: aws.String(productID), + } + + output, err := conn.DescribeProvisioningArtifact(input) + + if tfawserr.ErrCodeEquals(err, servicecatalog.ErrCodeResourceNotFoundException) { + return nil, StatusNotFound, err + } + + if err != nil { + return nil, servicecatalog.StatusFailed, err + } + + if output == nil || output.ProvisioningArtifactDetail == nil { + return nil, StatusUnavailable, err + } + + return output, aws.StringValue(output.Status), err + } +} + +func PrincipalPortfolioAssociationStatus(conn *servicecatalog.ServiceCatalog, acceptLanguage, principalARN, portfolioID string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + output, err := finder.PrincipalPortfolioAssociation(conn, acceptLanguage, principalARN, portfolioID) + + if tfawserr.ErrCodeEquals(err, servicecatalog.ErrCodeResourceNotFoundException) { + return nil, StatusNotFound, err + } + + if err != nil { + return nil, servicecatalog.StatusFailed, fmt.Errorf("error describing principal portfolio association: %w", err) + } + + if output == nil { + return nil, StatusNotFound, err + } + + return output, servicecatalog.StatusAvailable, err + } 
+} diff --git a/aws/internal/service/servicecatalog/waiter/waiter.go b/aws/internal/service/servicecatalog/waiter/waiter.go index 4d58948483a..b388ff87ff6 100644 --- a/aws/internal/service/servicecatalog/waiter/waiter.go +++ b/aws/internal/service/servicecatalog/waiter/waiter.go @@ -29,11 +29,23 @@ const ( ServiceActionReadyTimeout = 3 * time.Minute ServiceActionDeleteTimeout = 3 * time.Minute + BudgetResourceAssociationReadyTimeout = 3 * time.Minute + BudgetResourceAssociationDeleteTimeout = 3 * time.Minute + + TagOptionResourceAssociationReadyTimeout = 3 * time.Minute + TagOptionResourceAssociationDeleteTimeout = 3 * time.Minute + + ProvisioningArtifactReadyTimeout = 3 * time.Minute + ProvisioningArtifactDeletedTimeout = 3 * time.Minute + + PrincipalPortfolioAssociationReadyTimeout = 3 * time.Minute + PrincipalPortfolioAssociationDeleteTimeout = 3 * time.Minute + StatusNotFound = "NOT_FOUND" StatusUnavailable = "UNAVAILABLE" // AWS documentation is wrong, says that status will be "AVAILABLE" but it is actually "CREATED" - ProductStatusCreated = "CREATED" + StatusCreated = "CREATED" OrganizationAccessStatusError = "ERROR" ) @@ -41,7 +53,7 @@ const ( func ProductReady(conn *servicecatalog.ServiceCatalog, acceptLanguage, productID string) (*servicecatalog.DescribeProductAsAdminOutput, error) { stateConf := &resource.StateChangeConf{ Pending: []string{servicecatalog.StatusCreating, StatusNotFound, StatusUnavailable}, - Target: []string{servicecatalog.StatusAvailable, ProductStatusCreated}, + Target: []string{servicecatalog.StatusAvailable, StatusCreated}, Refresh: ProductStatus(conn, acceptLanguage, productID), Timeout: ProductReadyTimeout, } @@ -57,7 +69,7 @@ func ProductReady(conn *servicecatalog.ServiceCatalog, acceptLanguage, productID func ProductDeleted(conn *servicecatalog.ServiceCatalog, acceptLanguage, productID string) (*servicecatalog.DescribeProductAsAdminOutput, error) { stateConf := &resource.StateChangeConf{ - Pending: 
[]string{servicecatalog.StatusCreating, servicecatalog.StatusAvailable, ProductStatusCreated, StatusUnavailable}, + Pending: []string{servicecatalog.StatusCreating, servicecatalog.StatusAvailable, StatusCreated, StatusUnavailable}, Target: []string{StatusNotFound}, Refresh: ProductStatus(conn, acceptLanguage, productID), Timeout: ProductDeleteTimeout, @@ -300,3 +312,134 @@ func ServiceActionDeleted(conn *servicecatalog.ServiceCatalog, acceptLanguage, i return err } + +func BudgetResourceAssociationReady(conn *servicecatalog.ServiceCatalog, budgetName, resourceID string) (*servicecatalog.BudgetDetail, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{StatusNotFound, StatusUnavailable}, + Target: []string{servicecatalog.StatusAvailable}, + Refresh: BudgetResourceAssociationStatus(conn, budgetName, resourceID), + Timeout: BudgetResourceAssociationReadyTimeout, + } + + outputRaw, err := stateConf.WaitForState() + + if output, ok := outputRaw.(*servicecatalog.BudgetDetail); ok { + return output, err + } + + return nil, err +} + +func BudgetResourceAssociationDeleted(conn *servicecatalog.ServiceCatalog, budgetName, resourceID string) error { + stateConf := &resource.StateChangeConf{ + Pending: []string{servicecatalog.StatusAvailable}, + Target: []string{StatusNotFound, StatusUnavailable}, + Refresh: BudgetResourceAssociationStatus(conn, budgetName, resourceID), + Timeout: BudgetResourceAssociationDeleteTimeout, + } + + _, err := stateConf.WaitForState() + + return err +} + +func TagOptionResourceAssociationReady(conn *servicecatalog.ServiceCatalog, tagOptionID, resourceID string) (*servicecatalog.ResourceDetail, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{StatusNotFound, StatusUnavailable}, + Target: []string{servicecatalog.StatusAvailable}, + Refresh: TagOptionResourceAssociationStatus(conn, tagOptionID, resourceID), + Timeout: TagOptionResourceAssociationReadyTimeout, + } + + outputRaw, err := stateConf.WaitForState() + + 
if output, ok := outputRaw.(*servicecatalog.ResourceDetail); ok { + return output, err + } + + return nil, err +} + +func TagOptionResourceAssociationDeleted(conn *servicecatalog.ServiceCatalog, tagOptionID, resourceID string) error { + stateConf := &resource.StateChangeConf{ + Pending: []string{servicecatalog.StatusAvailable}, + Target: []string{StatusNotFound, StatusUnavailable}, + Refresh: TagOptionResourceAssociationStatus(conn, tagOptionID, resourceID), + Timeout: TagOptionResourceAssociationDeleteTimeout, + } + + _, err := stateConf.WaitForState() + + return err +} + +func ProvisioningArtifactReady(conn *servicecatalog.ServiceCatalog, id, productID string) (*servicecatalog.DescribeProvisioningArtifactOutput, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{servicecatalog.StatusCreating, StatusNotFound, StatusUnavailable}, + Target: []string{servicecatalog.StatusAvailable, StatusCreated}, + Refresh: ProvisioningArtifactStatus(conn, id, productID), + Timeout: ProvisioningArtifactReadyTimeout, + } + + outputRaw, err := stateConf.WaitForState() + + if output, ok := outputRaw.(*servicecatalog.DescribeProvisioningArtifactOutput); ok { + return output, err + } + + return nil, err +} + +func ProvisioningArtifactDeleted(conn *servicecatalog.ServiceCatalog, id, productID string) error { + stateConf := &resource.StateChangeConf{ + Pending: []string{servicecatalog.StatusCreating, servicecatalog.StatusAvailable, StatusCreated, StatusUnavailable}, + Target: []string{StatusNotFound}, + Refresh: ProvisioningArtifactStatus(conn, id, productID), + Timeout: ProvisioningArtifactDeletedTimeout, + } + + _, err := stateConf.WaitForState() + + if tfawserr.ErrCodeEquals(err, servicecatalog.ErrCodeResourceNotFoundException) { + return nil + } + + if err != nil { + return err + } + + return nil +} + +func PrincipalPortfolioAssociationReady(conn *servicecatalog.ServiceCatalog, acceptLanguage, principalARN, portfolioID string) (*servicecatalog.Principal, error) { + 
stateConf := &resource.StateChangeConf{ + Pending: []string{StatusNotFound, StatusUnavailable}, + Target: []string{servicecatalog.StatusAvailable}, + Refresh: PrincipalPortfolioAssociationStatus(conn, acceptLanguage, principalARN, portfolioID), + Timeout: PrincipalPortfolioAssociationReadyTimeout, + NotFoundChecks: 5, + MinTimeout: 10 * time.Second, + } + + outputRaw, err := stateConf.WaitForState() + + if output, ok := outputRaw.(*servicecatalog.Principal); ok { + return output, err + } + + return nil, err +} + +func PrincipalPortfolioAssociationDeleted(conn *servicecatalog.ServiceCatalog, acceptLanguage, principalARN, portfolioID string) error { + stateConf := &resource.StateChangeConf{ + Pending: []string{servicecatalog.StatusAvailable}, + Target: []string{StatusNotFound, StatusUnavailable}, + Refresh: PrincipalPortfolioAssociationStatus(conn, acceptLanguage, principalARN, portfolioID), + Timeout: PrincipalPortfolioAssociationDeleteTimeout, + NotFoundChecks: 1, + } + + _, err := stateConf.WaitForState() + + return err +} diff --git a/aws/internal/tfresource/errors.go b/aws/internal/tfresource/errors.go index baa733d20a5..703eb55aa7d 100644 --- a/aws/internal/tfresource/errors.go +++ b/aws/internal/tfresource/errors.go @@ -23,3 +23,15 @@ func TimedOut(err error) bool { timeoutErr, ok := err.(*resource.TimeoutError) // nolint:errorlint return ok && timeoutErr.LastError == nil } + +// SetLastError sets the LastError field on the error if supported. 
+func SetLastError(err, lastErr error) { + var te *resource.TimeoutError + var use *resource.UnexpectedStateError + + if ok := errors.As(err, &te); ok && te.LastError == nil { + te.LastError = lastErr + } else if ok := errors.As(err, &use); ok && use.LastError == nil { + use.LastError = lastErr + } +} diff --git a/aws/provider.go b/aws/provider.go index fdc121412f9..c80434e6706 100644 --- a/aws/provider.go +++ b/aws/provider.go @@ -218,6 +218,7 @@ func Provider() *schema.Provider { "aws_cloudfront_origin_request_policy": dataSourceAwsCloudFrontOriginRequestPolicy(), "aws_cloudhsm_v2_cluster": dataSourceCloudHsmV2Cluster(), "aws_cloudtrail_service_account": dataSourceAwsCloudTrailServiceAccount(), + "aws_cloudwatch_event_connection": dataSourceAwsCloudwatchEventConnection(), "aws_cloudwatch_event_source": dataSourceAwsCloudWatchEventSource(), "aws_cloudwatch_log_group": dataSourceAwsCloudwatchLogGroup(), "aws_codeartifact_authorization_token": dataSourceAwsCodeArtifactAuthorizationToken(), @@ -226,6 +227,7 @@ func Provider() *schema.Provider { "aws_codecommit_repository": dataSourceAwsCodeCommitRepository(), "aws_codestarconnections_connection": dataSourceAwsCodeStarConnectionsConnection(), "aws_cur_report_definition": dataSourceAwsCurReportDefinition(), + "aws_default_tags": dataSourceAwsDefaultTags(), "aws_db_cluster_snapshot": dataSourceAwsDbClusterSnapshot(), "aws_db_event_categories": dataSourceAwsDbEventCategories(), "aws_db_instance": dataSourceAwsDbInstance(), @@ -386,6 +388,7 @@ func Provider() *schema.Provider { "aws_secretsmanager_secret": dataSourceAwsSecretsManagerSecret(), "aws_secretsmanager_secret_rotation": dataSourceAwsSecretsManagerSecretRotation(), "aws_secretsmanager_secret_version": dataSourceAwsSecretsManagerSecretVersion(), + "aws_servicecatalog_constraint": dataSourceAwsServiceCatalogConstraint(), "aws_servicequotas_service": dataSourceAwsServiceQuotasService(), "aws_servicequotas_service_quota": dataSourceAwsServiceQuotasServiceQuota(), 
"aws_service_discovery_dns_namespace": dataSourceServiceDiscoveryDnsNamespace(), @@ -451,6 +454,11 @@ func Provider() *schema.Provider { "aws_ami_copy": resourceAwsAmiCopy(), "aws_ami_from_instance": resourceAwsAmiFromInstance(), "aws_ami_launch_permission": resourceAwsAmiLaunchPermission(), + "aws_amplify_app": resourceAwsAmplifyApp(), + "aws_amplify_backend_environment": resourceAwsAmplifyBackendEnvironment(), + "aws_amplify_branch": resourceAwsAmplifyBranch(), + "aws_amplify_domain_association": resourceAwsAmplifyDomainAssociation(), + "aws_amplify_webhook": resourceAwsAmplifyWebhook(), "aws_api_gateway_account": resourceAwsApiGatewayAccount(), "aws_api_gateway_api_key": resourceAwsApiGatewayApiKey(), "aws_api_gateway_authorizer": resourceAwsApiGatewayAuthorizer(), @@ -544,6 +552,8 @@ func Provider() *schema.Provider { "aws_cloudwatch_event_rule": resourceAwsCloudWatchEventRule(), "aws_cloudwatch_event_target": resourceAwsCloudWatchEventTarget(), "aws_cloudwatch_event_archive": resourceAwsCloudWatchEventArchive(), + "aws_cloudwatch_event_connection": resourceAwsCloudWatchEventConnection(), + "aws_cloudwatch_event_api_destination": resourceAwsCloudWatchEventApiDestination(), "aws_cloudwatch_log_destination": resourceAwsCloudWatchLogDestination(), "aws_cloudwatch_log_destination_policy": resourceAwsCloudWatchLogDestinationPolicy(), "aws_cloudwatch_log_group": resourceAwsCloudWatchLogGroup(), @@ -977,6 +987,9 @@ func Provider() *schema.Provider { "aws_sagemaker_notebook_instance_lifecycle_configuration": resourceAwsSagemakerNotebookInstanceLifeCycleConfiguration(), "aws_sagemaker_notebook_instance": resourceAwsSagemakerNotebookInstance(), "aws_sagemaker_user_profile": resourceAwsSagemakerUserProfile(), + "aws_schemas_discoverer": resourceAwsSchemasDiscoverer(), + "aws_schemas_registry": resourceAwsSchemasRegistry(), + "aws_schemas_schema": resourceAwsSchemasSchema(), "aws_secretsmanager_secret": resourceAwsSecretsManagerSecret(), "aws_secretsmanager_secret_policy": 
resourceAwsSecretsManagerSecretPolicy(), "aws_secretsmanager_secret_version": resourceAwsSecretsManagerSecretVersion(), @@ -1023,6 +1036,7 @@ func Provider() *schema.Provider { "aws_securityhub_organization_admin_account": resourceAwsSecurityHubOrganizationAdminAccount(), "aws_securityhub_product_subscription": resourceAwsSecurityHubProductSubscription(), "aws_securityhub_standards_subscription": resourceAwsSecurityHubStandardsSubscription(), + "aws_servicecatalog_budget_resource_association": resourceAwsServiceCatalogBudgetResourceAssociation(), "aws_servicecatalog_constraint": resourceAwsServiceCatalogConstraint(), "aws_servicecatalog_organizations_access": resourceAwsServiceCatalogOrganizationsAccess(), "aws_servicecatalog_portfolio": resourceAwsServiceCatalogPortfolio(), @@ -1030,7 +1044,10 @@ func Provider() *schema.Provider { "aws_servicecatalog_product": resourceAwsServiceCatalogProduct(), "aws_servicecatalog_service_action": resourceAwsServiceCatalogServiceAction(), "aws_servicecatalog_tag_option": resourceAwsServiceCatalogTagOption(), + "aws_servicecatalog_tag_option_resource_association": resourceAwsServiceCatalogTagOptionResourceAssociation(), + "aws_servicecatalog_principal_portfolio_association": resourceAwsServiceCatalogPrincipalPortfolioAssociation(), "aws_servicecatalog_product_portfolio_association": resourceAwsServiceCatalogProductPortfolioAssociation(), + "aws_servicecatalog_provisioning_artifact": resourceAwsServiceCatalogProvisioningArtifact(), "aws_service_discovery_http_namespace": resourceAwsServiceDiscoveryHttpNamespace(), "aws_service_discovery_private_dns_namespace": resourceAwsServiceDiscoveryPrivateDnsNamespace(), "aws_service_discovery_public_dns_namespace": resourceAwsServiceDiscoveryPublicDnsNamespace(), @@ -1271,6 +1288,7 @@ func init() { "backup", "batch", "budgets", + "chime", "cloud9", "cloudformation", "cloudfront", @@ -1344,6 +1362,7 @@ func init() { "lexmodels", "licensemanager", "lightsail", + "location", "macie", "macie2", 
"managedblockchain", @@ -1379,6 +1398,7 @@ func init() { "s3control", "s3outposts", "sagemaker", + "schemas", "sdb", "secretsmanager", "securityhub", diff --git a/aws/resource_aws_acmpca_certificate_authority.go b/aws/resource_aws_acmpca_certificate_authority.go index a569d394a81..c667f22507e 100644 --- a/aws/resource_aws_acmpca_certificate_authority.go +++ b/aws/resource_aws_acmpca_certificate_authority.go @@ -236,6 +236,12 @@ func resourceAwsAcmpcaCertificateAuthority() *schema.Resource { Optional: true, ValidateFunc: validation.StringLenBetween(0, 255), }, + "s3_object_acl": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice(acmpca.S3ObjectAcl_Values(), false), + }, }, }, }, @@ -602,6 +608,9 @@ func expandAcmpcaCrlConfiguration(l []interface{}) *acmpca.CrlConfiguration { if v, ok := m["s3_bucket_name"]; ok && v.(string) != "" { config.S3BucketName = aws.String(v.(string)) } + if v, ok := m["s3_object_acl"]; ok && v.(string) != "" { + config.S3ObjectAcl = aws.String(v.(string)) + } return config } @@ -668,6 +677,7 @@ func flattenAcmpcaCrlConfiguration(config *acmpca.CrlConfiguration) []interface{ "enabled": aws.BoolValue(config.Enabled), "expiration_in_days": int(aws.Int64Value(config.ExpirationInDays)), "s3_bucket_name": aws.StringValue(config.S3BucketName), + "s3_object_acl": aws.StringValue(config.S3ObjectAcl), } return []interface{}{m} diff --git a/aws/resource_aws_acmpca_certificate_authority_test.go b/aws/resource_aws_acmpca_certificate_authority_test.go index f2543d6ce26..274be156278 100644 --- a/aws/resource_aws_acmpca_certificate_authority_test.go +++ b/aws/resource_aws_acmpca_certificate_authority_test.go @@ -403,6 +403,7 @@ func TestAccAwsAcmpcaCertificateAuthority_RevocationConfiguration_CrlConfigurati resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.enabled", "true"), resource.TestCheckResourceAttr(resourceName, 
"revocation_configuration.0.crl_configuration.0.expiration_in_days", "1"), resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.s3_bucket_name", rName), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.s3_object_acl", "PUBLIC_READ"), ), }, // Test importing revocation configuration @@ -440,6 +441,56 @@ func TestAccAwsAcmpcaCertificateAuthority_RevocationConfiguration_CrlConfigurati }) } +func TestAccAwsAcmpcaCertificateAuthority_RevocationConfiguration_CrlConfiguration_S3ObjectAcl(t *testing.T) { + var certificateAuthority acmpca.CertificateAuthority + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_acmpca_certificate_authority.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ErrorCheck: testAccErrorCheck(t, acmpca.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsAcmpcaCertificateAuthorityDestroy, + Steps: []resource.TestStep{ + // Test creating revocation configuration on resource creation + { + Config: testAccAwsAcmpcaCertificateAuthorityConfig_RevocationConfiguration_CrlConfiguration_s3ObjectAcl(rName, "BUCKET_OWNER_FULL_CONTROL"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.expiration_in_days", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.s3_bucket_name", rName), + resource.TestCheckResourceAttr(resourceName, 
"revocation_configuration.0.crl_configuration.0.s3_object_acl", "BUCKET_OWNER_FULL_CONTROL"), + ), + }, + // Test importing revocation configuration + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "permanent_deletion_time_in_days", + }, + }, + // Test updating revocation configuration + { + Config: testAccAwsAcmpcaCertificateAuthorityConfig_RevocationConfiguration_CrlConfiguration_s3ObjectAcl(rName, "PUBLIC_READ"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAcmpcaCertificateAuthorityExists(resourceName, &certificateAuthority), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.expiration_in_days", "1"), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.s3_bucket_name", rName), + resource.TestCheckResourceAttr(resourceName, "revocation_configuration.0.crl_configuration.0.s3_object_acl", "PUBLIC_READ"), + ), + }, + }, + }) +} + func TestAccAwsAcmpcaCertificateAuthority_Tags(t *testing.T) { var certificateAuthority acmpca.CertificateAuthority resourceName := "aws_acmpca_certificate_authority.test" @@ -797,6 +848,36 @@ resource "aws_acmpca_certificate_authority" "test" { `, testAccAwsAcmpcaCertificateAuthorityConfig_S3Bucket(rName), expirationInDays) } +func testAccAwsAcmpcaCertificateAuthorityConfig_RevocationConfiguration_CrlConfiguration_s3ObjectAcl(rName, s3ObjectAcl string) string { + return fmt.Sprintf(` +%s + +resource "aws_acmpca_certificate_authority" "test" { + permanent_deletion_time_in_days = 7 + + certificate_authority_configuration { + key_algorithm = "RSA_4096" + signing_algorithm = 
"SHA512WITHRSA" + + subject { + common_name = "terraformtesting.com" + } + } + + revocation_configuration { + crl_configuration { + enabled = true + expiration_in_days = 1 + s3_bucket_name = aws_s3_bucket.test.id + s3_object_acl = "%s" + } + } + + depends_on = [aws_s3_bucket_policy.test] +} +`, testAccAwsAcmpcaCertificateAuthorityConfig_S3Bucket(rName), s3ObjectAcl) +} + func testAccAwsAcmpcaCertificateAuthorityConfig_S3Bucket(rName string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { diff --git a/aws/resource_aws_amplify_app.go b/aws/resource_aws_amplify_app.go new file mode 100644 index 00000000000..b28d2ba0120 --- /dev/null +++ b/aws/resource_aws_amplify_app.go @@ -0,0 +1,818 @@ +package aws + +import ( + "context" + "fmt" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/amplify" + "github.com/hashicorp/aws-sdk-go-base/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/keyvaluetags" + tfamplify "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/amplify" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/amplify/finder" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/tfresource" +) + +func resourceAwsAmplifyApp() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsAmplifyAppCreate, + Read: resourceAwsAmplifyAppRead, + Update: resourceAwsAmplifyAppUpdate, + Delete: resourceAwsAmplifyAppDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + CustomizeDiff: customdiff.Sequence( + SetTagsDiff, + customdiff.ForceNewIfChange("description", func(_ context.Context, old, new, meta interface{}) bool { + // Any existing value cannot be cleared. 
+ return new.(string) == "" + }), + customdiff.ForceNewIfChange("iam_service_role_arn", func(_ context.Context, old, new, meta interface{}) bool { + // Any existing value cannot be cleared. + return new.(string) == "" + }), + ), + + Schema: map[string]*schema.Schema{ + "access_token": { + Type: schema.TypeString, + Optional: true, + Sensitive: true, + ValidateFunc: validation.StringLenBetween(1, 255), + }, + + "arn": { + Type: schema.TypeString, + Computed: true, + }, + + "auto_branch_creation_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "basic_auth_credentials": { + Type: schema.TypeString, + Optional: true, + Sensitive: true, + ValidateFunc: validation.StringLenBetween(1, 2000), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + // These credentials are ignored if basic auth is not enabled. + if d.Get("auto_branch_creation_config.0.enable_basic_auth").(bool) { + return old == new + } + + return true + }, + }, + + "build_spec": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 25000), + }, + + "enable_auto_build": { + Type: schema.TypeBool, + Optional: true, + }, + + "enable_basic_auth": { + Type: schema.TypeBool, + Optional: true, + }, + + "enable_performance_mode": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + }, + + "enable_pull_request_preview": { + Type: schema.TypeBool, + Optional: true, + }, + + "environment_variables": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "framework": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 255), + }, + + "pull_request_environment_name": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 255), + }, + + "stage": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: 
validation.StringInSlice(amplify.Stage_Values(), false), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + // API returns "NONE" by default. + if old == tfamplify.StageNone && new == "" { + return true + } + + return old == new + }, + }, + }, + }, + }, + + "auto_branch_creation_patterns": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + // These patterns are ignored if branch auto-creation is not enabled. + if d.Get("enable_auto_branch_creation").(bool) { + return old == new + } + + return true + }, + }, + + "basic_auth_credentials": { + Type: schema.TypeString, + Optional: true, + Sensitive: true, + ValidateFunc: validation.StringLenBetween(1, 2000), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + // These credentials are ignored if basic auth is not enabled. + if d.Get("enable_basic_auth").(bool) { + return old == new + } + + return true + }, + }, + + "build_spec": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringLenBetween(1, 25000), + }, + + "custom_rule": { + Type: schema.TypeList, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "condition": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(1, 2048), + }, + + "source": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 2048), + }, + + "status": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice([]string{ + "200", + "301", + "302", + "404", + "404-200", + }, false), + }, + + "target": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 2048), + }, + }, + }, + }, + + "default_domain": { + Type: schema.TypeString, + Computed: true, + }, + + "description": { + Type: schema.TypeString, + 
Optional: true, + ValidateFunc: validation.StringLenBetween(1, 1000), + }, + + "enable_auto_branch_creation": { + Type: schema.TypeBool, + Optional: true, + }, + + "enable_basic_auth": { + Type: schema.TypeBool, + Optional: true, + }, + + "enable_branch_auto_build": { + Type: schema.TypeBool, + Optional: true, + }, + + "enable_branch_auto_deletion": { + Type: schema.TypeBool, + Optional: true, + }, + + "environment_variables": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + + "iam_service_role_arn": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validateArn, + }, + + "name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 255), + }, + + "oauth_token": { + Type: schema.TypeString, + Optional: true, + Sensitive: true, + ValidateFunc: validation.StringLenBetween(1, 1000), + }, + + "platform": { + Type: schema.TypeString, + Optional: true, + Default: amplify.PlatformWeb, + ValidateFunc: validation.StringInSlice(amplify.Platform_Values(), false), + }, + + "production_branch": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "branch_name": { + Type: schema.TypeString, + Computed: true, + }, + + "last_deploy_time": { + Type: schema.TypeString, + Computed: true, + }, + + "status": { + Type: schema.TypeString, + Computed: true, + }, + + "thumbnail_url": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + + "repository": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 1000), + }, + + "tags": tagsSchema(), + "tags_all": tagsSchemaComputed(), + }, + } +} + +func resourceAwsAmplifyAppCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).amplifyconn + defaultTagsConfig := meta.(*AWSClient).DefaultTagsConfig + tags := 
defaultTagsConfig.MergeTags(keyvaluetags.New(d.Get("tags").(map[string]interface{}))) + + name := d.Get("name").(string) + + input := &amplify.CreateAppInput{ + Name: aws.String(name), + } + + if v, ok := d.GetOk("access_token"); ok { + input.AccessToken = aws.String(v.(string)) + } + + if v, ok := d.GetOk("auto_branch_creation_config"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.AutoBranchCreationConfig = expandAmplifyAutoBranchCreationConfig(v.([]interface{})[0].(map[string]interface{})) + } + + if v, ok := d.GetOk("auto_branch_creation_patterns"); ok && v.(*schema.Set).Len() > 0 { + input.AutoBranchCreationPatterns = expandStringSet(v.(*schema.Set)) + } + + if v, ok := d.GetOk("basic_auth_credentials"); ok { + input.BasicAuthCredentials = aws.String(v.(string)) + } + + if v, ok := d.GetOk("build_spec"); ok { + input.BuildSpec = aws.String(v.(string)) + } + + if v, ok := d.GetOk("custom_rule"); ok && len(v.([]interface{})) > 0 { + input.CustomRules = expandAmplifyCustomRules(v.([]interface{})) + } + + if v, ok := d.GetOk("description"); ok { + input.Description = aws.String(v.(string)) + } + + if v, ok := d.GetOk("enable_auto_branch_creation"); ok { + input.EnableAutoBranchCreation = aws.Bool(v.(bool)) + } + + if v, ok := d.GetOk("enable_basic_auth"); ok { + input.EnableBasicAuth = aws.Bool(v.(bool)) + } + + if v, ok := d.GetOk("enable_branch_auto_build"); ok { + input.EnableBranchAutoBuild = aws.Bool(v.(bool)) + } + + if v, ok := d.GetOk("enable_branch_auto_deletion"); ok { + input.EnableBranchAutoDeletion = aws.Bool(v.(bool)) + } + + if v, ok := d.GetOk("environment_variables"); ok && len(v.(map[string]interface{})) > 0 { + input.EnvironmentVariables = expandStringMap(v.(map[string]interface{})) + } + + if v, ok := d.GetOk("iam_service_role_arn"); ok { + input.IamServiceRoleArn = aws.String(v.(string)) + } + + if v, ok := d.GetOk("oauth_token"); ok { + input.OauthToken = aws.String(v.(string)) + } + + if v, ok := d.GetOk("platform"); 
ok { + input.Platform = aws.String(v.(string)) + } + + if v, ok := d.GetOk("repository"); ok { + input.Repository = aws.String(v.(string)) + } + + if len(tags) > 0 { + input.Tags = tags.IgnoreAws().AmplifyTags() + } + + log.Printf("[DEBUG] Creating Amplify App: %s", input) + output, err := conn.CreateApp(input) + + if err != nil { + return fmt.Errorf("error creating Amplify App (%s): %w", name, err) + } + + d.SetId(aws.StringValue(output.App.AppId)) + + return resourceAwsAmplifyAppRead(d, meta) +} + +func resourceAwsAmplifyAppRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).amplifyconn + defaultTagsConfig := meta.(*AWSClient).DefaultTagsConfig + ignoreTagsConfig := meta.(*AWSClient).IgnoreTagsConfig + + app, err := finder.AppByID(conn, d.Id()) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] Amplify App (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return fmt.Errorf("error reading Amplify App (%s): %w", d.Id(), err) + } + + d.Set("arn", app.AppArn) + if app.AutoBranchCreationConfig != nil { + if err := d.Set("auto_branch_creation_config", []interface{}{flattenAmplifyAutoBranchCreationConfig(app.AutoBranchCreationConfig)}); err != nil { + return fmt.Errorf("error setting auto_branch_creation_config: %w", err) + } + } else { + d.Set("auto_branch_creation_config", nil) + } + d.Set("auto_branch_creation_patterns", aws.StringValueSlice(app.AutoBranchCreationPatterns)) + d.Set("basic_auth_credentials", app.BasicAuthCredentials) + d.Set("build_spec", app.BuildSpec) + if err := d.Set("custom_rule", flattenAmplifyCustomRules(app.CustomRules)); err != nil { + return fmt.Errorf("error setting custom_rule: %w", err) + } + d.Set("default_domain", app.DefaultDomain) + d.Set("description", app.Description) + d.Set("enable_auto_branch_creation", app.EnableAutoBranchCreation) + d.Set("enable_basic_auth", app.EnableBasicAuth) + d.Set("enable_branch_auto_build", 
app.EnableBranchAutoBuild) + d.Set("enable_branch_auto_deletion", app.EnableBranchAutoDeletion) + d.Set("environment_variables", aws.StringValueMap(app.EnvironmentVariables)) + d.Set("iam_service_role_arn", app.IamServiceRoleArn) + d.Set("name", app.Name) + d.Set("platform", app.Platform) + if app.ProductionBranch != nil { + if err := d.Set("production_branch", []interface{}{flattenAmplifyProductionBranch(app.ProductionBranch)}); err != nil { + return fmt.Errorf("error setting production_branch: %w", err) + } + } else { + d.Set("production_branch", nil) + } + d.Set("repository", app.Repository) + + tags := keyvaluetags.AmplifyKeyValueTags(app.Tags).IgnoreAws().IgnoreConfig(ignoreTagsConfig) + + if err := d.Set("tags", tags.RemoveDefaultConfig(defaultTagsConfig).Map()); err != nil { + return fmt.Errorf("error setting tags: %w", err) + } + + if err := d.Set("tags_all", tags.Map()); err != nil { + return fmt.Errorf("error setting tags_all: %w", err) + } + + return nil +} + +func resourceAwsAmplifyAppUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).amplifyconn + + if d.HasChangesExcept("tags", "tags_all") { + input := &amplify.UpdateAppInput{ + AppId: aws.String(d.Id()), + } + + if d.HasChange("access_token") { + input.AccessToken = aws.String(d.Get("access_token").(string)) + } + + if d.HasChange("auto_branch_creation_config") { + input.AutoBranchCreationConfig = expandAmplifyAutoBranchCreationConfig(d.Get("auto_branch_creation_config").([]interface{})[0].(map[string]interface{})) + + if d.HasChange("auto_branch_creation_config.0.environment_variables") { + if v := d.Get("auto_branch_creation_config.0.environment_variables").(map[string]interface{}); len(v) == 0 { + input.AutoBranchCreationConfig.EnvironmentVariables = aws.StringMap(map[string]string{"": ""}) + } + } + } + + if d.HasChange("auto_branch_creation_patterns") { + input.AutoBranchCreationPatterns = expandStringSet(d.Get("auto_branch_creation_patterns").(*schema.Set)) + } + 
+ if d.HasChange("basic_auth_credentials") { + input.BasicAuthCredentials = aws.String(d.Get("basic_auth_credentials").(string)) + } + + if d.HasChange("build_spec") { + input.BuildSpec = aws.String(d.Get("build_spec").(string)) + } + + if d.HasChange("custom_rule") { + if v := d.Get("custom_rule").([]interface{}); len(v) > 0 { + input.CustomRules = expandAmplifyCustomRules(v) + } else { + input.CustomRules = []*amplify.CustomRule{} + } + } + + if d.HasChange("description") { + input.Description = aws.String(d.Get("description").(string)) + } + + if d.HasChange("enable_auto_branch_creation") { + input.EnableAutoBranchCreation = aws.Bool(d.Get("enable_auto_branch_creation").(bool)) + } + + if d.HasChange("enable_basic_auth") { + input.EnableBasicAuth = aws.Bool(d.Get("enable_basic_auth").(bool)) + } + + if d.HasChange("enable_branch_auto_build") { + input.EnableBranchAutoBuild = aws.Bool(d.Get("enable_branch_auto_build").(bool)) + } + + if d.HasChange("enable_branch_auto_deletion") { + input.EnableBranchAutoDeletion = aws.Bool(d.Get("enable_branch_auto_deletion").(bool)) + } + + if d.HasChange("environment_variables") { + if v := d.Get("environment_variables").(map[string]interface{}); len(v) > 0 { + input.EnvironmentVariables = expandStringMap(v) + } else { + input.EnvironmentVariables = aws.StringMap(map[string]string{"": ""}) + } + } + + if d.HasChange("iam_service_role_arn") { + input.IamServiceRoleArn = aws.String(d.Get("iam_service_role_arn").(string)) + } + + if d.HasChange("name") { + input.Name = aws.String(d.Get("name").(string)) + } + + if d.HasChange("oauth_token") { + input.OauthToken = aws.String(d.Get("oauth_token").(string)) + } + + if d.HasChange("platform") { + input.Platform = aws.String(d.Get("platform").(string)) + } + + if d.HasChange("repository") { + input.Repository = aws.String(d.Get("repository").(string)) + } + + _, err := conn.UpdateApp(input) + + if err != nil { + return fmt.Errorf("error updating Amplify App (%s): %w", d.Id(), err) + } 
+ } + + if d.HasChange("tags_all") { + o, n := d.GetChange("tags_all") + if err := keyvaluetags.AmplifyUpdateTags(conn, d.Get("arn").(string), o, n); err != nil { + return fmt.Errorf("error updating tags: %w", err) + } + } + + return resourceAwsAmplifyAppRead(d, meta) +} + +func resourceAwsAmplifyAppDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).amplifyconn + + log.Printf("[DEBUG] Deleting Amplify App (%s)", d.Id()) + _, err := conn.DeleteApp(&amplify.DeleteAppInput{ + AppId: aws.String(d.Id()), + }) + + if tfawserr.ErrCodeEquals(err, amplify.ErrCodeNotFoundException) { + return nil + } + + if err != nil { + return fmt.Errorf("error deleting Amplify App (%s): %w", d.Id(), err) + } + + return nil +} + +func expandAmplifyAutoBranchCreationConfig(tfMap map[string]interface{}) *amplify.AutoBranchCreationConfig { + if tfMap == nil { + return nil + } + + apiObject := &amplify.AutoBranchCreationConfig{} + + if v, ok := tfMap["basic_auth_credentials"].(string); ok && v != "" { + apiObject.BasicAuthCredentials = aws.String(v) + } + + if v, ok := tfMap["build_spec"].(string); ok && v != "" { + apiObject.BuildSpec = aws.String(v) + } + + if v, ok := tfMap["enable_auto_build"].(bool); ok { + apiObject.EnableAutoBuild = aws.Bool(v) + } + + if v, ok := tfMap["enable_basic_auth"].(bool); ok { + apiObject.EnableBasicAuth = aws.Bool(v) + } + + if v, ok := tfMap["enable_performance_mode"].(bool); ok { + apiObject.EnablePerformanceMode = aws.Bool(v) + } + + if v, ok := tfMap["enable_pull_request_preview"].(bool); ok { + apiObject.EnablePullRequestPreview = aws.Bool(v) + } + + if v, ok := tfMap["environment_variables"].(map[string]interface{}); ok && len(v) > 0 { + apiObject.EnvironmentVariables = expandStringMap(v) + } + + if v, ok := tfMap["framework"].(string); ok && v != "" { + apiObject.Framework = aws.String(v) + } + + if v, ok := tfMap["pull_request_environment_name"].(string); ok && v != "" { + apiObject.PullRequestEnvironmentName = 
aws.String(v) + } + + if v, ok := tfMap["stage"].(string); ok && v != "" && v != tfamplify.StageNone { + apiObject.Stage = aws.String(v) + } + + return apiObject +} + +func flattenAmplifyAutoBranchCreationConfig(apiObject *amplify.AutoBranchCreationConfig) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.BasicAuthCredentials; v != nil { + tfMap["basic_auth_credentials"] = aws.StringValue(v) + } + + if v := apiObject.BuildSpec; v != nil { + tfMap["build_spec"] = aws.StringValue(v) + } + + if v := apiObject.EnableAutoBuild; v != nil { + tfMap["enable_auto_build"] = aws.BoolValue(v) + } + + if v := apiObject.EnableBasicAuth; v != nil { + tfMap["enable_basic_auth"] = aws.BoolValue(v) + } + + if v := apiObject.EnablePerformanceMode; v != nil { + tfMap["enable_performance_mode"] = aws.BoolValue(v) + } + + if v := apiObject.EnablePullRequestPreview; v != nil { + tfMap["enable_pull_request_preview"] = aws.BoolValue(v) + } + + if v := apiObject.EnvironmentVariables; v != nil { + tfMap["environment_variables"] = aws.StringValueMap(v) + } + + if v := apiObject.Framework; v != nil { + tfMap["framework"] = aws.StringValue(v) + } + + if v := apiObject.PullRequestEnvironmentName; v != nil { + tfMap["pull_request_environment_name"] = aws.StringValue(v) + } + + if v := apiObject.Stage; v != nil { + tfMap["stage"] = aws.StringValue(v) + } + + return tfMap +} + +func expandAmplifyCustomRule(tfMap map[string]interface{}) *amplify.CustomRule { + if tfMap == nil { + return nil + } + + apiObject := &amplify.CustomRule{} + + if v, ok := tfMap["condition"].(string); ok && v != "" { + apiObject.Condition = aws.String(v) + } + + if v, ok := tfMap["source"].(string); ok && v != "" { + apiObject.Source = aws.String(v) + } + + if v, ok := tfMap["status"].(string); ok && v != "" { + apiObject.Status = aws.String(v) + } + + if v, ok := tfMap["target"].(string); ok && v != "" { + apiObject.Target = aws.String(v) + } + + 
return apiObject +} + +func expandAmplifyCustomRules(tfList []interface{}) []*amplify.CustomRule { + if len(tfList) == 0 { + return nil + } + + var apiObjects []*amplify.CustomRule + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandAmplifyCustomRule(tfMap) + + if apiObject == nil { + continue + } + + apiObjects = append(apiObjects, apiObject) + } + + return apiObjects +} + +func flattenAmplifyCustomRule(apiObject *amplify.CustomRule) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.Condition; v != nil { + tfMap["condition"] = aws.StringValue(v) + } + + if v := apiObject.Source; v != nil { + tfMap["source"] = aws.StringValue(v) + } + + if v := apiObject.Status; v != nil { + tfMap["status"] = aws.StringValue(v) + } + + if v := apiObject.Target; v != nil { + tfMap["target"] = aws.StringValue(v) + } + + return tfMap +} + +func flattenAmplifyCustomRules(apiObjects []*amplify.CustomRule) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + if apiObject == nil { + continue + } + + tfList = append(tfList, flattenAmplifyCustomRule(apiObject)) + } + + return tfList +} + +func flattenAmplifyProductionBranch(apiObject *amplify.ProductionBranch) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.BranchName; v != nil { + tfMap["branch_name"] = aws.StringValue(v) + } + + if v := apiObject.LastDeployTime; v != nil { + tfMap["last_deploy_time"] = aws.TimeValue(v).Format(time.RFC3339) + } + + if v := apiObject.Status; v != nil { + tfMap["status"] = aws.StringValue(v) + } + + if v := apiObject.ThumbnailUrl; v != nil { + tfMap["thumbnail_url"] = aws.StringValue(v) + } + + return tfMap +} diff --git a/aws/resource_aws_amplify_app_test.go 
b/aws/resource_aws_amplify_app_test.go new file mode 100644 index 00000000000..94b49a37ee8 --- /dev/null +++ b/aws/resource_aws_amplify_app_test.go @@ -0,0 +1,970 @@ +package aws + +import ( + "encoding/base64" + "fmt" + "log" + "os" + "regexp" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/amplify" + "github.com/hashicorp/go-multierror" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/amplify/finder" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/amplify/lister" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/tfresource" +) + +func init() { + resource.AddTestSweepers("aws_amplify_app", &resource.Sweeper{ + Name: "aws_amplify_app", + F: testSweepAmplifyApps, + }) +} + +func testSweepAmplifyApps(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*AWSClient).amplifyconn + input := &amplify.ListAppsInput{} + var sweeperErrs *multierror.Error + + err = lister.ListAppsPages(conn, input, func(page *amplify.ListAppsOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, app := range page.Apps { + r := resourceAwsAmplifyApp() + d := r.Data(nil) + d.SetId(aws.StringValue(app.AppId)) + err = r.Delete(d, client) + + if err != nil { + log.Printf("[ERROR] %s", err) + sweeperErrs = multierror.Append(sweeperErrs, err) + continue + } + } + + return !lastPage + }) + + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping Amplify Apps sweep for %s: %s", region, err) + return sweeperErrs.ErrorOrNil() // In case we have completed some pages, but had errors + } + + if err != nil { + sweeperErrs = multierror.Append(sweeperErrs, 
fmt.Errorf("error listing Amplify Apps: %w", err)) + } + + return sweeperErrs.ErrorOrNil() +} + +func testAccAWSAmplifyApp_basic(t *testing.T) { + var app amplify.App + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_app.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyAppDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyAppConfigName(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + resource.TestCheckNoResourceAttr(resourceName, "access_token"), + testAccMatchResourceAttrRegionalARN(resourceName, "arn", "amplify", regexp.MustCompile(`apps/.+`)), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.#", "0"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_patterns.#", "0"), + resource.TestCheckResourceAttr(resourceName, "basic_auth_credentials", ""), + resource.TestCheckResourceAttr(resourceName, "build_spec", ""), + resource.TestCheckResourceAttr(resourceName, "custom_rule.#", "0"), + resource.TestMatchResourceAttr(resourceName, "default_domain", regexp.MustCompile(`\.amplifyapp\.com$`)), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "enable_auto_branch_creation", "false"), + resource.TestCheckResourceAttr(resourceName, "enable_basic_auth", "false"), + resource.TestCheckResourceAttr(resourceName, "enable_branch_auto_build", "false"), + resource.TestCheckResourceAttr(resourceName, "enable_branch_auto_deletion", "false"), + resource.TestCheckResourceAttr(resourceName, "environment_variables.%", "0"), + resource.TestCheckResourceAttr(resourceName, "iam_service_role_arn", ""), + resource.TestCheckResourceAttr(resourceName, "name", rName), + 
resource.TestCheckNoResourceAttr(resourceName, "oauth_token"), + resource.TestCheckResourceAttr(resourceName, "platform", "WEB"), + resource.TestCheckResourceAttr(resourceName, "production_branch.#", "0"), + resource.TestCheckResourceAttr(resourceName, "repository", ""), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccAWSAmplifyApp_disappears(t *testing.T) { + var app amplify.App + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_app.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyAppDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyAppConfigName(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + testAccCheckResourceDisappears(testAccProvider, resourceAwsAmplifyApp(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccAWSAmplifyApp_Tags(t *testing.T) { + var app amplify.App + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_app.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyAppDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyAppConfigTags1(rName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + ResourceName: resourceName, 
+ ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSAmplifyAppConfigTags2(rName, "key1", "value1updated", "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccAWSAmplifyAppConfigTags1(rName, "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + }, + }) +} + +func testAccAWSAmplifyApp_AutoBranchCreationConfig(t *testing.T) { + var app amplify.App + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_app.test" + + credentials := base64.StdEncoding.EncodeToString([]byte("username1:password1")) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyAppDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyAppConfigAutoBranchCreationConfigNoAutoBranchCreationConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.basic_auth_credentials", ""), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.build_spec", ""), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.enable_auto_build", "false"), + resource.TestCheckResourceAttr(resourceName, 
"auto_branch_creation_config.0.enable_basic_auth", "false"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.enable_performance_mode", "false"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.enable_pull_request_preview", "false"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.environment_variables.%", "0"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.framework", ""), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.pull_request_environment_name", ""), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.stage", "NONE"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_patterns.#", "2"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_patterns.0", "*"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_patterns.1", "*/**"), + resource.TestCheckResourceAttr(resourceName, "enable_auto_branch_creation", "true"), + ), + }, + { + Config: testAccAWSAmplifyAppConfigAutoBranchCreationConfigAutoBranchCreationConfig(rName, credentials), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.basic_auth_credentials", credentials), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.build_spec", "version: 0.1"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.enable_auto_build", "true"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.enable_basic_auth", "true"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.enable_performance_mode", "false"), + resource.TestCheckResourceAttr(resourceName, 
"auto_branch_creation_config.0.enable_pull_request_preview", "true"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.environment_variables.%", "1"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.environment_variables.ENVVAR1", "1"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.framework", "React"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.pull_request_environment_name", "test1"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.stage", "DEVELOPMENT"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_patterns.#", "1"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_patterns.0", "feature/*"), + resource.TestCheckResourceAttr(resourceName, "enable_auto_branch_creation", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSAmplifyAppConfigAutoBranchCreationConfigAutoBranchCreationConfigUpdated(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.#", "1"), + // Clearing basic_auth_credentials not reflected in API. 
+ // resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.basic_auth_credentials", ""), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.build_spec", "version: 0.2"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.enable_auto_build", "false"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.enable_basic_auth", "false"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.enable_performance_mode", "false"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.enable_pull_request_preview", "false"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.environment_variables.%", "0"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.framework", "React"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.pull_request_environment_name", "test2"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.0.stage", "EXPERIMENTAL"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_patterns.#", "1"), + resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_patterns.0", "feature/*"), + resource.TestCheckResourceAttr(resourceName, "enable_auto_branch_creation", "true"), + ), + }, + { + Config: testAccAWSAmplifyAppConfigName(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + // No change is reflected in API. 
+ // resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_config.#", "0"), + // resource.TestCheckResourceAttr(resourceName, "auto_branch_creation_patterns.#", "0"), + resource.TestCheckResourceAttr(resourceName, "enable_auto_branch_creation", "false"), + ), + }, + }, + }) +} + +func testAccAWSAmplifyApp_BasicAuthCredentials(t *testing.T) { + var app amplify.App + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_app.test" + + credentials1 := base64.StdEncoding.EncodeToString([]byte("username1:password1")) + credentials2 := base64.StdEncoding.EncodeToString([]byte("username2:password2")) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyAppDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyAppConfigBasicAuthCredentials(rName, credentials1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + resource.TestCheckResourceAttr(resourceName, "basic_auth_credentials", credentials1), + resource.TestCheckResourceAttr(resourceName, "enable_basic_auth", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSAmplifyAppConfigBasicAuthCredentials(rName, credentials2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + resource.TestCheckResourceAttr(resourceName, "basic_auth_credentials", credentials2), + resource.TestCheckResourceAttr(resourceName, "enable_basic_auth", "true"), + ), + }, + { + Config: testAccAWSAmplifyAppConfigName(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + // Clearing basic_auth_credentials not reflected in API. 
+ // resource.TestCheckResourceAttr(resourceName, "basic_auth_credentials", ""), + resource.TestCheckResourceAttr(resourceName, "enable_basic_auth", "false"), + ), + }, + }, + }) +} + +func testAccAWSAmplifyApp_BuildSpec(t *testing.T) { + var app amplify.App + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_app.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyAppDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyAppConfigBuildSpec(rName, "version: 0.1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + resource.TestCheckResourceAttr(resourceName, "build_spec", "version: 0.1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSAmplifyAppConfigBuildSpec(rName, "version: 0.2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + resource.TestCheckResourceAttr(resourceName, "build_spec", "version: 0.2"), + ), + }, + { + Config: testAccAWSAmplifyAppConfigName(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + // build_spec is Computed. 
+ resource.TestCheckResourceAttr(resourceName, "build_spec", "version: 0.2"), + ), + }, + }, + }) +} + +func testAccAWSAmplifyApp_CustomRules(t *testing.T) { + var app amplify.App + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_app.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyAppDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyAppConfigCustomRules(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + resource.TestCheckResourceAttr(resourceName, "custom_rule.#", "1"), + resource.TestCheckResourceAttr(resourceName, "custom_rule.0.source", "/<*>"), + resource.TestCheckResourceAttr(resourceName, "custom_rule.0.status", "404"), + resource.TestCheckResourceAttr(resourceName, "custom_rule.0.target", "/index.html"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSAmplifyAppConfigCustomRulesUpdated(rName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(resourceName, "custom_rule.#", "2"), + resource.TestCheckResourceAttr(resourceName, "custom_rule.0.condition", ""), + resource.TestCheckResourceAttr(resourceName, "custom_rule.0.source", "/documents"), + resource.TestCheckResourceAttr(resourceName, "custom_rule.0.status", "302"), + resource.TestCheckResourceAttr(resourceName, "custom_rule.0.target", "/documents/us"), + resource.TestCheckResourceAttr(resourceName, "custom_rule.1.source", "/<*>"), + resource.TestCheckResourceAttr(resourceName, "custom_rule.1.status", "200"), + resource.TestCheckResourceAttr(resourceName, "custom_rule.1.target", "/index.html"), + ), + }, + { + Config: testAccAWSAmplifyAppConfigName(rName), + Check: resource.ComposeTestCheckFunc( + 
testAccCheckAWSAmplifyAppExists(resourceName, &app), + resource.TestCheckResourceAttr(resourceName, "custom_rule.#", "0"), + ), + }, + }, + }) +} + +func testAccAWSAmplifyApp_Description(t *testing.T) { + var app1, app2, app3 amplify.App + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_app.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyAppDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyAppConfigDescription(rName, "description 1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app1), + resource.TestCheckResourceAttr(resourceName, "description", "description 1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSAmplifyAppConfigDescription(rName, "description 2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app2), + testAccCheckAWSAmplifyAppNotRecreated(&app1, &app2), + resource.TestCheckResourceAttr(resourceName, "description", "description 2"), + ), + }, + { + Config: testAccAWSAmplifyAppConfigName(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app3), + testAccCheckAWSAmplifyAppRecreated(&app2, &app3), + resource.TestCheckResourceAttr(resourceName, "description", ""), + ), + }, + }, + }) +} + +func testAccAWSAmplifyApp_EnvironmentVariables(t *testing.T) { + var app amplify.App + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_app.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: 
testAccCheckAWSAmplifyAppDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyAppConfigEnvironmentVariables(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + resource.TestCheckResourceAttr(resourceName, "environment_variables.%", "1"), + resource.TestCheckResourceAttr(resourceName, "environment_variables.ENVVAR1", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSAmplifyAppConfigEnvironmentVariablesUpdated(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + resource.TestCheckResourceAttr(resourceName, "environment_variables.%", "2"), + resource.TestCheckResourceAttr(resourceName, "environment_variables.ENVVAR1", "2"), + resource.TestCheckResourceAttr(resourceName, "environment_variables.ENVVAR2", "2"), + ), + }, + { + Config: testAccAWSAmplifyAppConfigName(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + resource.TestCheckResourceAttr(resourceName, "environment_variables.%", "0"), + ), + }, + }, + }) +} + +func testAccAWSAmplifyApp_IamServiceRole(t *testing.T) { + var app1, app2, app3 amplify.App + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_app.test" + iamRole1ResourceName := "aws_iam_role.test1" + iamRole2ResourceName := "aws_iam_role.test2" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyAppDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyAppConfigIAMServiceRoleArn(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app1), + resource.TestCheckResourceAttrPair(resourceName, "iam_service_role_arn", 
iamRole1ResourceName, "arn")), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSAmplifyAppConfigIAMServiceRoleArnUpdated(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app2), + testAccCheckAWSAmplifyAppNotRecreated(&app1, &app2), + resource.TestCheckResourceAttrPair(resourceName, "iam_service_role_arn", iamRole2ResourceName, "arn"), + ), + }, + { + Config: testAccAWSAmplifyAppConfigName(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app3), + testAccCheckAWSAmplifyAppRecreated(&app2, &app3), + resource.TestCheckResourceAttr(resourceName, "iam_service_role_arn", ""), + ), + }, + }, + }) +} + +func testAccAWSAmplifyApp_Name(t *testing.T) { + var app amplify.App + rName1 := acctest.RandomWithPrefix("tf-acc-test") + rName2 := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_app.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyAppDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyAppConfigName(rName1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + resource.TestCheckResourceAttr(resourceName, "name", rName1), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSAmplifyAppConfigName(rName2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + resource.TestCheckResourceAttr(resourceName, "name", rName2), + ), + }, + }, + }) +} + +func testAccAWSAmplifyApp_Repository(t *testing.T) { + key := "AMPLIFY_GITHUB_ACCESS_TOKEN" + accessToken := os.Getenv(key) + if accessToken == "" { + t.Skipf("Environment variable 
%s is not set", key) + } + + key = "AMPLIFY_GITHUB_REPOSITORY" + repository := os.Getenv(key) + if repository == "" { + t.Skipf("Environment variable %s is not set", key) + } + + var app amplify.App + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_app.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyAppDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyAppConfigRepository(rName, repository, accessToken), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyAppExists(resourceName, &app), + resource.TestCheckResourceAttr(resourceName, "access_token", accessToken), + resource.TestCheckResourceAttr(resourceName, "repository", repository), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + // access_token is ignored because AWS does not store access_token and oauth_token + // See https://docs.aws.amazon.com/sdk-for-go/api/service/amplify/#CreateAppInput + ImportStateVerifyIgnore: []string{"access_token"}, + }, + }, + }) +} + +func testAccCheckAWSAmplifyAppExists(n string, v *amplify.App) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Amplify App ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).amplifyconn + + output, err := finder.AppByID(conn, rs.Primary.ID) + + if err != nil { + return err + } + + *v = *output + + return nil + } +} + +func testAccCheckAWSAmplifyAppDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).amplifyconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_amplify_app" { + continue + } + + _, err := 
finder.AppByID(conn, rs.Primary.ID) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return fmt.Errorf("Amplify App %s still exists", rs.Primary.ID) + } + + return nil +} + +func testAccPreCheckAWSAmplify(t *testing.T) { + if testAccGetPartition() == "aws-us-gov" { + t.Skip("AWS Amplify is not supported in GovCloud partition") + } +} + +func testAccCheckAWSAmplifyAppNotRecreated(before, after *amplify.App) resource.TestCheckFunc { + return func(s *terraform.State) error { + if before, after := aws.StringValue(before.AppId), aws.StringValue(after.AppId); before != after { + return fmt.Errorf("Amplify App (%s/%s) recreated", before, after) + } + + return nil + } +} + +func testAccCheckAWSAmplifyAppRecreated(before, after *amplify.App) resource.TestCheckFunc { + return func(s *terraform.State) error { + if before, after := aws.StringValue(before.AppId), aws.StringValue(after.AppId); before == after { + return fmt.Errorf("Amplify App (%s) not recreated", before) + } + + return nil + } +} + +func testAccAWSAmplifyAppConfigName(rName string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q +} +`, rName) +} + +func testAccAWSAmplifyAppConfigTags1(rName, tagKey1, tagValue1 string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q + + tags = { + %[2]q = %[3]q + } +} +`, rName, tagKey1, tagValue1) +} + +func testAccAWSAmplifyAppConfigTags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q + + tags = { + %[2]q = %[3]q + %[4]q = %[5]q + } +} +`, rName, tagKey1, tagValue1, tagKey2, tagValue2) +} + +func testAccAWSAmplifyAppConfigAutoBranchCreationConfigNoAutoBranchCreationConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q + + enable_auto_branch_creation = true + + auto_branch_creation_patterns = [ + "*", + "*/**", + ] +} +`, 
rName) +} + +func testAccAWSAmplifyAppConfigAutoBranchCreationConfigAutoBranchCreationConfig(rName, basicAuthCredentials string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q + + enable_auto_branch_creation = true + + auto_branch_creation_patterns = [ + "feature/*", + ] + + auto_branch_creation_config { + build_spec = "version: 0.1" + framework = "React" + stage = "DEVELOPMENT" + + enable_basic_auth = true + basic_auth_credentials = %[2]q + + enable_auto_build = true + enable_pull_request_preview = true + pull_request_environment_name = "test1" + + environment_variables = { + ENVVAR1 = "1" + } + } +} + +`, rName, basicAuthCredentials) +} + +func testAccAWSAmplifyAppConfigAutoBranchCreationConfigAutoBranchCreationConfigUpdated(rName string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q + + enable_auto_branch_creation = true + + auto_branch_creation_patterns = [ + "feature/*", + ] + + auto_branch_creation_config { + build_spec = "version: 0.2" + framework = "React" + stage = "EXPERIMENTAL" + + enable_basic_auth = false + + enable_auto_build = false + enable_pull_request_preview = false + + pull_request_environment_name = "test2" + } +} +`, rName) +} + +func testAccAWSAmplifyAppConfigBasicAuthCredentials(rName, basicAuthCredentials string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q + + basic_auth_credentials = %[2]q + enable_basic_auth = true +} +`, rName, basicAuthCredentials) +} + +func testAccAWSAmplifyAppConfigBuildSpec(rName, buildSpec string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q + + build_spec = %[2]q +} +`, rName, buildSpec) +} + +func testAccAWSAmplifyAppConfigCustomRules(rName string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q + + custom_rule { + source = "/<*>" + status = "404" + target = "/index.html" + } +} +`, rName) +} + +func 
testAccAWSAmplifyAppConfigCustomRulesUpdated(rName string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q + + custom_rule { + condition = "" + source = "/documents" + status = "302" + target = "/documents/us" + } + + custom_rule { + source = "/<*>" + status = "200" + target = "/index.html" + } +} +`, rName) +} + +func testAccAWSAmplifyAppConfigDescription(rName, description string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q + + description = %[2]q +} +`, rName, description) +} + +func testAccAWSAmplifyAppConfigEnvironmentVariables(rName string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q + + environment_variables = { + ENVVAR1 = "1" + } +} +`, rName) +} + +func testAccAWSAmplifyAppConfigEnvironmentVariablesUpdated(rName string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q + + environment_variables = { + ENVVAR1 = "2", + ENVVAR2 = "2" + } +} +`, rName) +} + +func testAccAWSAmplifyAppConfigIAMServiceRoleBase(rName string) string { + return fmt.Sprintf(` +resource "aws_iam_role" "test1" { + name = "%[1]s-1" + + assume_role_policy = < 0 { + input.EnvironmentVariables = expandStringMap(v.(map[string]interface{})) + } + + if v, ok := d.GetOk("framework"); ok { + input.Framework = aws.String(v.(string)) + } + + if v, ok := d.GetOk("pull_request_environment_name"); ok { + input.PullRequestEnvironmentName = aws.String(v.(string)) + } + + if v, ok := d.GetOk("stage"); ok { + input.Stage = aws.String(v.(string)) + } + + if v, ok := d.GetOk("ttl"); ok { + input.Ttl = aws.String(v.(string)) + } + + if len(tags) > 0 { + input.Tags = tags.IgnoreAws().AmplifyTags() + } + + log.Printf("[DEBUG] Creating Amplify Branch: %s", input) + _, err := conn.CreateBranch(input) + + if err != nil { + return fmt.Errorf("error creating Amplify Branch (%s): %w", id, err) + } + + d.SetId(id) + + return resourceAwsAmplifyBranchRead(d, meta) +} + 
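The branch resource's composite ID (`appID` plus `branchName`) is created and parsed by the `tfamplify.BranchCreateResourceID`/`BranchParseResourceID` helpers from the provider's internal amplify package, which are referenced throughout this diff but defined outside it. A minimal self-contained sketch of the pair, assuming a `/` separator (the helper names come from the diff; the separator constant and error text here are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

const branchResourceIDSeparator = "/"

// BranchCreateResourceID composes the Terraform resource ID from the
// Amplify app ID and branch name.
func BranchCreateResourceID(appID, branchName string) string {
	return appID + branchResourceIDSeparator + branchName
}

// BranchParseResourceID splits the composite ID back into its parts.
// SplitN with n=2 keeps any further separators inside the branch name,
// so branch names like "feature/x" survive a round trip.
func BranchParseResourceID(id string) (string, string, error) {
	parts := strings.SplitN(id, branchResourceIDSeparator, 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", fmt.Errorf("unexpected format for ID (%q), expected APPID%sBRANCHNAME", id, branchResourceIDSeparator)
	}
	return parts[0], parts[1], nil
}

func main() {
	appID, branchName, err := BranchParseResourceID(BranchCreateResourceID("d2abc", "feature/x"))
	fmt.Println(appID, branchName, err)
}
```

The `SplitN(..., 2)` choice matters because Git branch names may themselves contain `/`; splitting on every separator would corrupt them.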
+func resourceAwsAmplifyBranchRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).amplifyconn + defaultTagsConfig := meta.(*AWSClient).DefaultTagsConfig + ignoreTagsConfig := meta.(*AWSClient).IgnoreTagsConfig + + appID, branchName, err := tfamplify.BranchParseResourceID(d.Id()) + + if err != nil { + return fmt.Errorf("error parsing Amplify Branch ID: %w", err) + } + + branch, err := finder.BranchByAppIDAndBranchName(conn, appID, branchName) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] Amplify Branch (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return fmt.Errorf("error reading Amplify Branch (%s): %w", d.Id(), err) + } + + d.Set("app_id", appID) + d.Set("arn", branch.BranchArn) + d.Set("associated_resources", aws.StringValueSlice(branch.AssociatedResources)) + d.Set("backend_environment_arn", branch.BackendEnvironmentArn) + d.Set("basic_auth_credentials", branch.BasicAuthCredentials) + d.Set("branch_name", branch.BranchName) + d.Set("custom_domains", aws.StringValueSlice(branch.CustomDomains)) + d.Set("description", branch.Description) + d.Set("destination_branch", branch.DestinationBranch) + d.Set("display_name", branch.DisplayName) + d.Set("enable_auto_build", branch.EnableAutoBuild) + d.Set("enable_basic_auth", branch.EnableBasicAuth) + d.Set("enable_notification", branch.EnableNotification) + d.Set("enable_performance_mode", branch.EnablePerformanceMode) + d.Set("enable_pull_request_preview", branch.EnablePullRequestPreview) + d.Set("environment_variables", aws.StringValueMap(branch.EnvironmentVariables)) + d.Set("framework", branch.Framework) + d.Set("pull_request_environment_name", branch.PullRequestEnvironmentName) + d.Set("source_branch", branch.SourceBranch) + d.Set("stage", branch.Stage) + d.Set("ttl", branch.Ttl) + + tags := keyvaluetags.AmplifyKeyValueTags(branch.Tags).IgnoreAws().IgnoreConfig(ignoreTagsConfig) + + if err := d.Set("tags", 
tags.RemoveDefaultConfig(defaultTagsConfig).Map()); err != nil { + return fmt.Errorf("error setting tags: %w", err) + } + + if err := d.Set("tags_all", tags.Map()); err != nil { + return fmt.Errorf("error setting tags_all: %w", err) + } + + return nil +} + +func resourceAwsAmplifyBranchUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).amplifyconn + + if d.HasChangesExcept("tags", "tags_all") { + appID, branchName, err := tfamplify.BranchParseResourceID(d.Id()) + + if err != nil { + return fmt.Errorf("error parsing Amplify Branch ID: %w", err) + } + + input := &amplify.UpdateBranchInput{ + AppId: aws.String(appID), + BranchName: aws.String(branchName), + } + + if d.HasChange("backend_environment_arn") { + input.BackendEnvironmentArn = aws.String(d.Get("backend_environment_arn").(string)) + } + + if d.HasChange("basic_auth_credentials") { + input.BasicAuthCredentials = aws.String(d.Get("basic_auth_credentials").(string)) + } + + if d.HasChange("description") { + input.Description = aws.String(d.Get("description").(string)) + } + + if d.HasChange("display_name") { + input.DisplayName = aws.String(d.Get("display_name").(string)) + } + + if d.HasChange("enable_auto_build") { + input.EnableAutoBuild = aws.Bool(d.Get("enable_auto_build").(bool)) + } + + if d.HasChange("enable_basic_auth") { + input.EnableBasicAuth = aws.Bool(d.Get("enable_basic_auth").(bool)) + } + + if d.HasChange("enable_notification") { + input.EnableNotification = aws.Bool(d.Get("enable_notification").(bool)) + } + + if d.HasChange("enable_performance_mode") { + input.EnablePerformanceMode = aws.Bool(d.Get("enable_performance_mode").(bool)) + } + + if d.HasChange("enable_pull_request_preview") { + input.EnablePullRequestPreview = aws.Bool(d.Get("enable_pull_request_preview").(bool)) + } + + if d.HasChange("environment_variables") { + if v := d.Get("environment_variables").(map[string]interface{}); len(v) > 0 { + input.EnvironmentVariables = expandStringMap(v) + }
else { + input.EnvironmentVariables = aws.StringMap(map[string]string{"": ""}) + } + } + + if d.HasChange("framework") { + input.Framework = aws.String(d.Get("framework").(string)) + } + + if d.HasChange("pull_request_environment_name") { + input.PullRequestEnvironmentName = aws.String(d.Get("pull_request_environment_name").(string)) + } + + if d.HasChange("stage") { + input.Stage = aws.String(d.Get("stage").(string)) + } + + if d.HasChange("ttl") { + input.Ttl = aws.String(d.Get("ttl").(string)) + } + + _, err = conn.UpdateBranch(input) + + if err != nil { + return fmt.Errorf("error updating Amplify Branch (%s): %w", d.Id(), err) + } + } + + if d.HasChange("tags_all") { + o, n := d.GetChange("tags_all") + if err := keyvaluetags.AmplifyUpdateTags(conn, d.Get("arn").(string), o, n); err != nil { + return fmt.Errorf("error updating tags: %w", err) + } + } + + return resourceAwsAmplifyBranchRead(d, meta) +} + +func resourceAwsAmplifyBranchDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).amplifyconn + + appID, branchName, err := tfamplify.BranchParseResourceID(d.Id()) + + if err != nil { + return fmt.Errorf("error parsing Amplify Branch ID: %w", err) + } + + log.Printf("[DEBUG] Deleting Amplify Branch: %s", d.Id()) + _, err = conn.DeleteBranch(&amplify.DeleteBranchInput{ + AppId: aws.String(appID), + BranchName: aws.String(branchName), + }) + + if tfawserr.ErrCodeEquals(err, amplify.ErrCodeNotFoundException) { + return nil + } + + if err != nil { + return fmt.Errorf("error deleting Amplify Branch (%s): %w", d.Id(), err) + } + + return nil +} diff --git a/aws/resource_aws_amplify_branch_test.go b/aws/resource_aws_amplify_branch_test.go new file mode 100644 index 00000000000..3ba7103ddfd --- /dev/null +++ b/aws/resource_aws_amplify_branch_test.go @@ -0,0 +1,510 @@ +package aws + +import ( + "encoding/base64" + "fmt" + "regexp" + "testing" + + "github.com/aws/aws-sdk-go/service/amplify" + 
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + tfamplify "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/amplify" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/amplify/finder" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/tfresource" +) + +func testAccAWSAmplifyBranch_basic(t *testing.T) { + var branch amplify.Branch + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_branch.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyBranchDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyBranchConfigName(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSAmplifyBranchExists(resourceName, &branch), + testAccMatchResourceAttrRegionalARN(resourceName, "arn", "amplify", regexp.MustCompile(`apps/.+/branches/.+`)), + resource.TestCheckResourceAttr(resourceName, "associated_resources.#", "0"), + resource.TestCheckResourceAttr(resourceName, "backend_environment_arn", ""), + resource.TestCheckResourceAttr(resourceName, "basic_auth_credentials", ""), + resource.TestCheckResourceAttr(resourceName, "branch_name", rName), + resource.TestCheckResourceAttr(resourceName, "custom_domains.#", "0"), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "destination_branch", ""), + resource.TestCheckResourceAttr(resourceName, "display_name", rName), + resource.TestCheckResourceAttr(resourceName, "enable_auto_build", "true"), + resource.TestCheckResourceAttr(resourceName, "enable_basic_auth", "false"), + 
resource.TestCheckResourceAttr(resourceName, "enable_notification", "false"), + resource.TestCheckResourceAttr(resourceName, "enable_performance_mode", "false"), + resource.TestCheckResourceAttr(resourceName, "enable_pull_request_preview", "false"), + resource.TestCheckResourceAttr(resourceName, "environment_variables.%", "0"), + resource.TestCheckResourceAttr(resourceName, "framework", ""), + resource.TestCheckResourceAttr(resourceName, "pull_request_environment_name", ""), + resource.TestCheckResourceAttr(resourceName, "source_branch", ""), + resource.TestCheckResourceAttr(resourceName, "stage", "NONE"), + resource.TestCheckResourceAttr(resourceName, "ttl", "5"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccAWSAmplifyBranch_disappears(t *testing.T) { + var branch amplify.Branch + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_branch.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyBranchDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyBranchConfigName(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyBranchExists(resourceName, &branch), + testAccCheckResourceDisappears(testAccProvider, resourceAwsAmplifyBranch(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccAWSAmplifyBranch_Tags(t *testing.T) { + var branch amplify.Branch + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_branch.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: 
testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyBranchDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyBranchConfigTags1(rName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyBranchExists(resourceName, &branch), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSAmplifyBranchConfigTags2(rName, "key1", "value1updated", "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyBranchExists(resourceName, &branch), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccAWSAmplifyBranchConfigTags1(rName, "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyBranchExists(resourceName, &branch), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + }, + }) +} + +func testAccAWSAmplifyBranch_BasicAuthCredentials(t *testing.T) { + var branch amplify.Branch + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_branch.test" + + credentials1 := base64.StdEncoding.EncodeToString([]byte("username1:password1")) + credentials2 := base64.StdEncoding.EncodeToString([]byte("username2:password2")) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyBranchDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyBranchConfigBasicAuthCredentials(rName, 
credentials1), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyBranchExists(resourceName, &branch), + resource.TestCheckResourceAttr(resourceName, "basic_auth_credentials", credentials1), + resource.TestCheckResourceAttr(resourceName, "enable_basic_auth", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSAmplifyBranchConfigBasicAuthCredentials(rName, credentials2), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyBranchExists(resourceName, &branch), + resource.TestCheckResourceAttr(resourceName, "basic_auth_credentials", credentials2), + resource.TestCheckResourceAttr(resourceName, "enable_basic_auth", "true"), + ), + }, + { + Config: testAccAWSAmplifyBranchConfigName(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyBranchExists(resourceName, &branch), + // Clearing basic_auth_credentials not reflected in API. + // resource.TestCheckResourceAttr(resourceName, "basic_auth_credentials", ""), + resource.TestCheckResourceAttr(resourceName, "enable_basic_auth", "false"), + ), + }, + }, + }) +} + +func testAccAWSAmplifyBranch_EnvironmentVariables(t *testing.T) { + var branch amplify.Branch + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_branch.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyBranchDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyBranchConfigEnvironmentVariables(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyBranchExists(resourceName, &branch), + resource.TestCheckResourceAttr(resourceName, "environment_variables.%", "1"), + resource.TestCheckResourceAttr(resourceName, "environment_variables.ENVVAR1", "1"), + ), + }, + { + ResourceName: 
resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSAmplifyBranchConfigEnvironmentVariablesUpdated(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyBranchExists(resourceName, &branch), + resource.TestCheckResourceAttr(resourceName, "environment_variables.%", "2"), + resource.TestCheckResourceAttr(resourceName, "environment_variables.ENVVAR1", "2"), + resource.TestCheckResourceAttr(resourceName, "environment_variables.ENVVAR2", "2"), + ), + }, + { + Config: testAccAWSAmplifyBranchConfigName(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyBranchExists(resourceName, &branch), + resource.TestCheckResourceAttr(resourceName, "environment_variables.%", "0"), + ), + }, + }, + }) +} + +func testAccAWSAmplifyBranch_OptionalArguments(t *testing.T) { + var branch amplify.Branch + rName := acctest.RandomWithPrefix("tf-acc-test") + environmentName := acctest.RandStringFromCharSet(9, acctest.CharSetAlpha) + resourceName := "aws_amplify_branch.test" + backendEnvironment1ResourceName := "aws_amplify_backend_environment.test1" + backendEnvironment2ResourceName := "aws_amplify_backend_environment.test2" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyBranchDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyBranchConfigOptionalArguments(rName, environmentName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSAmplifyBranchExists(resourceName, &branch), + resource.TestCheckResourceAttrPair(resourceName, "backend_environment_arn", backendEnvironment1ResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "description", "testdescription1"), + resource.TestCheckResourceAttr(resourceName, "display_name", "testdisplayname1"), + 
resource.TestCheckResourceAttr(resourceName, "enable_auto_build", "false"), + resource.TestCheckResourceAttr(resourceName, "enable_notification", "true"), + resource.TestCheckResourceAttr(resourceName, "enable_performance_mode", "true"), + resource.TestCheckResourceAttr(resourceName, "enable_pull_request_preview", "false"), + resource.TestCheckResourceAttr(resourceName, "framework", "React"), + resource.TestCheckResourceAttr(resourceName, "pull_request_environment_name", "testpr1"), + resource.TestCheckResourceAttr(resourceName, "stage", "DEVELOPMENT"), + resource.TestCheckResourceAttr(resourceName, "ttl", "10"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSAmplifyBranchConfigOptionalArgumentsUpdated(rName, environmentName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSAmplifyBranchExists(resourceName, &branch), + resource.TestCheckResourceAttrPair(resourceName, "backend_environment_arn", backendEnvironment2ResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "description", "testdescription2"), + resource.TestCheckResourceAttr(resourceName, "display_name", "testdisplayname2"), + resource.TestCheckResourceAttr(resourceName, "enable_auto_build", "true"), + resource.TestCheckResourceAttr(resourceName, "enable_notification", "false"), + resource.TestCheckResourceAttr(resourceName, "enable_performance_mode", "true"), + resource.TestCheckResourceAttr(resourceName, "enable_pull_request_preview", "true"), + resource.TestCheckResourceAttr(resourceName, "framework", "Angular"), + resource.TestCheckResourceAttr(resourceName, "pull_request_environment_name", "testpr2"), + resource.TestCheckResourceAttr(resourceName, "stage", "EXPERIMENTAL"), + resource.TestCheckResourceAttr(resourceName, "ttl", "15"), + ), + }, + }, + }) +} + +func testAccCheckAWSAmplifyBranchExists(resourceName string, v *amplify.Branch) resource.TestCheckFunc { + return func(s *terraform.State) 
error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Amplify Branch ID is set") + } + + appID, branchName, err := tfamplify.BranchParseResourceID(rs.Primary.ID) + + if err != nil { + return err + } + + conn := testAccProvider.Meta().(*AWSClient).amplifyconn + + branch, err := finder.BranchByAppIDAndBranchName(conn, appID, branchName) + + if err != nil { + return err + } + + *v = *branch + + return nil + } +} + +func testAccCheckAWSAmplifyBranchDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).amplifyconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_amplify_branch" { + continue + } + + appID, branchName, err := tfamplify.BranchParseResourceID(rs.Primary.ID) + + if err != nil { + return err + } + + _, err = finder.BranchByAppIDAndBranchName(conn, appID, branchName) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return fmt.Errorf("Amplify Branch %s still exists", rs.Primary.ID) + } + + return nil +} + +func testAccAWSAmplifyBranchConfigName(rName string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q +} + +resource "aws_amplify_branch" "test" { + app_id = aws_amplify_app.test.id + branch_name = %[1]q +} +`, rName) +} + +func testAccAWSAmplifyBranchConfigTags1(rName, tagKey1, tagValue1 string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q +} + +resource "aws_amplify_branch" "test" { + app_id = aws_amplify_app.test.id + branch_name = %[1]q + + tags = { + %[2]q = %[3]q + } +} +`, rName, tagKey1, tagValue1) +} + +func testAccAWSAmplifyBranchConfigTags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q +} + +resource "aws_amplify_branch" "test" { + app_id = aws_amplify_app.test.id + 
branch_name = %[1]q + + tags = { + %[2]q = %[3]q + %[4]q = %[5]q + } +} +`, rName, tagKey1, tagValue1, tagKey2, tagValue2) +} + +func testAccAWSAmplifyBranchConfigBasicAuthCredentials(rName, basicAuthCredentials string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q +} + +resource "aws_amplify_branch" "test" { + app_id = aws_amplify_app.test.id + branch_name = %[1]q + + basic_auth_credentials = %[2]q + enable_basic_auth = true +} +`, rName, basicAuthCredentials) +} + +func testAccAWSAmplifyBranchConfigEnvironmentVariables(rName string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q +} + +resource "aws_amplify_branch" "test" { + app_id = aws_amplify_app.test.id + branch_name = %[1]q + + environment_variables = { + ENVVAR1 = "1" + } +} +`, rName) +} + +func testAccAWSAmplifyBranchConfigEnvironmentVariablesUpdated(rName string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q +} + +resource "aws_amplify_branch" "test" { + app_id = aws_amplify_app.test.id + branch_name = %[1]q + + environment_variables = { + ENVVAR1 = "2", + ENVVAR2 = "2" + } +} +`, rName) +} + +func testAccAWSAmplifyBranchConfigOptionalArguments(rName, environmentName string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q +} + +resource "aws_amplify_backend_environment" "test1" { + app_id = aws_amplify_app.test.id + environment_name = "%[2]sa" +} + +resource "aws_amplify_backend_environment" "test2" { + app_id = aws_amplify_app.test.id + environment_name = "%[2]sb" +} + +resource "aws_amplify_branch" "test" { + app_id = aws_amplify_app.test.id + branch_name = %[1]q + + backend_environment_arn = aws_amplify_backend_environment.test1.arn + description = "testdescription1" + display_name = "testdisplayname1" + enable_auto_build = false + enable_notification = true + enable_performance_mode = true + enable_pull_request_preview = false + framework = "React" + 
pull_request_environment_name = "testpr1" + stage = "DEVELOPMENT" + ttl = "10" +} +`, rName, environmentName) +} + +func testAccAWSAmplifyBranchConfigOptionalArgumentsUpdated(rName, environmentName string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q +} + +resource "aws_amplify_backend_environment" "test1" { + app_id = aws_amplify_app.test.id + environment_name = "%[2]sa" +} + +resource "aws_amplify_backend_environment" "test2" { + app_id = aws_amplify_app.test.id + environment_name = "%[2]sb" +} + +resource "aws_amplify_branch" "test" { + app_id = aws_amplify_app.test.id + branch_name = %[1]q + + backend_environment_arn = aws_amplify_backend_environment.test2.arn + description = "testdescription2" + display_name = "testdisplayname2" + enable_auto_build = true + enable_notification = false + enable_performance_mode = true + enable_pull_request_preview = true + framework = "Angular" + pull_request_environment_name = "testpr2" + stage = "EXPERIMENTAL" + ttl = "15" +} +`, rName, environmentName) +} diff --git a/aws/resource_aws_amplify_domain_association.go b/aws/resource_aws_amplify_domain_association.go new file mode 100644 index 00000000000..f1d7f2a86c6 --- /dev/null +++ b/aws/resource_aws_amplify_domain_association.go @@ -0,0 +1,305 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/amplify" + "github.com/hashicorp/aws-sdk-go-base/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + tfamplify "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/amplify" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/amplify/finder" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/amplify/waiter" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/tfresource" +) + +func 
resourceAwsAmplifyDomainAssociation() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsAmplifyDomainAssociationCreate, + Read: resourceAwsAmplifyDomainAssociationRead, + Update: resourceAwsAmplifyDomainAssociationUpdate, + Delete: resourceAwsAmplifyDomainAssociationDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "app_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "arn": { + Type: schema.TypeString, + Computed: true, + }, + + "certificate_verification_dns_record": { + Type: schema.TypeString, + Computed: true, + }, + + "domain_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 255), + }, + + "sub_domain": { + Type: schema.TypeSet, + Required: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "branch_name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 255), + }, + "dns_record": { + Type: schema.TypeString, + Computed: true, + }, + "prefix": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(0, 255), + }, + "verified": { + Type: schema.TypeBool, + Computed: true, + }, + }, + }, + }, + + "wait_for_verification": { + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + }, + } +} + +func resourceAwsAmplifyDomainAssociationCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).amplifyconn + + appID := d.Get("app_id").(string) + domainName := d.Get("domain_name").(string) + id := tfamplify.DomainAssociationCreateResourceID(appID, domainName) + + input := &amplify.CreateDomainAssociationInput{ + AppId: aws.String(appID), + DomainName: aws.String(domainName), + SubDomainSettings: expandAmplifySubDomainSettings(d.Get("sub_domain").(*schema.Set).List()), + } + + log.Printf("[DEBUG] Creating Amplify Domain Association: %s", input) + 
_, err := conn.CreateDomainAssociation(input) + + if err != nil { + return fmt.Errorf("error creating Amplify Domain Association (%s): %w", id, err) + } + + d.SetId(id) + + if _, err := waiter.DomainAssociationCreated(conn, appID, domainName); err != nil { + return fmt.Errorf("error waiting for Amplify Domain Association (%s) to create: %w", d.Id(), err) + } + + if d.Get("wait_for_verification").(bool) { + if _, err := waiter.DomainAssociationVerified(conn, appID, domainName); err != nil { + return fmt.Errorf("error waiting for Amplify Domain Association (%s) to verify: %w", d.Id(), err) + } + } + + return resourceAwsAmplifyDomainAssociationRead(d, meta) +} + +func resourceAwsAmplifyDomainAssociationRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).amplifyconn + + appID, domainName, err := tfamplify.DomainAssociationParseResourceID(d.Id()) + + if err != nil { + return fmt.Errorf("error parsing Amplify Domain Association ID: %w", err) + } + + domainAssociation, err := finder.DomainAssociationByAppIDAndDomainName(conn, appID, domainName) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] Amplify Domain Association (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return fmt.Errorf("error reading Amplify Domain Association (%s): %w", d.Id(), err) + } + + d.Set("app_id", appID) + d.Set("arn", domainAssociation.DomainAssociationArn) + d.Set("certificate_verification_dns_record", domainAssociation.CertificateVerificationDNSRecord) + d.Set("domain_name", domainAssociation.DomainName) + if err := d.Set("sub_domain", flattenAmplifySubDomains(domainAssociation.SubDomains)); err != nil { + return fmt.Errorf("error setting sub_domain: %w", err) + } + + return nil +} + +func resourceAwsAmplifyDomainAssociationUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).amplifyconn + + appID, domainName, err := 
tfamplify.DomainAssociationParseResourceID(d.Id()) + + if err != nil { + return fmt.Errorf("error parsing Amplify Domain Association ID: %w", err) + } + + if d.HasChange("sub_domain") { + input := &amplify.UpdateDomainAssociationInput{ + AppId: aws.String(appID), + DomainName: aws.String(domainName), + SubDomainSettings: expandAmplifySubDomainSettings(d.Get("sub_domain").(*schema.Set).List()), + } + + log.Printf("[DEBUG] Updating Amplify Domain Association: %s", input) + _, err := conn.UpdateDomainAssociation(input) + + if err != nil { + return fmt.Errorf("error updating Amplify Domain Association (%s): %w", d.Id(), err) + } + } + + if d.Get("wait_for_verification").(bool) { + if _, err := waiter.DomainAssociationVerified(conn, appID, domainName); err != nil { + return fmt.Errorf("error waiting for Amplify Domain Association (%s) to verify: %w", d.Id(), err) + } + } + + return resourceAwsAmplifyDomainAssociationRead(d, meta) +} + +func resourceAwsAmplifyDomainAssociationDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).amplifyconn + + appID, domainName, err := tfamplify.DomainAssociationParseResourceID(d.Id()) + + if err != nil { + return fmt.Errorf("error parsing Amplify Domain Association ID: %w", err) + } + + log.Printf("[DEBUG] Deleting Amplify Domain Association: %s", d.Id()) + _, err = conn.DeleteDomainAssociation(&amplify.DeleteDomainAssociationInput{ + AppId: aws.String(appID), + DomainName: aws.String(domainName), + }) + + if tfawserr.ErrCodeEquals(err, amplify.ErrCodeNotFoundException) { + return nil + } + + if err != nil { + return fmt.Errorf("error deleting Amplify Domain Association (%s): %w", d.Id(), err) + } + + return nil +} + +func expandAmplifySubDomainSetting(tfMap map[string]interface{}) *amplify.SubDomainSetting { + if tfMap == nil { + return nil + } + + apiObject := &amplify.SubDomainSetting{} + + if v, ok := tfMap["branch_name"].(string); ok && v != "" { + apiObject.BranchName = aws.String(v) + } + + // Empty prefix 
is allowed. + if v, ok := tfMap["prefix"].(string); ok { + apiObject.Prefix = aws.String(v) + } + + return apiObject +} + +func expandAmplifySubDomainSettings(tfList []interface{}) []*amplify.SubDomainSetting { + if len(tfList) == 0 { + return nil + } + + var apiObjects []*amplify.SubDomainSetting + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandAmplifySubDomainSetting(tfMap) + + if apiObject == nil { + continue + } + + apiObjects = append(apiObjects, apiObject) + } + + return apiObjects +} + +func flattenAmplifySubDomain(apiObject *amplify.SubDomain) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.DnsRecord; v != nil { + tfMap["dns_record"] = aws.StringValue(v) + } + + if v := apiObject.SubDomainSetting; v != nil { + apiObject := v + + if v := apiObject.BranchName; v != nil { + tfMap["branch_name"] = aws.StringValue(v) + } + + if v := apiObject.Prefix; v != nil { + tfMap["prefix"] = aws.StringValue(v) + } + } + + if v := apiObject.Verified; v != nil { + tfMap["verified"] = aws.BoolValue(v) + } + + return tfMap +} + +func flattenAmplifySubDomains(apiObjects []*amplify.SubDomain) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + if apiObject == nil { + continue + } + + tfList = append(tfList, flattenAmplifySubDomain(apiObject)) + } + + return tfList +} diff --git a/aws/resource_aws_amplify_domain_association_test.go b/aws/resource_aws_amplify_domain_association_test.go new file mode 100644 index 00000000000..2ffe7d1d0a2 --- /dev/null +++ b/aws/resource_aws_amplify_domain_association_test.go @@ -0,0 +1,266 @@ +package aws + +import ( + "fmt" + "os" + "regexp" + "testing" + + "github.com/aws/aws-sdk-go/service/amplify" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + 
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + tfamplify "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/amplify" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/amplify/finder" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/tfresource" +) + +func testAccAWSAmplifyDomainAssociation_basic(t *testing.T) { + key := "AMPLIFY_DOMAIN_NAME" + domainName := os.Getenv(key) + if domainName == "" { + t.Skipf("Environment variable %s is not set", key) + } + + var domain amplify.DomainAssociation + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_domain_association.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyDomainAssociationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyDomainAssociationConfig(rName, domainName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyDomainAssociationExists(resourceName, &domain), + testAccMatchResourceAttrRegionalARN(resourceName, "arn", "amplify", regexp.MustCompile(`apps/.+/domains/.+`)), + resource.TestCheckResourceAttr(resourceName, "domain_name", domainName), + resource.TestCheckResourceAttr(resourceName, "sub_domain.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "sub_domain.*", map[string]string{ + "branch_name": rName, + "prefix": "", + }), + resource.TestCheckResourceAttr(resourceName, "wait_for_verification", "false"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"wait_for_verification"}, + }, + }, + }) +} + +func testAccAWSAmplifyDomainAssociation_disappears(t *testing.T) { + key := 
"AMPLIFY_DOMAIN_NAME" + domainName := os.Getenv(key) + if domainName == "" { + t.Skipf("Environment variable %s is not set", key) + } + + var domain amplify.DomainAssociation + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_domain_association.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyDomainAssociationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyDomainAssociationConfig(rName, domainName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyDomainAssociationExists(resourceName, &domain), + testAccCheckResourceDisappears(testAccProvider, resourceAwsAmplifyDomainAssociation(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccAWSAmplifyDomainAssociation_update(t *testing.T) { + key := "AMPLIFY_DOMAIN_NAME" + domainName := os.Getenv(key) + if domainName == "" { + t.Skipf("Environment variable %s is not set", key) + } + + var domain amplify.DomainAssociation + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_domain_association.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyDomainAssociationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyDomainAssociationConfig(rName, domainName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyDomainAssociationExists(resourceName, &domain), + testAccMatchResourceAttrRegionalARN(resourceName, "arn", "amplify", regexp.MustCompile(`apps/.+/domains/.+`)), + resource.TestCheckResourceAttr(resourceName, "domain_name", domainName), + 
resource.TestCheckResourceAttr(resourceName, "sub_domain.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "sub_domain.*", map[string]string{ + "branch_name": rName, + "prefix": "", + }), + resource.TestCheckResourceAttr(resourceName, "wait_for_verification", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"wait_for_verification"}, + }, + { + Config: testAccAWSAmplifyDomainAssociationConfigUpdated(rName, domainName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSAmplifyDomainAssociationExists(resourceName, &domain), + testAccMatchResourceAttrRegionalARN(resourceName, "arn", "amplify", regexp.MustCompile(`apps/.+/domains/.+`)), + resource.TestCheckResourceAttr(resourceName, "domain_name", domainName), + resource.TestCheckResourceAttr(resourceName, "sub_domain.#", "2"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "sub_domain.*", map[string]string{ + "branch_name": rName, + "prefix": "", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "sub_domain.*", map[string]string{ + "branch_name": fmt.Sprintf("%s-2", rName), + "prefix": "www", + }), + resource.TestCheckResourceAttr(resourceName, "wait_for_verification", "true"), + ), + }, + }, + }) +} + +func testAccCheckAWSAmplifyDomainAssociationExists(resourceName string, v *amplify.DomainAssociation) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Amplify Domain Association ID is set") + } + + appID, domainName, err := tfamplify.DomainAssociationParseResourceID(rs.Primary.ID) + + if err != nil { + return err + } + + conn := testAccProvider.Meta().(*AWSClient).amplifyconn + + domainAssociation, err := finder.DomainAssociationByAppIDAndDomainName(conn, appID, domainName) + + if err != nil { + 
return err + } + + *v = *domainAssociation + + return nil + } +} + +func testAccCheckAWSAmplifyDomainAssociationDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).amplifyconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_amplify_domain_association" { + continue + } + + appID, domainName, err := tfamplify.DomainAssociationParseResourceID(rs.Primary.ID) + + if err != nil { + return err + } + + _, err = finder.DomainAssociationByAppIDAndDomainName(conn, appID, domainName) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return fmt.Errorf("Amplify Domain Association %s still exists", rs.Primary.ID) + } + + return nil +} + +func testAccAWSAmplifyDomainAssociationConfig(rName, domainName string, waitForVerification bool) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q +} + +resource "aws_amplify_branch" "test" { + app_id = aws_amplify_app.test.id + branch_name = %[1]q +} + +resource "aws_amplify_domain_association" "test" { + app_id = aws_amplify_app.test.id + domain_name = %[2]q + + sub_domain { + branch_name = aws_amplify_branch.test.branch_name + prefix = "" + } + + wait_for_verification = %[3]t +} +`, rName, domainName, waitForVerification) +} + +func testAccAWSAmplifyDomainAssociationConfigUpdated(rName, domainName string, waitForVerification bool) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q +} + +resource "aws_amplify_branch" "test" { + app_id = aws_amplify_app.test.id + branch_name = %[1]q +} + +resource "aws_amplify_branch" "test2" { + app_id = aws_amplify_app.test.id + branch_name = "%[1]s-2" +} + +resource "aws_amplify_domain_association" "test" { + app_id = aws_amplify_app.test.id + domain_name = %[2]q + + sub_domain { + branch_name = aws_amplify_branch.test.branch_name + prefix = "" + } + + sub_domain { + branch_name = aws_amplify_branch.test2.branch_name + prefix = "www" + } + + 
wait_for_verification = %[3]t +} +`, rName, domainName, waitForVerification) +} diff --git a/aws/resource_aws_amplify_test.go b/aws/resource_aws_amplify_test.go new file mode 100644 index 00000000000..d4b48560d04 --- /dev/null +++ b/aws/resource_aws_amplify_test.go @@ -0,0 +1,63 @@ +package aws + +import ( + "testing" + "time" +) + +// Serialize to limit API rate-limit exceeded errors. +func TestAccAWSAmplify_serial(t *testing.T) { + testCases := map[string]map[string]func(t *testing.T){ + "App": { + "basic": testAccAWSAmplifyApp_basic, + "disappears": testAccAWSAmplifyApp_disappears, + "Tags": testAccAWSAmplifyApp_Tags, + "AutoBranchCreationConfig": testAccAWSAmplifyApp_AutoBranchCreationConfig, + "BasicAuthCredentials": testAccAWSAmplifyApp_BasicAuthCredentials, + "BuildSpec": testAccAWSAmplifyApp_BuildSpec, + "CustomRules": testAccAWSAmplifyApp_CustomRules, + "Description": testAccAWSAmplifyApp_Description, + "EnvironmentVariables": testAccAWSAmplifyApp_EnvironmentVariables, + "IamServiceRole": testAccAWSAmplifyApp_IamServiceRole, + "Name": testAccAWSAmplifyApp_Name, + "Repository": testAccAWSAmplifyApp_Repository, + }, + "BackendEnvironment": { + "basic": testAccAWSAmplifyBackendEnvironment_basic, + "disappears": testAccAWSAmplifyBackendEnvironment_disappears, + "DeploymentArtifacts_StackName": testAccAWSAmplifyBackendEnvironment_DeploymentArtifacts_StackName, + }, + "Branch": { + "basic": testAccAWSAmplifyBranch_basic, + "disappears": testAccAWSAmplifyBranch_disappears, + "Tags": testAccAWSAmplifyBranch_Tags, + "BasicAuthCredentials": testAccAWSAmplifyBranch_BasicAuthCredentials, + "EnvironmentVariables": testAccAWSAmplifyBranch_EnvironmentVariables, + "OptionalArguments": testAccAWSAmplifyBranch_OptionalArguments, + }, + "DomainAssociation": { + "basic": testAccAWSAmplifyDomainAssociation_basic, + "disappears": testAccAWSAmplifyDomainAssociation_disappears, + "update": testAccAWSAmplifyDomainAssociation_update, + }, + "Webhook": { + "basic": 
testAccAWSAmplifyWebhook_basic, + "disappears": testAccAWSAmplifyWebhook_disappears, + "update": testAccAWSAmplifyWebhook_update, + }, + } + + for group, m := range testCases { + m := m + t.Run(group, func(t *testing.T) { + for name, tc := range m { + tc := tc + t.Run(name, func(t *testing.T) { + tc(t) + // Explicitly sleep between tests. + time.Sleep(5 * time.Second) + }) + } + }) + } +} diff --git a/aws/resource_aws_amplify_webhook.go b/aws/resource_aws_amplify_webhook.go new file mode 100644 index 00000000000..fe66597b64d --- /dev/null +++ b/aws/resource_aws_amplify_webhook.go @@ -0,0 +1,164 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + "strings" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" + "github.com/aws/aws-sdk-go/service/amplify" + "github.com/hashicorp/aws-sdk-go-base/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/amplify/finder" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/tfresource" +) + +func resourceAwsAmplifyWebhook() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsAmplifyWebhookCreate, + Read: resourceAwsAmplifyWebhookRead, + Update: resourceAwsAmplifyWebhookUpdate, + Delete: resourceAwsAmplifyWebhookDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + + "app_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "branch_name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringMatch(regexp.MustCompile(`^[0-9A-Za-z/_.-]{1,255}$`), "should not be more than 255 letters, numbers, and the symbols /_.-"), + }, + + "description": { + Type: schema.TypeString, + Optional: true, + }, + + "url": { + Type: 
schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceAwsAmplifyWebhookCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).amplifyconn + + input := &amplify.CreateWebhookInput{ + AppId: aws.String(d.Get("app_id").(string)), + BranchName: aws.String(d.Get("branch_name").(string)), + } + + if v, ok := d.GetOk("description"); ok { + input.Description = aws.String(v.(string)) + } + + log.Printf("[DEBUG] Creating Amplify Webhook: %s", input) + output, err := conn.CreateWebhook(input) + + if err != nil { + return fmt.Errorf("error creating Amplify Webhook: %w", err) + } + + d.SetId(aws.StringValue(output.Webhook.WebhookId)) + + return resourceAwsAmplifyWebhookRead(d, meta) +} + +func resourceAwsAmplifyWebhookRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).amplifyconn + + webhook, err := finder.WebhookByID(conn, d.Id()) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] Amplify Webhook (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return fmt.Errorf("error reading Amplify Webhook (%s): %w", d.Id(), err) + } + + webhookArn := aws.StringValue(webhook.WebhookArn) + arn, err := arn.Parse(webhookArn) + + if err != nil { + return fmt.Errorf("error parsing %q: %w", webhookArn, err) + } + + // arn:${Partition}:amplify:${Region}:${Account}:apps/${AppId}/webhooks/${WebhookId} + parts := strings.Split(arn.Resource, "/") + + if len(parts) != 4 { + return fmt.Errorf("unexpected format for ARN resource (%s)", arn.Resource) + } + + d.Set("app_id", parts[1]) + d.Set("arn", webhookArn) + d.Set("branch_name", webhook.BranchName) + d.Set("description", webhook.Description) + d.Set("url", webhook.WebhookUrl) + + return nil +} + +func resourceAwsAmplifyWebhookUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).amplifyconn + + input := &amplify.UpdateWebhookInput{ + WebhookId: aws.String(d.Id()), + } + + if 
d.HasChange("branch_name") { + input.BranchName = aws.String(d.Get("branch_name").(string)) + } + + if d.HasChange("description") { + input.Description = aws.String(d.Get("description").(string)) + } + + log.Printf("[DEBUG] Updating Amplify Webhook: %s", input) + _, err := conn.UpdateWebhook(input) + + if err != nil { + return fmt.Errorf("error updating Amplify Webhook (%s): %w", d.Id(), err) + } + + return resourceAwsAmplifyWebhookRead(d, meta) +} + +func resourceAwsAmplifyWebhookDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).amplifyconn + + log.Printf("[DEBUG] Deleting Amplify Webhook: %s", d.Id()) + _, err := conn.DeleteWebhook(&amplify.DeleteWebhookInput{ + WebhookId: aws.String(d.Id()), + }) + + if tfawserr.ErrCodeEquals(err, amplify.ErrCodeNotFoundException) { + return nil + } + + if err != nil { + return fmt.Errorf("error deleting Amplify Webhook (%s): %w", d.Id(), err) + } + + return nil +} diff --git a/aws/resource_aws_amplify_webhook_test.go b/aws/resource_aws_amplify_webhook_test.go new file mode 100644 index 00000000000..18beddc6fb4 --- /dev/null +++ b/aws/resource_aws_amplify_webhook_test.go @@ -0,0 +1,218 @@ +package aws + +import ( + "fmt" + "regexp" + "testing" + + "github.com/aws/aws-sdk-go/service/amplify" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/amplify/finder" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/tfresource" +) + +func testAccAWSAmplifyWebhook_basic(t *testing.T) { + var webhook amplify.Webhook + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_webhook.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, 
amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyWebhookDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyWebhookConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSAmplifyWebhookExists(resourceName, &webhook), + testAccMatchResourceAttrRegionalARN(resourceName, "arn", "amplify", regexp.MustCompile(`apps/.+/webhooks/.+`)), + resource.TestCheckResourceAttr(resourceName, "branch_name", rName), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestMatchResourceAttr(resourceName, "url", regexp.MustCompile(fmt.Sprintf(`^https://webhooks.amplify.%s.%s/.+$`, testAccGetRegion(), testAccGetPartitionDNSSuffix()))), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccAWSAmplifyWebhook_disappears(t *testing.T) { + var webhook amplify.Webhook + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_webhook.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyWebhookDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyWebhookConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSAmplifyWebhookExists(resourceName, &webhook), + testAccCheckResourceDisappears(testAccProvider, resourceAwsAmplifyWebhook(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccAWSAmplifyWebhook_update(t *testing.T) { + var webhook amplify.Webhook + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_amplify_webhook.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSAmplify(t) }, + ErrorCheck: testAccErrorCheck(t, amplify.EndpointsID), + 
Providers: testAccProviders, + CheckDestroy: testAccCheckAWSAmplifyWebhookDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSAmplifyWebhookConfigDescription(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSAmplifyWebhookExists(resourceName, &webhook), + resource.TestCheckResourceAttr(resourceName, "branch_name", fmt.Sprintf("%s-1", rName)), + resource.TestCheckResourceAttr(resourceName, "description", "testdescription1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSAmplifyWebhookConfigDescriptionUpdated(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSAmplifyWebhookExists(resourceName, &webhook), + resource.TestCheckResourceAttr(resourceName, "branch_name", fmt.Sprintf("%s-2", rName)), + resource.TestCheckResourceAttr(resourceName, "description", "testdescription2"), + ), + }, + }, + }) +} + +func testAccCheckAWSAmplifyWebhookExists(resourceName string, v *amplify.Webhook) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No Amplify Webhook ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).amplifyconn + + webhook, err := finder.WebhookByID(conn, rs.Primary.ID) + + if err != nil { + return err + } + + *v = *webhook + + return nil + } +} + +func testAccCheckAWSAmplifyWebhookDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).amplifyconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_amplify_webhook" { + continue + } + + _, err := finder.WebhookByID(conn, rs.Primary.ID) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return fmt.Errorf("Amplify Webhook %s still exists", rs.Primary.ID) + } + + return nil +} + +func 
testAccAWSAmplifyWebhookConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q +} + +resource "aws_amplify_branch" "test" { + app_id = aws_amplify_app.test.id + branch_name = %[1]q +} + +resource "aws_amplify_webhook" "test" { + app_id = aws_amplify_app.test.id + branch_name = aws_amplify_branch.test.branch_name +} +`, rName) +} + +func testAccAWSAmplifyWebhookConfigDescription(rName string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q +} + +resource "aws_amplify_branch" "test1" { + app_id = aws_amplify_app.test.id + branch_name = "%[1]s-1" +} + +resource "aws_amplify_branch" "test2" { + app_id = aws_amplify_app.test.id + branch_name = "%[1]s-2" +} + +resource "aws_amplify_webhook" "test" { + app_id = aws_amplify_app.test.id + branch_name = aws_amplify_branch.test1.branch_name + description = "testdescription1" +} +`, rName) +} + +func testAccAWSAmplifyWebhookConfigDescriptionUpdated(rName string) string { + return fmt.Sprintf(` +resource "aws_amplify_app" "test" { + name = %[1]q +} + +resource "aws_amplify_branch" "test1" { + app_id = aws_amplify_app.test.id + branch_name = "%[1]s-1" +} + +resource "aws_amplify_branch" "test2" { + app_id = aws_amplify_app.test.id + branch_name = "%[1]s-2" +} + +resource "aws_amplify_webhook" "test" { + app_id = aws_amplify_app.test.id + branch_name = aws_amplify_branch.test2.branch_name + description = "testdescription2" +} +`, rName) +} diff --git a/aws/resource_aws_apprunner_service.go b/aws/resource_aws_apprunner_service.go index e953e03f08f..97ba80a7ea6 100644 --- a/aws/resource_aws_apprunner_service.go +++ b/aws/resource_aws_apprunner_service.go @@ -10,10 +10,13 @@ import ( "github.com/aws/aws-sdk-go/service/apprunner" "github.com/hashicorp/aws-sdk-go-base/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" 
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/terraform-providers/terraform-provider-aws/aws/internal/keyvaluetags" "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/apprunner/waiter" + iamwaiter "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/iam/waiter" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/tfresource" ) func resourceAwsAppRunnerService() *schema.Resource { @@ -123,6 +126,10 @@ func resourceAwsAppRunnerService() *schema.Resource { Optional: true, Default: "1024", ValidateFunc: validation.StringMatch(regexp.MustCompile(`1024|2048|(1|2) vCPU`), ""), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + // App Runner API always returns the amount in multiples of 1024 units + return (old == "1024" && new == "1 vCPU") || (old == "2048" && new == "2 vCPU") + }, }, "instance_role_arn": { Type: schema.TypeString, @@ -134,6 +141,10 @@ func resourceAwsAppRunnerService() *schema.Resource { Optional: true, Default: "2048", ValidateFunc: validation.StringMatch(regexp.MustCompile(`2048|3072|4096|(2|3|4) GB`), ""), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + // App Runner API always returns the amount in MB + return (old == "2048" && new == "2 GB") || (old == "3072" && new == "3 GB") || (old == "4096" && new == "4 GB") + }, }, }, }, @@ -366,7 +377,26 @@ func resourceAwsAppRunnerServiceCreate(ctx context.Context, d *schema.ResourceDa input.InstanceConfiguration = expandAppRunnerServiceInstanceConfiguration(v.([]interface{})) } - output, err := conn.CreateServiceWithContext(ctx, input) + var output *apprunner.CreateServiceOutput + + err := resource.RetryContext(ctx, iamwaiter.PropagationTimeout, func() *resource.RetryError { + var err error + output, err = conn.CreateServiceWithContext(ctx, input) + + if tfawserr.ErrMessageContains(err, 
apprunner.ErrCodeInvalidRequestException, "Error in assuming instance role") { + return resource.RetryableError(err) + } + + if err != nil { + return resource.NonRetryableError(err) + } + + return nil + }) + + if tfresource.TimedOut(err) { + output, err = conn.CreateServiceWithContext(ctx, input) + } if err != nil { return diag.FromErr(fmt.Errorf("error creating App Runner Service (%s): %w", serviceName, err)) @@ -680,6 +710,10 @@ func expandAppRunnerServiceAuthenticationConfiguration(l []interface{}) *apprunn result.AccessRoleArn = aws.String(v) } + if v, ok := tfMap["connection_arn"].(string); ok && v != "" { + result.ConnectionArn = aws.String(v) + } + return result } @@ -700,6 +734,14 @@ func expandAppRunnerServiceImageConfiguration(l []interface{}) *apprunner.ImageC result.Port = aws.String(v) } + if v, ok := tfMap["runtime_environment_variables"].(map[string]interface{}); ok && len(v) > 0 { + result.RuntimeEnvironmentVariables = expandStringMap(v) + } + + if v, ok := tfMap["start_command"].(string); ok && v != "" { + result.StartCommand = aws.String(v) + } + return result } @@ -808,6 +850,10 @@ func expandAppRunnerServiceCodeConfigurationValues(l []interface{}) *apprunner.C result.Runtime = aws.String(v) } + if v, ok := tfMap["runtime_environment_variables"].(map[string]interface{}); ok && len(v) > 0 { + result.RuntimeEnvironmentVariables = expandStringMap(v) + } + if v, ok := tfMap["start_command"].(string); ok && v != "" { result.StartCommand = aws.String(v) } diff --git a/aws/resource_aws_apprunner_service_test.go b/aws/resource_aws_apprunner_service_test.go index 434c3692655..7f3d0ce996e 100644 --- a/aws/resource_aws_apprunner_service_test.go +++ b/aws/resource_aws_apprunner_service_test.go @@ -93,6 +93,21 @@ func TestAccAwsAppRunnerService_ImageRepository_basic(t *testing.T) { testAccCheckAwsAppRunnerServiceExists(resourceName), resource.TestCheckResourceAttr(resourceName, "service_name", rName), testAccMatchResourceAttrRegionalARN(resourceName, "arn", 
"apprunner", regexp.MustCompile(fmt.Sprintf(`service/%s/.+`, rName))), + testAccMatchResourceAttrRegionalARN(resourceName, "auto_scaling_configuration_arn", "apprunner", regexp.MustCompile(`autoscalingconfiguration/DefaultConfiguration/1/.+`)), + resource.TestCheckResourceAttr(resourceName, "health_check_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "health_check_configuration.0.protocol", apprunner.HealthCheckProtocolTcp), + resource.TestCheckResourceAttr(resourceName, "health_check_configuration.0.path", "/"), + // Only check the following attribute values for health_check and instance configurations + // are set as their defaults differ in the API documentation and API itself + resource.TestCheckResourceAttrSet(resourceName, "health_check_configuration.0.interval"), + resource.TestCheckResourceAttrSet(resourceName, "health_check_configuration.0.timeout"), + resource.TestCheckResourceAttrSet(resourceName, "health_check_configuration.0.healthy_threshold"), + resource.TestCheckResourceAttrSet(resourceName, "health_check_configuration.0.unhealthy_threshold"), + resource.TestCheckResourceAttr(resourceName, "instance_configuration.#", "1"), + resource.TestCheckResourceAttrSet(resourceName, "instance_configuration.0.cpu"), + resource.TestCheckResourceAttrSet(resourceName, "instance_configuration.0.memory"), + resource.TestCheckResourceAttrSet(resourceName, "service_id"), + resource.TestCheckResourceAttrSet(resourceName, "service_url"), resource.TestCheckResourceAttr(resourceName, "source_configuration.#", "1"), resource.TestCheckResourceAttr(resourceName, "source_configuration.0.auto_deployments_enabled", "false"), resource.TestCheckResourceAttr(resourceName, "source_configuration.0.image_repository.#", "1"), @@ -100,6 +115,8 @@ func TestAccAwsAppRunnerService_ImageRepository_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "source_configuration.0.image_repository.0.image_configuration.0.port", "8000"), 
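The instance-configuration checks above rely on the `DiffSuppressFunc` added to `cpu` and `memory` earlier in this diff: the App Runner API echoes values back in numeric units ("1024") even when the configuration uses the human-readable form ("1 vCPU"). A minimal standalone sketch of that equivalence check — `suppressCPUDiff` is a hypothetical name for illustration, not provider code:

```go
package main

import "fmt"

// suppressCPUDiff mirrors the idea behind the DiffSuppressFunc on
// instance_configuration.cpu: the App Runner API reports CPU in
// multiples of 1024 units, so a state value of "1024" and a config
// value of "1 vCPU" describe the same setting and should not diff.
func suppressCPUDiff(old, new string) bool {
	return (old == "1024" && new == "1 vCPU") || (old == "2048" && new == "2 vCPU")
}

func main() {
	fmt.Println(suppressCPUDiff("1024", "1 vCPU")) // equivalent forms: suppressed
	fmt.Println(suppressCPUDiff("2048", "4 vCPU")) // genuinely different: not suppressed
}
```

The same pattern is applied to `memory`, where "2048"/"2 GB", "3072"/"3 GB", and "4096"/"4 GB" are treated as equivalent pairs.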
resource.TestCheckResourceAttr(resourceName, "source_configuration.0.image_repository.0.image_identifier", "public.ecr.aws/jg/hello:latest"), resource.TestCheckResourceAttr(resourceName, "source_configuration.0.image_repository.0.image_repository_type", apprunner.ImageRepositoryTypeEcrPublic), + resource.TestCheckResourceAttr(resourceName, "status", apprunner.ServiceStatusRunning), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), ), }, { @@ -245,9 +262,9 @@ func TestAccAwsAppRunnerService_ImageRepository_InstanceConfiguration(t *testing Check: resource.ComposeTestCheckFunc( testAccCheckAwsAppRunnerServiceExists(resourceName), resource.TestCheckResourceAttr(resourceName, "instance_configuration.#", "1"), - resource.TestCheckResourceAttr(resourceName, "instance_configuration.0.cpu", "1 vCPU"), + resource.TestCheckResourceAttr(resourceName, "instance_configuration.0.cpu", "1024"), resource.TestCheckResourceAttrPair(resourceName, "instance_configuration.0.instance_role_arn", roleResourceName, "arn"), - resource.TestCheckResourceAttr(resourceName, "instance_configuration.0.memory", "3 GB"), + resource.TestCheckResourceAttr(resourceName, "instance_configuration.0.memory", "3072"), ), }, { @@ -260,7 +277,7 @@ func TestAccAwsAppRunnerService_ImageRepository_InstanceConfiguration(t *testing Check: resource.ComposeTestCheckFunc( testAccCheckAwsAppRunnerServiceExists(resourceName), resource.TestCheckResourceAttr(resourceName, "instance_configuration.#", "1"), - resource.TestCheckResourceAttr(resourceName, "instance_configuration.0.cpu", "2 vCPU"), + resource.TestCheckResourceAttr(resourceName, "instance_configuration.0.cpu", "2048"), resource.TestCheckResourceAttrPair(resourceName, "instance_configuration.0.instance_role_arn", roleResourceName, "arn"), resource.TestCheckResourceAttr(resourceName, "instance_configuration.0.memory", "4096"), ), @@ -274,13 +291,46 @@ func TestAccAwsAppRunnerService_ImageRepository_InstanceConfiguration(t *testing Config: 
testAccAppRunnerService_imageRepository(rName), Check: resource.ComposeTestCheckFunc( testAccCheckAwsAppRunnerServiceExists(resourceName), - resource.TestCheckResourceAttr(resourceName, "instance_configuration.#", "0"), + resource.TestCheckResourceAttr(resourceName, "instance_configuration.#", "1"), + resource.TestCheckResourceAttrSet(resourceName, "instance_configuration.0.cpu"), + resource.TestCheckResourceAttrSet(resourceName, "instance_configuration.0.memory"), ), }, }, }) } +// Reference: https://github.com/hashicorp/terraform-provider-aws/issues/19469 +func TestAccAwsAppRunnerService_ImageRepository_RuntimeEnvironmentVars(t *testing.T) { + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_apprunner_service.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAppRunner(t) }, + ErrorCheck: testAccErrorCheck(t, apprunner.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsAppRunnerServiceDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAppRunnerService_imageRepository_runtimeEnvVars(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsAppRunnerServiceExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "source_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_configuration.0.image_repository.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_configuration.0.image_repository.0.image_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "source_configuration.0.image_repository.0.image_configuration.0.runtime_environment_variables.%", "1"), + resource.TestCheckResourceAttr(resourceName, "source_configuration.0.image_repository.0.image_configuration.0.runtime_environment_variables.APP_NAME", rName), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAwsAppRunnerService_disappears(t *testing.T) { 
rName := acctest.RandomWithPrefix("tf-acc-test") resourceName := "aws_apprunner_service.test" @@ -443,6 +493,27 @@ resource "aws_apprunner_service" "test" { `, rName) } +func testAccAppRunnerService_imageRepository_runtimeEnvVars(rName string) string { + return fmt.Sprintf(` +resource "aws_apprunner_service" "test" { + service_name = %[1]q + source_configuration { + auto_deployments_enabled = false + image_repository { + image_configuration { + port = "8000" + runtime_environment_variables = { + APP_NAME = %[1]q + } + } + image_identifier = "public.ecr.aws/jg/hello:latest" + image_repository_type = "ECR_PUBLIC" + } + } +} +`, rName) +} + func testAccAppRunnerService_imageRepository_autoScalingConfiguration(rName string) string { return fmt.Sprintf(` resource "aws_apprunner_auto_scaling_configuration_version" "test" { @@ -560,48 +631,14 @@ resource "aws_iam_role" "test" { "Sid": "", "Effect": "Allow", "Principal": { - "Service": [ - "apprunner.${data.aws_partition.current.dns_suffix}" - ] + "Service": "tasks.apprunner.${data.aws_partition.current.dns_suffix}" }, - "Action": [ - "sts:AssumeRole" - ] + "Action": "sts:AssumeRole" } ] } EOF } - -resource "aws_iam_policy" "test" { - name = %[1]q - path = "/" - description = "App Runner PassRole Policy" - - policy = < 0 && v.([]interface{})[0] != nil { + input.Timeout = expandBatchJobTimeout(v.([]interface{})[0].(map[string]interface{})) } output, err := conn.RegisterJobDefinition(input) @@ -278,8 +278,12 @@ func resourceAwsBatchJobDefinitionRead(d *schema.ResourceData, meta interface{}) return fmt.Errorf("error setting tags_all: %w", err) } - if err := d.Set("timeout", flattenBatchJobTimeout(jobDefinition.Timeout)); err != nil { - return fmt.Errorf("error setting timeout: %w", err) + if jobDefinition.Timeout != nil { + if err := d.Set("timeout", []interface{}{flattenBatchJobTimeout(jobDefinition.Timeout)}); err != nil { + return fmt.Errorf("error setting timeout: %w", err) + } + } else { + d.Set("timeout", nil) } 
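The reworked `flattenBatchJobTimeout` above returns a `map[string]interface{}` (or nil) instead of a slice, letting the read function distinguish "block absent" from "block present" when calling `d.Set`. A simplified, self-contained sketch of that nil-safe flatten pattern — `jobTimeout` stands in for `batch.JobTimeout` here:

```go
package main

import "fmt"

// jobTimeout is a stand-in for batch.JobTimeout in this sketch. The
// flattener returns nil for a missing API object so the caller can
// choose between setting a one-element list and clearing the attribute.
type jobTimeout struct {
	AttemptDurationSeconds *int64
}

func flattenJobTimeout(apiObject *jobTimeout) map[string]interface{} {
	if apiObject == nil {
		return nil
	}
	tfMap := map[string]interface{}{}
	if v := apiObject.AttemptDurationSeconds; v != nil {
		tfMap["attempt_duration_seconds"] = *v
	}
	return tfMap
}

func main() {
	secs := int64(120)
	fmt.Println(flattenJobTimeout(&jobTimeout{AttemptDurationSeconds: &secs}))
	fmt.Println(flattenJobTimeout(nil) == nil)
}
```

In the resource itself, a non-nil map is wrapped as `[]interface{}{flattenBatchJobTimeout(...)}` before `d.Set`, while a nil API object clears the attribute with `d.Set("timeout", nil)`.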
d.Set("revision", jobDefinition.Revision) @@ -488,23 +492,30 @@ func flattenBatchEvaluateOnExits(apiObjects []*batch.EvaluateOnExit) []interface return tfList } -func expandJobDefinitionTimeout(item []interface{}) *batch.JobTimeout { - timeout := &batch.JobTimeout{} - data := item[0].(map[string]interface{}) +func expandBatchJobTimeout(tfMap map[string]interface{}) *batch.JobTimeout { + if tfMap == nil { + return nil + } + + apiObject := &batch.JobTimeout{} - if v, ok := data["attempt_duration_seconds"].(int); ok && v >= 60 { - timeout.AttemptDurationSeconds = aws.Int64(int64(v)) + if v, ok := tfMap["attempt_duration_seconds"].(int); ok && v != 0 { + apiObject.AttemptDurationSeconds = aws.Int64(int64(v)) } - return timeout + return apiObject } -func flattenBatchJobTimeout(item *batch.JobTimeout) []map[string]interface{} { - data := []map[string]interface{}{} - if item != nil && item.AttemptDurationSeconds != nil { - data = append(data, map[string]interface{}{ - "attempt_duration_seconds": int(aws.Int64Value(item.AttemptDurationSeconds)), - }) +func flattenBatchJobTimeout(apiObject *batch.JobTimeout) map[string]interface{} { + if apiObject == nil { + return nil } - return data + + tfMap := map[string]interface{}{} + + if v := apiObject.AttemptDurationSeconds; v != nil { + tfMap["attempt_duration_seconds"] = aws.Int64Value(v) + } + + return tfMap } diff --git a/aws/resource_aws_cloudformation_stack.go b/aws/resource_aws_cloudformation_stack.go index c59fdeac6dd..044683076d6 100644 --- a/aws/resource_aws_cloudformation_stack.go +++ b/aws/resource_aws_cloudformation_stack.go @@ -261,6 +261,12 @@ func resourceAwsCloudFormationStackRead(d *schema.ResourceData, meta interface{} } if stack.DisableRollback != nil { d.Set("disable_rollback", stack.DisableRollback) + + // takes into account that disable_rollback conflicts with on_failure and + // prevents forced new creation if disable_rollback is reset during refresh + if d.Get("on_failure") != nil { + 
d.Set("disable_rollback", false) + } } if len(stack.NotificationARNs) > 0 { err = d.Set("notification_arns", flattenStringSet(stack.NotificationARNs)) diff --git a/aws/resource_aws_cloudformation_stack_test.go b/aws/resource_aws_cloudformation_stack_test.go index e967b8ca8d2..fc9ac715632 100644 --- a/aws/resource_aws_cloudformation_stack_test.go +++ b/aws/resource_aws_cloudformation_stack_test.go @@ -77,7 +77,7 @@ func testSweepCloudformationStacks(region string) error { func TestAccAWSCloudFormationStack_basic(t *testing.T) { var stack cloudformation.Stack - stackName := acctest.RandomWithPrefix("tf-acc-test-basic") + rName := acctest.RandomWithPrefix("tf-acc-test") resourceName := "aws_cloudformation_stack.test" resource.ParallelTest(t, resource.TestCase{ @@ -87,10 +87,10 @@ func TestAccAWSCloudFormationStack_basic(t *testing.T) { CheckDestroy: testAccCheckAWSCloudFormationDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSCloudFormationStackConfig(stackName), + Config: testAccAWSCloudFormationStackConfig(rName), Check: resource.ComposeTestCheckFunc( testAccCheckCloudFormationStackExists(resourceName, &stack), - resource.TestCheckResourceAttr(resourceName, "name", stackName), + resource.TestCheckResourceAttr(resourceName, "name", rName), resource.TestCheckNoResourceAttr(resourceName, "on_failure"), ), }, @@ -104,7 +104,7 @@ func TestAccAWSCloudFormationStack_basic(t *testing.T) { } func TestAccAWSCloudFormationStack_CreationFailure_DoNothing(t *testing.T) { - stackName := acctest.RandomWithPrefix("tf-acc-test") + rName := acctest.RandomWithPrefix("tf-acc-test") resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -113,7 +113,7 @@ func TestAccAWSCloudFormationStack_CreationFailure_DoNothing(t *testing.T) { CheckDestroy: testAccCheckAWSCloudFormationDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSCloudFormationStackConfigCreationFailure(stackName, cloudformation.OnFailureDoNothing), + Config: 
testAccAWSCloudFormationStackConfigCreationFailure(rName, cloudformation.OnFailureDoNothing), ExpectError: regexp.MustCompile(`failed to create CloudFormation stack \(CREATE_FAILED\).*The following resource\(s\) failed to create.*This is not a valid CIDR block`), }, }, @@ -121,7 +121,7 @@ func TestAccAWSCloudFormationStack_CreationFailure_DoNothing(t *testing.T) { } func TestAccAWSCloudFormationStack_CreationFailure_Delete(t *testing.T) { - stackName := acctest.RandomWithPrefix("tf-acc-test") + rName := acctest.RandomWithPrefix("tf-acc-test") resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -130,7 +130,7 @@ func TestAccAWSCloudFormationStack_CreationFailure_Delete(t *testing.T) { CheckDestroy: testAccCheckAWSCloudFormationDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSCloudFormationStackConfigCreationFailure(stackName, cloudformation.OnFailureDelete), + Config: testAccAWSCloudFormationStackConfigCreationFailure(rName, cloudformation.OnFailureDelete), ExpectError: regexp.MustCompile(`failed to create CloudFormation stack, delete requested \(DELETE_COMPLETE\).*The following resource\(s\) failed to create.*This is not a valid CIDR block`), }, }, @@ -138,7 +138,7 @@ func TestAccAWSCloudFormationStack_CreationFailure_Delete(t *testing.T) { } func TestAccAWSCloudFormationStack_CreationFailure_Rollback(t *testing.T) { - stackName := acctest.RandomWithPrefix("tf-acc-test") + rName := acctest.RandomWithPrefix("tf-acc-test") resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -147,7 +147,7 @@ func TestAccAWSCloudFormationStack_CreationFailure_Rollback(t *testing.T) { CheckDestroy: testAccCheckAWSCloudFormationDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSCloudFormationStackConfigCreationFailure(stackName, cloudformation.OnFailureRollback), + Config: testAccAWSCloudFormationStackConfigCreationFailure(rName, cloudformation.OnFailureRollback), ExpectError: 
regexp.MustCompile(`failed to create CloudFormation stack, rollback requested \(ROLLBACK_COMPLETE\).*The following resource\(s\) failed to create.*This is not a valid CIDR block`), }, }, @@ -156,7 +156,7 @@ func TestAccAWSCloudFormationStack_CreationFailure_Rollback(t *testing.T) { func TestAccAWSCloudFormationStack_UpdateFailure(t *testing.T) { var stack cloudformation.Stack - stackName := acctest.RandomWithPrefix("tf-acc-test") + rName := acctest.RandomWithPrefix("tf-acc-test") resourceName := "aws_cloudformation_stack.test" vpcCidrInitial := "10.0.0.0/16" @@ -169,13 +169,13 @@ func TestAccAWSCloudFormationStack_UpdateFailure(t *testing.T) { CheckDestroy: testAccCheckAWSCloudFormationDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSCloudFormationStackConfig_withParams(stackName, vpcCidrInitial), + Config: testAccAWSCloudFormationStackConfig_withParams(rName, vpcCidrInitial), Check: resource.ComposeTestCheckFunc( testAccCheckCloudFormationStackExists(resourceName, &stack), ), }, { - Config: testAccAWSCloudFormationStackConfig_withParams(stackName, vpcCidrInvalid), + Config: testAccAWSCloudFormationStackConfig_withParams(rName, vpcCidrInvalid), ExpectError: regexp.MustCompile(`failed to update CloudFormation stack \(UPDATE_ROLLBACK_COMPLETE\).*This is not a valid CIDR block`), }, }, @@ -184,7 +184,7 @@ func TestAccAWSCloudFormationStack_UpdateFailure(t *testing.T) { func TestAccAWSCloudFormationStack_disappears(t *testing.T) { var stack cloudformation.Stack - stackName := fmt.Sprintf("tf-acc-test-basic-%s", acctest.RandString(10)) + rName := acctest.RandomWithPrefix("tf-acc-test") resourceName := "aws_cloudformation_stack.test" resource.ParallelTest(t, resource.TestCase{ @@ -194,7 +194,7 @@ func TestAccAWSCloudFormationStack_disappears(t *testing.T) { CheckDestroy: testAccCheckAWSCloudFormationDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSCloudFormationStackConfig(stackName), + Config: testAccAWSCloudFormationStackConfig(rName), Check: 
resource.ComposeTestCheckFunc( testAccCheckCloudFormationStackExists(resourceName, &stack), testAccCheckResourceDisappears(testAccProvider, resourceAwsCloudFormationStack(), resourceName), @@ -207,7 +207,7 @@ func TestAccAWSCloudFormationStack_disappears(t *testing.T) { func TestAccAWSCloudFormationStack_yaml(t *testing.T) { var stack cloudformation.Stack - stackName := fmt.Sprintf("tf-acc-test-yaml-%s", acctest.RandString(10)) + rName := acctest.RandomWithPrefix("tf-acc-test") resourceName := "aws_cloudformation_stack.test" resource.ParallelTest(t, resource.TestCase{ @@ -217,7 +217,7 @@ func TestAccAWSCloudFormationStack_yaml(t *testing.T) { CheckDestroy: testAccCheckAWSCloudFormationDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSCloudFormationStackConfig_yaml(stackName), + Config: testAccAWSCloudFormationStackConfig_yaml(rName), Check: resource.ComposeTestCheckFunc( testAccCheckCloudFormationStackExists(resourceName, &stack), ), @@ -233,7 +233,7 @@ func TestAccAWSCloudFormationStack_yaml(t *testing.T) { func TestAccAWSCloudFormationStack_defaultParams(t *testing.T) { var stack cloudformation.Stack - stackName := fmt.Sprintf("tf-acc-test-default-params-%s", acctest.RandString(10)) + rName := acctest.RandomWithPrefix("tf-acc-test") resourceName := "aws_cloudformation_stack.test" resource.ParallelTest(t, resource.TestCase{ @@ -243,7 +243,7 @@ func TestAccAWSCloudFormationStack_defaultParams(t *testing.T) { CheckDestroy: testAccCheckAWSCloudFormationDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSCloudFormationStackConfig_defaultParams(stackName), + Config: testAccAWSCloudFormationStackConfig_defaultParams(rName), Check: resource.ComposeTestCheckFunc( testAccCheckCloudFormationStackExists(resourceName, &stack), ), @@ -260,7 +260,7 @@ func TestAccAWSCloudFormationStack_defaultParams(t *testing.T) { func TestAccAWSCloudFormationStack_allAttributes(t *testing.T) { var stack cloudformation.Stack - stackName := 
fmt.Sprintf("tf-acc-test-all-attributes-%s", acctest.RandString(10)) + rName := acctest.RandomWithPrefix("tf-acc-test") resourceName := "aws_cloudformation_stack.test" expectedPolicyBody := "{\"Statement\":[{\"Action\":\"Update:*\",\"Effect\":\"Deny\",\"Principal\":\"*\",\"Resource\":\"LogicalResourceId/StaticVPC\"},{\"Action\":\"Update:*\",\"Effect\":\"Allow\",\"Principal\":\"*\",\"Resource\":\"*\"}]}" @@ -271,10 +271,10 @@ func TestAccAWSCloudFormationStack_allAttributes(t *testing.T) { CheckDestroy: testAccCheckAWSCloudFormationDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSCloudFormationStackConfig_allAttributesWithBodies(stackName), + Config: testAccAWSCloudFormationStackConfig_allAttributesWithBodies(rName), Check: resource.ComposeTestCheckFunc( testAccCheckCloudFormationStackExists(resourceName, &stack), - resource.TestCheckResourceAttr(resourceName, "name", stackName), + resource.TestCheckResourceAttr(resourceName, "name", rName), resource.TestCheckResourceAttr(resourceName, "capabilities.#", "1"), resource.TestCheckTypeSetElemAttr(resourceName, "capabilities.*", "CAPABILITY_IAM"), resource.TestCheckResourceAttr(resourceName, "disable_rollback", "false"), @@ -295,10 +295,10 @@ func TestAccAWSCloudFormationStack_allAttributes(t *testing.T) { ImportStateVerifyIgnore: []string{"on_failure", "parameters", "policy_body"}, }, { - Config: testAccAWSCloudFormationStackConfig_allAttributesWithBodies_modified(stackName), + Config: testAccAWSCloudFormationStackConfig_allAttributesWithBodies_modified(rName), Check: resource.ComposeTestCheckFunc( testAccCheckCloudFormationStackExists(resourceName, &stack), - resource.TestCheckResourceAttr(resourceName, "name", stackName), + resource.TestCheckResourceAttr(resourceName, "name", rName), resource.TestCheckResourceAttr(resourceName, "capabilities.#", "1"), resource.TestCheckTypeSetElemAttr(resourceName, "capabilities.*", "CAPABILITY_IAM"), resource.TestCheckResourceAttr(resourceName, "disable_rollback", "false"), 
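The `disable_rollback` assertions in these steps exercise the refresh-time adjustment added in `resourceAwsCloudFormationStackRead`: `on_failure` conflicts with `disable_rollback`, so when `on_failure` is configured the refreshed `disable_rollback` value is normalized rather than forcing stack recreation. A sketch of that precedence logic, using a hypothetical helper name:

```go
package main

import "fmt"

// normalizeDisableRollback illustrates the conflict handling: when
// on_failure is configured, disable_rollback is forced to false so the
// refreshed API value cannot trigger a ForceNew diff; otherwise the
// API-reported value is kept as-is.
func normalizeDisableRollback(onFailureConfigured, apiDisableRollback bool) bool {
	if onFailureConfigured {
		return false
	}
	return apiDisableRollback
}

func main() {
	fmt.Println(normalizeDisableRollback(true, true))  // on_failure takes precedence
	fmt.Println(normalizeDisableRollback(false, true)) // API value kept
}
```

This matches the behavior the new `TestAccAWSCloudFormationStack_onFailure` test later in this diff asserts: `on_failure = "DO_NOTHING"` together with `disable_rollback = "false"` in state.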
@@ -319,7 +319,7 @@ func TestAccAWSCloudFormationStack_allAttributes(t *testing.T) { // Regression for https://github.com/hashicorp/terraform/issues/4332 func TestAccAWSCloudFormationStack_withParams(t *testing.T) { var stack cloudformation.Stack - stackName := fmt.Sprintf("tf-acc-test-with-params-%s", acctest.RandString(10)) + rName := acctest.RandomWithPrefix("tf-acc-test") resourceName := "aws_cloudformation_stack.test" vpcCidrInitial := "10.0.0.0/16" @@ -332,7 +332,7 @@ func TestAccAWSCloudFormationStack_withParams(t *testing.T) { CheckDestroy: testAccCheckAWSCloudFormationDestroy, Steps: []resource.TestStep{ { - Config: testAccAWSCloudFormationStackConfig_withParams(stackName, vpcCidrInitial), + Config: testAccAWSCloudFormationStackConfig_withParams(rName, vpcCidrInitial), Check: resource.ComposeTestCheckFunc( testAccCheckCloudFormationStackExists(resourceName, &stack), resource.TestCheckResourceAttr(resourceName, "parameters.%", "1"), @@ -346,7 +346,7 @@ func TestAccAWSCloudFormationStack_withParams(t *testing.T) { ImportStateVerifyIgnore: []string{"on_failure", "parameters"}, }, { - Config: testAccAWSCloudFormationStackConfig_withParams(stackName, vpcCidrUpdated), + Config: testAccAWSCloudFormationStackConfig_withParams(rName, vpcCidrUpdated), Check: resource.ComposeTestCheckFunc( testAccCheckCloudFormationStackExists(resourceName, &stack), resource.TestCheckResourceAttr(resourceName, "parameters.%", "1"), @@ -360,7 +360,7 @@ func TestAccAWSCloudFormationStack_withParams(t *testing.T) { // Regression for https://github.com/hashicorp/terraform/issues/4534 func TestAccAWSCloudFormationStack_withUrl_withParams(t *testing.T) { var stack cloudformation.Stack - rName := fmt.Sprintf("tf-acc-test-with-url-and-params-%s", acctest.RandString(10)) + rName := acctest.RandomWithPrefix("tf-acc-test") resourceName := "aws_cloudformation_stack.test" resource.ParallelTest(t, resource.TestCase{ @@ -393,7 +393,7 @@ func TestAccAWSCloudFormationStack_withUrl_withParams(t 
*testing.T) { func TestAccAWSCloudFormationStack_withUrl_withParams_withYaml(t *testing.T) { var stack cloudformation.Stack - rName := fmt.Sprintf("tf-acc-test-with-params-and-yaml-%s", acctest.RandString(10)) + rName := acctest.RandomWithPrefix("tf-acc-test") resourceName := "aws_cloudformation_stack.test" resource.ParallelTest(t, resource.TestCase{ @@ -421,7 +421,7 @@ func TestAccAWSCloudFormationStack_withUrl_withParams_withYaml(t *testing.T) { // Test for https://github.com/hashicorp/terraform/issues/5653 func TestAccAWSCloudFormationStack_withUrl_withParams_noUpdate(t *testing.T) { var stack cloudformation.Stack - rName := fmt.Sprintf("tf-acc-test-with-params-no-update-%s", acctest.RandString(10)) + rName := acctest.RandomWithPrefix("tf-acc-test") resourceName := "aws_cloudformation_stack.test" resource.ParallelTest(t, resource.TestCase{ @@ -454,7 +454,8 @@ func TestAccAWSCloudFormationStack_withUrl_withParams_noUpdate(t *testing.T) { func TestAccAWSCloudFormationStack_withTransform(t *testing.T) { var stack cloudformation.Stack - rName := fmt.Sprintf("tf-acc-test-with-transform-%s", acctest.RandString(10)) + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_cloudformation_stack.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { testAccPreCheck(t) }, @@ -465,14 +466,38 @@ func TestAccAWSCloudFormationStack_withTransform(t *testing.T) { { Config: testAccAWSCloudFormationStackConfig_withTransform(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckCloudFormationStackExists("aws_cloudformation_stack.with-transform", &stack), + testAccCheckCloudFormationStackExists(resourceName, &stack), ), }, { PlanOnly: true, Config: testAccAWSCloudFormationStackConfig_withTransform(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckCloudFormationStackExists("aws_cloudformation_stack.with-transform", &stack), + testAccCheckCloudFormationStackExists(resourceName, &stack), + ), + }, + }, + }) +} + +// 
TestAccAWSCloudFormationStack_onFailure verifies https://github.com/hashicorp/terraform-provider-aws/issues/5204 +func TestAccAWSCloudFormationStack_onFailure(t *testing.T) { + var stack cloudformation.Stack + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_cloudformation_stack.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ErrorCheck: testAccErrorCheck(t, cloudformation.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSCloudFormationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSCloudFormationStackConfig_onFailure(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckCloudFormationStackExists(resourceName, &stack), + resource.TestCheckResourceAttr(resourceName, "disable_rollback", "false"), + resource.TestCheckResourceAttr(resourceName, "on_failure", cloudformation.OnFailureDoNothing), ), }, }, @@ -565,10 +590,10 @@ func testAccCheckCloudFormationStackDisappears(stack *cloudformation.Stack) reso } } -func testAccAWSCloudFormationStackConfig(stackName string) string { +func testAccAWSCloudFormationStackConfig(rName string) string { return fmt.Sprintf(` resource "aws_cloudformation_stack" "test" { - name = "%[1]s" + name = %[1]q template_body = < 0 || len(given.Dimensions) > 0 { + e, g := aws.StringValueMap(expected.Dimensions), aws.StringValueMap(given.Dimensions) + + if len(e) != len(g) { + return fmt.Errorf("Expected %d dimensions, received %d", len(e), len(g)) + } + + for ek, ev := range e { + gv, ok := g[ek] + if !ok { + return fmt.Errorf("Expected dimension %s, received nothing", ek) + } + if gv != ev { + return fmt.Errorf("Expected dimension %s to be %s, received %s", ek, ev, gv) + } + } + } + return nil } } @@ -231,6 +277,35 @@ resource "aws_cloudwatch_log_group" "dada" { `, rInt, rInt) } +func testAccAWSCloudWatchLogMetricFilterConfigModifiedWithDimensions(rInt int) string { + return fmt.Sprintf(` +resource 
"aws_cloudwatch_log_metric_filter" "foobar" { + name = "MyAppAccessCount-%d" + + pattern = < 0 { - metricQueries := make([]interface{}, len(resp.Metrics)) - for i, mq := range resp.Metrics { - metricQuery := map[string]interface{}{ - "expression": aws.StringValue(mq.Expression), - "id": aws.StringValue(mq.Id), - "label": aws.StringValue(mq.Label), - "return_data": aws.BoolValue(mq.ReturnData), - } - if mq.MetricStat != nil { - metric := map[string]interface{}{ - "metric_name": aws.StringValue(mq.MetricStat.Metric.MetricName), - "namespace": aws.StringValue(mq.MetricStat.Metric.Namespace), - "period": int(aws.Int64Value(mq.MetricStat.Period)), - "stat": aws.StringValue(mq.MetricStat.Stat), - "unit": aws.StringValue(mq.MetricStat.Unit), - "dimensions": flattenDimensions(mq.MetricStat.Metric.Dimensions), - } - metricQuery["metric"] = []interface{}{metric} - } - metricQueries[i] = metricQuery - } - if err := d.Set("metric_query", metricQueries); err != nil { - return fmt.Errorf("error setting metric_query: %s", err) + if err := d.Set("metric_query", flattenAwsCloudWatchMetricAlarmMetrics(resp.Metrics)); err != nil { + return fmt.Errorf("error setting metric_query: %w", err) } } if err := d.Set("ok_actions", flattenStringSet(resp.OKActions)); err != nil { - log.Printf("[WARN] Error setting OK Actions: %s", err) + return fmt.Errorf("error setting OK Actions: %w", err) } + d.Set("period", resp.Period) d.Set("statistic", resp.Statistic) d.Set("threshold", resp.Threshold) @@ -370,7 +352,7 @@ func resourceAwsCloudWatchMetricAlarmRead(d *schema.ResourceData, meta interface tags, err := keyvaluetags.CloudwatchListTags(conn, arn) if err != nil { - return fmt.Errorf("error listing tags for CloudWatch Metric Alarm (%s): %s", arn, err) + return fmt.Errorf("error listing tags for CloudWatch Metric Alarm (%s): %w", arn, err) } tags = tags.IgnoreAws().IgnoreConfig(ignoreTagsConfig) @@ -394,7 +376,7 @@ func resourceAwsCloudWatchMetricAlarmUpdate(d *schema.ResourceData, meta interfa 
log.Printf("[DEBUG] Updating CloudWatch Metric Alarm: %#v", params) _, err := conn.PutMetricAlarm(¶ms) if err != nil { - return fmt.Errorf("Updating metric alarm failed: %s", err) + return fmt.Errorf("Updating metric alarm failed: %w", err) } log.Println("[INFO] CloudWatch Metric Alarm updated") @@ -403,7 +385,7 @@ func resourceAwsCloudWatchMetricAlarmUpdate(d *schema.ResourceData, meta interfa o, n := d.GetChange("tags_all") if err := keyvaluetags.CloudwatchUpdateTags(conn, arn, o, n); err != nil { - return fmt.Errorf("error updating CloudWatch Metric Alarm (%s) tags: %s", arn, err) + return fmt.Errorf("error updating CloudWatch Metric Alarm (%s) tags: %w", arn, err) } } @@ -422,7 +404,7 @@ func resourceAwsCloudWatchMetricAlarmDelete(d *schema.ResourceData, meta interfa if isAWSErr(err, cloudwatch.ErrCodeResourceNotFoundException, "") { return nil } - return fmt.Errorf("Error deleting CloudWatch Metric Alarm: %s", err) + return fmt.Errorf("Error deleting CloudWatch Metric Alarm: %w", err) } log.Println("[INFO] CloudWatch Metric Alarm deleted") @@ -494,105 +476,123 @@ func getAwsCloudWatchPutMetricAlarmInput(d *schema.ResourceData, meta interface{ params.InsufficientDataActions = expandStringSet(v.(*schema.Set)) } - var metrics []*cloudwatch.MetricDataQuery if v := d.Get("metric_query"); v != nil { - for _, v := range v.(*schema.Set).List() { - metricQueryResource := v.(map[string]interface{}) - id := metricQueryResource["id"].(string) - if id == "" { - continue - } - metricQuery := cloudwatch.MetricDataQuery{ - Id: aws.String(id), - } - if v, ok := metricQueryResource["expression"]; ok && v.(string) != "" { - metricQuery.Expression = aws.String(v.(string)) - } - if v, ok := metricQueryResource["label"]; ok && v.(string) != "" { - metricQuery.Label = aws.String(v.(string)) - } - if v, ok := metricQueryResource["return_data"]; ok { - metricQuery.ReturnData = aws.Bool(v.(bool)) - } - if v := metricQueryResource["metric"]; v != nil { - for _, v := range 
v.([]interface{}) { - metricResource := v.(map[string]interface{}) - metric := cloudwatch.Metric{ - MetricName: aws.String(metricResource["metric_name"].(string)), - } - metricStat := cloudwatch.MetricStat{ - Metric: &metric, - Stat: aws.String(metricResource["stat"].(string)), - } - if v, ok := metricResource["namespace"]; ok && v.(string) != "" { - metric.Namespace = aws.String(v.(string)) - } - if v, ok := metricResource["period"]; ok { - metricStat.Period = aws.Int64(int64(v.(int))) - } - if v, ok := metricResource["unit"]; ok && v.(string) != "" { - metricStat.Unit = aws.String(v.(string)) - } - a := metricResource["dimensions"].(map[string]interface{}) - dimensions := make([]*cloudwatch.Dimension, 0, len(a)) - for k, v := range a { - dimensions = append(dimensions, &cloudwatch.Dimension{ - Name: aws.String(k), - Value: aws.String(v.(string)), - }) - } - metric.Dimensions = dimensions - metricQuery.MetricStat = &metricStat - } - } - metrics = append(metrics, &metricQuery) - } - params.Metrics = metrics + params.Metrics = expandCloudWatchMetricAlarmMetrics(v.(*schema.Set)) } if v, ok := d.GetOk("ok_actions"); ok { params.OKActions = expandStringSet(v.(*schema.Set)) } - a := d.Get("dimensions").(map[string]interface{}) - var dimensions []*cloudwatch.Dimension - for k, v := range a { - dimensions = append(dimensions, &cloudwatch.Dimension{ - Name: aws.String(k), - Value: aws.String(v.(string)), - }) + if v, ok := d.GetOk("dimensions"); ok { + params.Dimensions = expandAwsCloudWatchMetricAlarmDimensions(v.(map[string]interface{})) } - params.Dimensions = dimensions return params } -func getAwsCloudWatchMetricAlarm(d *schema.ResourceData, meta interface{}) (*cloudwatch.MetricAlarm, error) { - conn := meta.(*AWSClient).cloudwatchconn +func flattenAwsCloudWatchMetricAlarmDimensions(dims []*cloudwatch.Dimension) map[string]interface{} { + flatDims := make(map[string]interface{}) + for _, d := range dims { + flatDims[aws.StringValue(d.Name)] = aws.StringValue(d.Value) 
+ } + return flatDims +} - params := cloudwatch.DescribeAlarmsInput{ - AlarmNames: []*string{aws.String(d.Id())}, +func flattenAwsCloudWatchMetricAlarmMetrics(metrics []*cloudwatch.MetricDataQuery) []map[string]interface{} { + metricQueries := make([]map[string]interface{}, 0) + for _, mq := range metrics { + metricQuery := map[string]interface{}{ + "expression": aws.StringValue(mq.Expression), + "id": aws.StringValue(mq.Id), + "label": aws.StringValue(mq.Label), + "return_data": aws.BoolValue(mq.ReturnData), + } + if mq.MetricStat != nil { + metric := flattenAwsCloudWatchMetricAlarmMetricsMetricStat(mq.MetricStat) + metricQuery["metric"] = []interface{}{metric} + } + metricQueries = append(metricQueries, metricQuery) } - resp, err := conn.DescribeAlarms(&params) - if err != nil { - return nil, err + return metricQueries +} + +func flattenAwsCloudWatchMetricAlarmMetricsMetricStat(ms *cloudwatch.MetricStat) map[string]interface{} { + msm := ms.Metric + metric := map[string]interface{}{ + "metric_name": aws.StringValue(msm.MetricName), + "namespace": aws.StringValue(msm.Namespace), + "period": int(aws.Int64Value(ms.Period)), + "stat": aws.StringValue(ms.Stat), + "unit": aws.StringValue(ms.Unit), + "dimensions": flattenAwsCloudWatchMetricAlarmDimensions(msm.Dimensions), } - // Find it and return it - for idx, ma := range resp.MetricAlarms { - if aws.StringValue(ma.AlarmName) == d.Id() { - return resp.MetricAlarms[idx], nil + return metric +} + +func expandCloudWatchMetricAlarmMetrics(v *schema.Set) []*cloudwatch.MetricDataQuery { + var metrics []*cloudwatch.MetricDataQuery + + for _, v := range v.List() { + metricQueryResource := v.(map[string]interface{}) + id := metricQueryResource["id"].(string) + if id == "" { + continue + } + metricQuery := cloudwatch.MetricDataQuery{ + Id: aws.String(id), + } + if v, ok := metricQueryResource["expression"]; ok && v.(string) != "" { + metricQuery.Expression = aws.String(v.(string)) + } + if v, ok := metricQueryResource["label"]; ok
&& v.(string) != "" { + metricQuery.Label = aws.String(v.(string)) + } + if v, ok := metricQueryResource["return_data"]; ok { + metricQuery.ReturnData = aws.Bool(v.(bool)) } + if v := metricQueryResource["metric"]; v != nil && len(v.([]interface{})) > 0 { + metricQuery.MetricStat = expandCloudWatchMetricAlarmMetricsMetric(v.([]interface{})) + } + metrics = append(metrics, &metricQuery) + } + return metrics +} + +func expandCloudWatchMetricAlarmMetricsMetric(v []interface{}) *cloudwatch.MetricStat { + metricResource := v[0].(map[string]interface{}) + metric := cloudwatch.Metric{ + MetricName: aws.String(metricResource["metric_name"].(string)), + } + metricStat := cloudwatch.MetricStat{ + Metric: &metric, + Stat: aws.String(metricResource["stat"].(string)), + } + if v, ok := metricResource["namespace"]; ok && v.(string) != "" { + metric.Namespace = aws.String(v.(string)) + } + if v, ok := metricResource["period"]; ok { + metricStat.Period = aws.Int64(int64(v.(int))) + } + if v, ok := metricResource["unit"]; ok && v.(string) != "" { + metricStat.Unit = aws.String(v.(string)) + } + if v, ok := metricResource["dimensions"]; ok { + metric.Dimensions = expandAwsCloudWatchMetricAlarmDimensions(v.(map[string]interface{})) } - return nil, nil + return &metricStat } -func flattenDimensions(dims []*cloudwatch.Dimension) map[string]interface{} { - flatDims := make(map[string]interface{}) - for _, d := range dims { - flatDims[aws.StringValue(d.Name)] = aws.StringValue(d.Value) +func expandAwsCloudWatchMetricAlarmDimensions(dims map[string]interface{}) []*cloudwatch.Dimension { + var dimensions []*cloudwatch.Dimension + for k, v := range dims { + dimensions = append(dimensions, &cloudwatch.Dimension{ + Name: aws.String(k), + Value: aws.String(v.(string)), + }) } - return flatDims + return dimensions } diff --git a/aws/resource_aws_cloudwatch_metric_alarm_test.go b/aws/resource_aws_cloudwatch_metric_alarm_test.go index e706a43b02f..e6c15d1f84a 100644 --- 
a/aws/resource_aws_cloudwatch_metric_alarm_test.go +++ b/aws/resource_aws_cloudwatch_metric_alarm_test.go @@ -10,6 +10,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/cloudwatch/finder" ) func TestAccAWSCloudWatchMetricAlarm_basic(t *testing.T) { @@ -30,8 +31,18 @@ func TestAccAWSCloudWatchMetricAlarm_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "metric_name", "CPUUtilization"), resource.TestCheckResourceAttr(resourceName, "statistic", "Average"), testAccMatchResourceAttrRegionalARN(resourceName, "arn", "cloudwatch", regexp.MustCompile(`alarm:.+`)), - testAccCheckCloudWatchMetricAlarmDimension(resourceName, "InstanceId", "i-abc123"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttr(resourceName, "alarm_description", "This metric monitors ec2 cpu utilization"), + resource.TestCheckResourceAttr(resourceName, "threshold", "80"), + resource.TestCheckResourceAttr(resourceName, "period", "120"), + resource.TestCheckResourceAttr(resourceName, "namespace", "AWS/EC2"), + resource.TestCheckResourceAttr(resourceName, "alarm_name", rName), + resource.TestCheckResourceAttr(resourceName, "comparison_operator", "GreaterThanOrEqualToThreshold"), + resource.TestCheckResourceAttr(resourceName, "datapoints_to_alarm", "0"), + resource.TestCheckResourceAttr(resourceName, "evaluation_periods", "2"), + resource.TestCheckResourceAttr(resourceName, "insufficient_data_actions.#", "0"), + resource.TestCheckResourceAttr(resourceName, "dimensions.%", "1"), + resource.TestCheckResourceAttr(resourceName, "dimensions.InstanceId", "i-abc123"), ), }, { @@ -390,24 +401,6 @@ func TestAccAWSCloudWatchMetricAlarm_disappears(t *testing.T) { }) } -func testAccCheckCloudWatchMetricAlarmDimension(n, k, v string) 
resource.TestCheckFunc { - return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[n] - if !ok { - return fmt.Errorf("Not found: %s", n) - } - key := fmt.Sprintf("dimensions.%s", k) - val, ok := rs.Primary.Attributes[key] - if !ok { - return fmt.Errorf("Could not find dimension: %s", k) - } - if val != v { - return fmt.Errorf("Expected dimension %s => %s; got: %s", k, v, val) - } - return nil - } -} - func testAccCheckCloudWatchMetricAlarmExists(n string, alarm *cloudwatch.MetricAlarm) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -416,17 +409,14 @@ func testAccCheckCloudWatchMetricAlarmExists(n string, alarm *cloudwatch.MetricA } conn := testAccProvider.Meta().(*AWSClient).cloudwatchconn - params := cloudwatch.DescribeAlarmsInput{ - AlarmNames: []*string{aws.String(rs.Primary.ID)}, - } - resp, err := conn.DescribeAlarms(&params) + resp, err := finder.MetricAlarmByName(conn, rs.Primary.ID) if err != nil { return err } - if len(resp.MetricAlarms) == 0 { + if resp == nil { return fmt.Errorf("Alarm not found") } - *alarm = *resp.MetricAlarms[0] + *alarm = *resp return nil } @@ -440,15 +430,9 @@ func testAccCheckAWSCloudWatchMetricAlarmDestroy(s *terraform.State) error { continue } - params := cloudwatch.DescribeAlarmsInput{ - AlarmNames: []*string{aws.String(rs.Primary.ID)}, - } - - resp, err := conn.DescribeAlarms(&params) - + resp, err := finder.MetricAlarmByName(conn, rs.Primary.ID) if err == nil { - if len(resp.MetricAlarms) != 0 && - *resp.MetricAlarms[0].AlarmName == rs.Primary.ID { + if resp != nil && aws.StringValue(resp.AlarmName) == rs.Primary.ID { return fmt.Errorf("Alarm Still Exists: %s", rs.Primary.ID) } } diff --git a/aws/resource_aws_devicefarm_project.go b/aws/resource_aws_devicefarm_project.go index 4fc1249f96e..643975323f5 100644 --- a/aws/resource_aws_devicefarm_project.go +++ b/aws/resource_aws_devicefarm_project.go @@ -7,6 +7,8 @@ import ( "github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/service/devicefarm" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/keyvaluetags" ) func resourceAwsDevicefarmProject() *schema.Resource { @@ -26,34 +28,58 @@ func resourceAwsDevicefarmProject() *schema.Resource { }, "name": { - Type: schema.TypeString, - Required: true, + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(0, 256), + }, + "default_job_timeout_minutes": { + Type: schema.TypeInt, + Optional: true, }, + "tags": tagsSchema(), + "tags_all": tagsSchemaComputed(), }, + CustomizeDiff: SetTagsDiff, } } func resourceAwsDevicefarmProjectCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).devicefarmconn + defaultTagsConfig := meta.(*AWSClient).DefaultTagsConfig + tags := defaultTagsConfig.MergeTags(keyvaluetags.New(d.Get("tags").(map[string]interface{}))) + name := d.Get("name").(string) input := &devicefarm.CreateProjectInput{ - Name: aws.String(d.Get("name").(string)), + Name: aws.String(name), + } + + if v, ok := d.GetOk("default_job_timeout_minutes"); ok { + input.DefaultJobTimeoutMinutes = aws.Int64(int64(v.(int))) } - log.Printf("[DEBUG] Creating DeviceFarm Project: %s", d.Get("name").(string)) + log.Printf("[DEBUG] Creating DeviceFarm Project: %s", name) out, err := conn.CreateProject(input) if err != nil { - return fmt.Errorf("Error creating DeviceFarm Project: %s", err) + return fmt.Errorf("Error creating DeviceFarm Project: %w", err) } - log.Printf("[DEBUG] Successsfully Created DeviceFarm Project: %s", *out.Project.Arn) - d.SetId(aws.StringValue(out.Project.Arn)) + arn := aws.StringValue(out.Project.Arn) + log.Printf("[DEBUG] Successfully Created DeviceFarm Project: %s", arn) + d.SetId(arn) + + if len(tags) > 0 { + if err := keyvaluetags.DevicefarmUpdateTags(conn, arn, nil, tags); err != nil { + return
fmt.Errorf("error updating DeviceFarm Project (%s) tags: %w", arn, err) + } + } return resourceAwsDevicefarmProjectRead(d, meta) } func resourceAwsDevicefarmProjectRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).devicefarmconn + defaultTagsConfig := meta.(*AWSClient).DefaultTagsConfig + ignoreTagsConfig := meta.(*AWSClient).IgnoreTagsConfig input := &devicefarm.GetProjectInput{ Arn: aws.String(d.Id()), @@ -67,11 +93,31 @@ func resourceAwsDevicefarmProjectRead(d *schema.ResourceData, meta interface{}) d.SetId("") return nil } - return fmt.Errorf("Error reading DeviceFarm Project: %s", err) + return fmt.Errorf("Error reading DeviceFarm Project: %w", err) + } + + project := out.Project + arn := aws.StringValue(project.Arn) + d.Set("name", project.Name) + d.Set("arn", arn) + d.Set("default_job_timeout_minutes", project.DefaultJobTimeoutMinutes) + + tags, err := keyvaluetags.DevicefarmListTags(conn, arn) + + if err != nil { + return fmt.Errorf("error listing tags for DeviceFarm Project (%s): %w", arn, err) + } + + tags = tags.IgnoreAws().IgnoreConfig(ignoreTagsConfig) + + //lintignore:AWSR002 + if err := d.Set("tags", tags.RemoveDefaultConfig(defaultTagsConfig).Map()); err != nil { + return fmt.Errorf("error setting tags: %w", err) } - d.Set("name", out.Project.Name) - d.Set("arn", out.Project.Arn) + if err := d.Set("tags_all", tags.Map()); err != nil { + return fmt.Errorf("error setting tags_all: %w", err) + } return nil } @@ -79,18 +125,32 @@ func resourceAwsDevicefarmProjectRead(d *schema.ResourceData, meta interface{}) func resourceAwsDevicefarmProjectUpdate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).devicefarmconn - if d.HasChange("name") { + if d.HasChangesExcept("tags", "tags_all") { input := &devicefarm.UpdateProjectInput{ - Arn: aws.String(d.Id()), - Name: aws.String(d.Get("name").(string)), + Arn: aws.String(d.Id()), + } + + if d.HasChange("name") { + input.Name = 
aws.String(d.Get("name").(string)) + } + + if d.HasChange("default_job_timeout_minutes") { + input.DefaultJobTimeoutMinutes = aws.Int64(int64(d.Get("default_job_timeout_minutes").(int))) } log.Printf("[DEBUG] Updating DeviceFarm Project: %s", d.Id()) _, err := conn.UpdateProject(input) if err != nil { - return fmt.Errorf("Error Updating DeviceFarm Project: %s", err) + return fmt.Errorf("Error Updating DeviceFarm Project: %w", err) } + } + + if d.HasChange("tags_all") { + o, n := d.GetChange("tags_all") + if err := keyvaluetags.DevicefarmUpdateTags(conn, d.Get("arn").(string), o, n); err != nil { + return fmt.Errorf("error updating DeviceFarm Project (%s) tags: %w", d.Get("arn").(string), err) + } } return resourceAwsDevicefarmProjectRead(d, meta) @@ -106,7 +166,10 @@ func resourceAwsDevicefarmProjectDelete(d *schema.ResourceData, meta interface{} log.Printf("[DEBUG] Deleting DeviceFarm Project: %s", d.Id()) _, err := conn.DeleteProject(input) if err != nil { - return fmt.Errorf("Error deleting DeviceFarm Project: %s", err) + if isAWSErr(err, devicefarm.ErrCodeNotFoundException, "") { + return nil + } + return fmt.Errorf("Error deleting DeviceFarm Project: %w", err) } return nil diff --git a/aws/resource_aws_devicefarm_project_test.go b/aws/resource_aws_devicefarm_project_test.go index 41844b75407..48f831ac0e6 100644 --- a/aws/resource_aws_devicefarm_project_test.go +++ b/aws/resource_aws_devicefarm_project_test.go @@ -16,6 +16,7 @@ import ( func TestAccAWSDeviceFarmProject_basic(t *testing.T) { var proj devicefarm.Project rName := acctest.RandomWithPrefix("tf-acc-test") + rNameUpdated := acctest.RandomWithPrefix("tf-acc-test-updated") resourceName := "aws_devicefarm_project.test" resource.ParallelTest(t, resource.TestCase{ @@ -35,6 +36,7 @@ func TestAccAWSDeviceFarmProject_basic(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckDeviceFarmProjectExists(resourceName, &proj), resource.TestCheckResourceAttr(resourceName, "name", rName), + 
resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), testAccMatchResourceAttrRegionalARN(resourceName, "arn", "devicefarm", regexp.MustCompile(`project:.+`)), ), }, @@ -43,6 +45,107 @@ func TestAccAWSDeviceFarmProject_basic(t *testing.T) { ImportState: true, ImportStateVerify: true, }, + { + Config: testAccDeviceFarmProjectConfig(rNameUpdated), + Check: resource.ComposeTestCheckFunc( + testAccCheckDeviceFarmProjectExists(resourceName, &proj), + resource.TestCheckResourceAttr(resourceName, "name", rNameUpdated), + testAccMatchResourceAttrRegionalARN(resourceName, "arn", "devicefarm", regexp.MustCompile(`project:.+`)), + ), + }, + }, + }) +} + +func TestAccAWSDeviceFarmProject_timeout(t *testing.T) { + var proj devicefarm.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_devicefarm_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + testAccPartitionHasServicePreCheck(devicefarm.EndpointsID, t) + // Currently, DeviceFarm is only supported in us-west-2 + // https://docs.aws.amazon.com/general/latest/gr/devicefarm.html + testAccRegionPreCheck(t, endpoints.UsWest2RegionID) + }, + ErrorCheck: testAccErrorCheck(t, devicefarm.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckDeviceFarmProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDeviceFarmProjectConfigDefaultJobTimeout(rName, 10), + Check: resource.ComposeTestCheckFunc( + testAccCheckDeviceFarmProjectExists(resourceName, &proj), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "default_job_timeout_minutes", "10"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccDeviceFarmProjectConfigDefaultJobTimeout(rName, 20), + Check: resource.ComposeTestCheckFunc( + testAccCheckDeviceFarmProjectExists(resourceName, &proj), + resource.TestCheckResourceAttr(resourceName, 
"name", rName), + resource.TestCheckResourceAttr(resourceName, "default_job_timeout_minutes", "20"), + ), + }, + }, + }) +} + +func TestAccAWSDeviceFarmProject_tags(t *testing.T) { + var proj devicefarm.Project + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_devicefarm_project.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + testAccPartitionHasServicePreCheck(devicefarm.EndpointsID, t) + // Currently, DeviceFarm is only supported in us-west-2 + // https://docs.aws.amazon.com/general/latest/gr/devicefarm.html + testAccRegionPreCheck(t, endpoints.UsWest2RegionID) + }, + ErrorCheck: testAccErrorCheck(t, devicefarm.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckDeviceFarmProjectDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDeviceFarmProjectConfigTags1(rName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckDeviceFarmProjectExists(resourceName, &proj), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccDeviceFarmProjectConfigTags2(rName, "key1", "value1updated", "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckDeviceFarmProjectExists(resourceName, &proj), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccDeviceFarmProjectConfigTags1(rName, "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckDeviceFarmProjectExists(resourceName, &proj), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, }, }) } @@ -137,3 +240,37 
@@ resource "aws_devicefarm_project" "test" { } `, rName) } + +func testAccDeviceFarmProjectConfigDefaultJobTimeout(rName string, timeout int) string { + return fmt.Sprintf(` +resource "aws_devicefarm_project" "test" { + name = %[1]q + default_job_timeout_minutes = %[2]d +} +`, rName, timeout) +} + +func testAccDeviceFarmProjectConfigTags1(rName, tagKey1, tagValue1 string) string { + return fmt.Sprintf(` +resource "aws_devicefarm_project" "test" { + name = %[1]q + + tags = { + %[2]q = %[3]q + } +} +`, rName, tagKey1, tagValue1) +} + +func testAccDeviceFarmProjectConfigTags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { + return fmt.Sprintf(` +resource "aws_devicefarm_project" "test" { + name = %[1]q + + tags = { + %[2]q = %[3]q + %[4]q = %[5]q + } +} +`, rName, tagKey1, tagValue1, tagKey2, tagValue2) +} diff --git a/aws/resource_aws_ec2_capacity_reservation.go b/aws/resource_aws_ec2_capacity_reservation.go index 57e491bebf6..9d20620db3f 100644 --- a/aws/resource_aws_ec2_capacity_reservation.go +++ b/aws/resource_aws_ec2_capacity_reservation.go @@ -98,6 +98,11 @@ func resourceAwsEc2CapacityReservation() *schema.Resource { Required: true, ForceNew: true, }, + "outpost_arn": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validateArn, + }, "owner_id": { Type: schema.TypeString, Computed: true, @@ -156,6 +161,10 @@ func resourceAwsEc2CapacityReservationCreate(d *schema.ResourceData, meta interf opts.InstanceMatchCriteria = aws.String(v.(string)) } + if v, ok := d.GetOk("outpost_arn"); ok { + opts.OutpostArn = aws.String(v.(string)) + } + if v, ok := d.GetOk("tenancy"); ok { opts.Tenancy = aws.String(v.(string)) } @@ -214,6 +223,7 @@ func resourceAwsEc2CapacityReservationRead(d *schema.ResourceData, meta interfac d.Set("instance_match_criteria", reservation.InstanceMatchCriteria) d.Set("instance_platform", reservation.InstancePlatform) d.Set("instance_type", reservation.InstanceType) + d.Set("outpost_arn", reservation.OutpostArn) 
d.Set("owner_id", reservation.OwnerId) tags := keyvaluetags.Ec2KeyValueTags(reservation.Tags).IgnoreAws().IgnoreConfig(ignoreTagsConfig) diff --git a/aws/resource_aws_ec2_capacity_reservation_test.go b/aws/resource_aws_ec2_capacity_reservation_test.go index 099a5840796..54b4e7e348d 100644 --- a/aws/resource_aws_ec2_capacity_reservation_test.go +++ b/aws/resource_aws_ec2_capacity_reservation_test.go @@ -89,6 +89,7 @@ func TestAccAWSEc2CapacityReservation_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "instance_match_criteria", "open"), resource.TestCheckResourceAttr(resourceName, "instance_platform", "Linux/UNIX"), resource.TestCheckResourceAttr(resourceName, "instance_type", "t2.micro"), + resource.TestCheckResourceAttr(resourceName, "outpost_arn", ""), testAccCheckResourceAttrAccountID(resourceName, "owner_id"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), resource.TestCheckResourceAttr(resourceName, "tenancy", "default"), diff --git a/aws/resource_aws_ec2_managed_prefix_list.go b/aws/resource_aws_ec2_managed_prefix_list.go index 1f437358d6f..af36e110f2b 100644 --- a/aws/resource_aws_ec2_managed_prefix_list.go +++ b/aws/resource_aws_ec2_managed_prefix_list.go @@ -242,22 +242,32 @@ func resourceAwsEc2ManagedPrefixListUpdate(d *schema.ResourceData, meta interfac // one with a collection of all description-only removals and the // second one will add them all back. 
if len(input.AddEntries) > 0 && len(input.RemoveEntries) > 0 { - removalInput := &ec2.ModifyManagedPrefixListInput{ - CurrentVersion: input.CurrentVersion, - PrefixListId: aws.String(d.Id()), - } + descriptionOnlyRemovals := []*ec2.RemovePrefixListEntry{} + removals := []*ec2.RemovePrefixListEntry{} + + for _, removeEntry := range input.RemoveEntries { + inAddAndRemove := false - for idx, removeEntry := range input.RemoveEntries { for _, addEntry := range input.AddEntries { if aws.StringValue(addEntry.Cidr) == aws.StringValue(removeEntry.Cidr) { - removalInput.RemoveEntries = append(removalInput.RemoveEntries, input.RemoveEntries[idx]) - input.RemoveEntries = append(input.RemoveEntries[:idx], input.RemoveEntries[idx+1:]...) + inAddAndRemove = true + break } } + + if inAddAndRemove { + descriptionOnlyRemovals = append(descriptionOnlyRemovals, removeEntry) + } else { + removals = append(removals, removeEntry) + } } - if len(removalInput.RemoveEntries) > 0 { - _, err := conn.ModifyManagedPrefixList(removalInput) + if len(descriptionOnlyRemovals) > 0 { + _, err := conn.ModifyManagedPrefixList(&ec2.ModifyManagedPrefixListInput{ + CurrentVersion: input.CurrentVersion, + PrefixListId: aws.String(d.Id()), + RemoveEntries: descriptionOnlyRemovals, + }) if err != nil { return fmt.Errorf("error updating EC2 Managed Prefix List (%s): %w", d.Id(), err) @@ -274,12 +284,14 @@ func resourceAwsEc2ManagedPrefixListUpdate(d *schema.ResourceData, meta interfac } input.CurrentVersion = managedPrefixList.Version + } + if len(removals) > 0 { + input.RemoveEntries = removals + } else { // Prevent this error if RemoveEntries is list with no elements after removals: // InvalidRequest: The request received was invalid. 
- if len(input.RemoveEntries) == 0 { - input.RemoveEntries = nil - } + input.RemoveEntries = nil } } diff --git a/aws/resource_aws_ec2_managed_prefix_list_test.go b/aws/resource_aws_ec2_managed_prefix_list_test.go index f9f03499794..6f32c9dec78 100644 --- a/aws/resource_aws_ec2_managed_prefix_list_test.go +++ b/aws/resource_aws_ec2_managed_prefix_list_test.go @@ -170,11 +170,15 @@ func TestAccAwsEc2ManagedPrefixList_Entry_Description(t *testing.T) { ResourceName: resourceName, Check: resource.ComposeAggregateTestCheckFunc( testAccAwsEc2ManagedPrefixListExists(resourceName), - resource.TestCheckResourceAttr(resourceName, "entry.#", "1"), + resource.TestCheckResourceAttr(resourceName, "entry.#", "2"), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "entry.*", map[string]string{ "cidr": "1.0.0.0/8", "description": "description1", }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "entry.*", map[string]string{ + "cidr": "2.0.0.0/8", + "description": "description1", + }), resource.TestCheckResourceAttr(resourceName, "version", "1"), ), }, @@ -188,11 +192,15 @@ func TestAccAwsEc2ManagedPrefixList_Entry_Description(t *testing.T) { ResourceName: resourceName, Check: resource.ComposeAggregateTestCheckFunc( testAccAwsEc2ManagedPrefixListExists(resourceName), - resource.TestCheckResourceAttr(resourceName, "entry.#", "1"), + resource.TestCheckResourceAttr(resourceName, "entry.#", "2"), resource.TestCheckTypeSetElemNestedAttrs(resourceName, "entry.*", map[string]string{ "cidr": "1.0.0.0/8", "description": "description2", }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "entry.*", map[string]string{ + "cidr": "2.0.0.0/8", + "description": "description2", + }), resource.TestCheckResourceAttr(resourceName, "version", "3"), // description-only updates require two operations ), }, @@ -416,6 +424,11 @@ resource "aws_ec2_managed_prefix_list" "test" { cidr = "1.0.0.0/8" description = %[2]q } + + entry { + cidr = "2.0.0.0/8" + description = %[2]q + } } `, 
rName, description) } diff --git a/aws/resource_aws_ecs_service.go b/aws/resource_aws_ecs_service.go index 359a5a283d4..c6f3914d0c4 100644 --- a/aws/resource_aws_ecs_service.go +++ b/aws/resource_aws_ecs_service.go @@ -179,14 +179,11 @@ func resourceAwsEcsService() *schema.Resource { Computed: true, }, "launch_type": { - Type: schema.TypeString, - ForceNew: true, - Optional: true, - Computed: true, - ValidateFunc: validation.StringInSlice([]string{ - ecs.LaunchTypeEc2, - ecs.LaunchTypeFargate, - }, false), + Type: schema.TypeString, + ForceNew: true, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice(ecs.LaunchType_Values(), false), }, "load_balancer": { Type: schema.TypeSet, diff --git a/aws/resource_aws_eks_addon.go b/aws/resource_aws_eks_addon.go index e2ea7dafd86..cae762dc9a3 100644 --- a/aws/resource_aws_eks_addon.go +++ b/aws/resource_aws_eks_addon.go @@ -241,7 +241,9 @@ func resourceAwsEksAddonUpdate(ctx context.Context, d *schema.ResourceData, meta input.AddonVersion = aws.String(d.Get("addon_version").(string)) } - if d.HasChange("service_account_role_arn") { + // If service account role ARN is already provided, use it. Otherwise, the add-on uses + // permissions assigned to the node IAM role. 
+ if d.HasChange("service_account_role_arn") || d.Get("service_account_role_arn").(string) != "" { input.ServiceAccountRoleArn = aws.String(d.Get("service_account_role_arn").(string)) } diff --git a/aws/resource_aws_eks_node_group.go b/aws/resource_aws_eks_node_group.go index 7f4f9b7c62f..17001047f7a 100644 --- a/aws/resource_aws_eks_node_group.go +++ b/aws/resource_aws_eks_node_group.go @@ -3,6 +3,7 @@ package aws import ( "fmt" "log" + "reflect" "strings" "time" @@ -219,6 +220,30 @@ func resourceAwsEksNodeGroup() *schema.Resource { }, "tags": tagsSchema(), "tags_all": tagsSchemaComputed(), + "taint": { + Type: schema.TypeSet, + Optional: true, + MaxItems: 50, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "key": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 63), + }, + "value": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(0, 63), + }, + "effect": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(eks.TaintEffect_Values(), false), + }, + }, + }, + }, "version": { Type: schema.TypeString, Optional: true, @@ -283,6 +308,10 @@ func resourceAwsEksNodeGroupCreate(d *schema.ResourceData, meta interface{}) err input.Tags = tags.IgnoreAws().EksTags() } + if v, ok := d.GetOk("taint"); ok && v.(*schema.Set).Len() > 0 { + input.Taints = expandEksTaints(v.(*schema.Set).List()) + } + if v, ok := d.GetOk("version"); ok { input.Version = aws.String(v.(string)) } @@ -396,6 +425,10 @@ func resourceAwsEksNodeGroupRead(d *schema.ResourceData, meta interface{}) error return fmt.Errorf("error setting tags_all: %w", err) } + if err := d.Set("taint", flattenEksTaints(nodeGroup.Taints)); err != nil { + return fmt.Errorf("error setting taint: %w", err) + } + d.Set("version", nodeGroup.Version) return nil @@ -410,7 +443,7 @@ func resourceAwsEksNodeGroupUpdate(d *schema.ResourceData, meta interface{}) err return err } - if 
d.HasChanges("labels", "scaling_config") { + if d.HasChanges("labels", "scaling_config", "taint") { oldLabelsRaw, newLabelsRaw := d.GetChange("labels") input := &eks.UpdateNodegroupConfigInput{ @@ -424,6 +457,9 @@ func resourceAwsEksNodeGroupUpdate(d *schema.ResourceData, meta interface{}) err input.ScalingConfig = expandEksNodegroupScalingConfig(v) } + oldTaintsRaw, newTaintsRaw := d.GetChange("taint") + input.Taints = expandEksUpdateTaintsPayload(oldTaintsRaw.(*schema.Set).List(), newTaintsRaw.(*schema.Set).List()) + output, err := conn.UpdateNodegroupConfig(input) if err != nil { @@ -585,6 +621,108 @@ func expandEksNodegroupScalingConfig(l []interface{}) *eks.NodegroupScalingConfi return config } +func expandEksTaints(l []interface{}) []*eks.Taint { + if len(l) == 0 { + return nil + } + + var taints []*eks.Taint + + for _, raw := range l { + t, ok := raw.(map[string]interface{}) + + if !ok { + continue + } + + taint := &eks.Taint{} + + if k, ok := t["key"].(string); ok { + taint.Key = aws.String(k) + } + + if v, ok := t["value"].(string); ok { + taint.Value = aws.String(v) + } + + if e, ok := t["effect"].(string); ok { + taint.Effect = aws.String(e) + } + + taints = append(taints, taint) + } + + return taints +} + +func expandEksUpdateTaintsPayload(oldTaintsRaw, newTaintsRaw []interface{}) *eks.UpdateTaintsPayload { + oldTaints := expandEksTaints(oldTaintsRaw) + newTaints := expandEksTaints(newTaintsRaw) + + var removedTaints []*eks.Taint + for _, ot := range oldTaints { + if ot == nil { + continue + } + + removed := true + for _, nt := range newTaints { + if nt == nil { + continue + } + + // if both taint.key and taint.effect are the same, we don't need to remove it. 
+ if aws.StringValue(nt.Key) == aws.StringValue(ot.Key) && + aws.StringValue(nt.Effect) == aws.StringValue(ot.Effect) { + removed = false + break + } + } + + if removed { + removedTaints = append(removedTaints, ot) + } + } + + var updatedTaints []*eks.Taint + for _, nt := range newTaints { + if nt == nil { + continue + } + + updated := true + for _, ot := range oldTaints { + if ot == nil { + continue + } + + if reflect.DeepEqual(nt, ot) { + updated = false + break + } + } + if updated { + updatedTaints = append(updatedTaints, nt) + } + } + + if len(removedTaints) == 0 && len(updatedTaints) == 0 { + return nil + } + + updateTaintsPayload := &eks.UpdateTaintsPayload{} + + if len(removedTaints) > 0 { + updateTaintsPayload.RemoveTaints = removedTaints + } + + if len(updatedTaints) > 0 { + updateTaintsPayload.AddOrUpdateTaints = updatedTaints + } + + return updateTaintsPayload +} + func expandEksRemoteAccessConfig(l []interface{}) *eks.RemoteAccessConfig { if len(l) == 0 || l[0] == nil { return nil @@ -710,6 +848,28 @@ func flattenEksRemoteAccessConfig(config *eks.RemoteAccessConfig) []map[string]i return []map[string]interface{}{m} } +func flattenEksTaints(taints []*eks.Taint) []interface{} { + if len(taints) == 0 { + return nil + } + + var results []interface{} + + for _, taint := range taints { + if taint == nil { + continue + } + + t := make(map[string]interface{}) + t["key"] = aws.StringValue(taint.Key) + t["value"] = aws.StringValue(taint.Value) + t["effect"] = aws.StringValue(taint.Effect) + + results = append(results, t) + } + return results +} + func refreshEksNodeGroupStatus(conn *eks.EKS, clusterName string, nodeGroupName string) resource.StateRefreshFunc { return func() (interface{}, string, error) { input := &eks.DescribeNodegroupInput{ diff --git a/aws/resource_aws_eks_node_group_test.go b/aws/resource_aws_eks_node_group_test.go index b381cba9598..55a1ad905c5 100644 --- a/aws/resource_aws_eks_node_group_test.go +++ b/aws/resource_aws_eks_node_group_test.go
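The taint reconciliation in `expandEksUpdateTaintsPayload` above compares the old and new `taint` sets: a taint is removed only when no new taint shares its key and effect, and added or updated when it is not identical to any old taint. A minimal standalone sketch of that comparison, using a plain struct in place of the SDK's `*eks.Taint` (the names here are illustrative, not the provider's):

```go
package main

import "fmt"

// taint mirrors the three fields the provider tracks per node group taint.
type taint struct {
	Key, Value, Effect string
}

// diffTaints returns the taints to remove (present in old but whose
// key+effect pair no longer appears in new) and the taints to add or
// update (present in new but not identical to any old taint).
func diffTaints(oldTaints, newTaints []taint) (removed, addedOrUpdated []taint) {
	for _, ot := range oldTaints {
		keep := false
		for _, nt := range newTaints {
			if nt.Key == ot.Key && nt.Effect == ot.Effect {
				keep = true
				break
			}
		}
		if !keep {
			removed = append(removed, ot)
		}
	}
	for _, nt := range newTaints {
		changed := true
		for _, ot := range oldTaints {
			if nt == ot { // all three fields match: nothing to update
				changed = false
				break
			}
		}
		if changed {
			addedOrUpdated = append(addedOrUpdated, nt)
		}
	}
	return removed, addedOrUpdated
}

func main() {
	removed, updated := diffTaints(
		[]taint{{"key1", "value1", "NO_SCHEDULE"}, {"key2", "value2", "NO_SCHEDULE"}},
		[]taint{{"key1", "value1updated", "NO_SCHEDULE"}},
	)
	fmt.Println(removed) // key2: no new taint shares its key and effect
	fmt.Println(updated) // key1: same key and effect, but the value changed
}
```

Keying removals on key+effect rather than full equality is what lets a value-only change go through `AddOrUpdateTaints` without a matching `RemoveTaints` entry.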
@@ -126,6 +126,7 @@ func TestAccAWSEksNodeGroup_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "status", eks.NodegroupStatusActive), resource.TestCheckResourceAttr(resourceName, "subnet_ids.#", "2"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttr(resourceName, "taint.#", "0"), resource.TestCheckResourceAttrPair(resourceName, "version", eksClusterResourceName, "version"), ), }, @@ -818,6 +819,69 @@ func TestAccAWSEksNodeGroup_Tags(t *testing.T) { }) } +func TestAccAWSEksNodeGroup_Taints(t *testing.T) { + var nodeGroup1 eks.Nodegroup + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_eks_node_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSEks(t) }, + ErrorCheck: testAccErrorCheck(t, eks.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSEksNodeGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSEksNodeGroupConfigTaints1(rName, "key1", "value1", "NO_SCHEDULE"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEksNodeGroupExists(resourceName, &nodeGroup1), + resource.TestCheckResourceAttr(resourceName, "taint.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "taint.*", map[string]string{ + "key": "key1", + "value": "value1", + "effect": "NO_SCHEDULE", + }), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSEksNodeGroupConfigTaints2(rName, + "key1", "value1updated", "NO_EXECUTE", + "key2", "value2", "NO_SCHEDULE"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEksNodeGroupExists(resourceName, &nodeGroup1), + resource.TestCheckResourceAttr(resourceName, "taint.#", "2"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "taint.*", map[string]string{ + "key": "key1", + "value": "value1updated", + "effect": "NO_EXECUTE", + }), + 
resource.TestCheckTypeSetElemNestedAttrs(resourceName, "taint.*", map[string]string{ + "key": "key2", + "value": "value2", + "effect": "NO_SCHEDULE", + }), + ), + }, + { + Config: testAccAWSEksNodeGroupConfigTaints1(rName, "key2", "value2", "NO_SCHEDULE"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSEksNodeGroupExists(resourceName, &nodeGroup1), + resource.TestCheckResourceAttr(resourceName, "taint.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "taint.*", map[string]string{ + "key": "key2", + "value": "value2", + "effect": "NO_SCHEDULE", + }), + ), + }, + }, + }) +} + func TestAccAWSEksNodeGroup_Version(t *testing.T) { var nodeGroup1, nodeGroup2 eks.Nodegroup rName := acctest.RandomWithPrefix("tf-acc-test") @@ -1900,6 +1964,70 @@ resource "aws_eks_node_group" "test" { `, rName, tagKey1, tagValue1, tagKey2, tagValue2)) } +func testAccAWSEksNodeGroupConfigTaints1(rName, taintKey1, taintValue1, taintEffect1 string) string { + return composeConfig(testAccAWSEksNodeGroupConfigBase(rName), fmt.Sprintf(` +resource "aws_eks_node_group" "test" { + cluster_name = aws_eks_cluster.test.name + node_group_name = %[1]q + node_role_arn = aws_iam_role.node.arn + subnet_ids = aws_subnet.test[*].id + + taint { + key = %[2]q + value = %[3]q + effect = %[4]q + } + + scaling_config { + desired_size = 1 + max_size = 1 + min_size = 1 + } + + depends_on = [ + aws_iam_role_policy_attachment.node-AmazonEKSWorkerNodePolicy, + aws_iam_role_policy_attachment.node-AmazonEKS_CNI_Policy, + aws_iam_role_policy_attachment.node-AmazonEC2ContainerRegistryReadOnly, + ] +} +`, rName, taintKey1, taintValue1, taintEffect1)) +} + +func testAccAWSEksNodeGroupConfigTaints2(rName, taintKey1, taintValue1, taintEffect1, taintKey2, taintValue2, taintEffect2 string) string { + return composeConfig(testAccAWSEksNodeGroupConfigBase(rName), fmt.Sprintf(` +resource "aws_eks_node_group" "test" { + cluster_name = aws_eks_cluster.test.name + node_group_name = %[1]q + node_role_arn = 
aws_iam_role.node.arn + subnet_ids = aws_subnet.test[*].id + + taint { + key = %[2]q + value = %[3]q + effect = %[4]q + } + + taint { + key = %[5]q + value = %[6]q + effect = %[7]q + } + + scaling_config { + desired_size = 1 + max_size = 1 + min_size = 1 + } + + depends_on = [ + aws_iam_role_policy_attachment.node-AmazonEKSWorkerNodePolicy, + aws_iam_role_policy_attachment.node-AmazonEKS_CNI_Policy, + aws_iam_role_policy_attachment.node-AmazonEC2ContainerRegistryReadOnly, + ] +} +`, rName, taintKey1, taintValue1, taintEffect1, taintKey2, taintValue2, taintEffect2)) +} + func testAccAWSEksNodeGroupConfigVersion(rName, version string) string { return composeConfig(testAccAWSEksNodeGroupConfigBaseVersion(rName, version), fmt.Sprintf(` resource "aws_eks_node_group" "test" { diff --git a/aws/resource_aws_elasticache_cluster.go b/aws/resource_aws_elasticache_cluster.go index df67db37d2e..2988a3e12f5 100644 --- a/aws/resource_aws_elasticache_cluster.go +++ b/aws/resource_aws_elasticache_cluster.go @@ -268,6 +268,7 @@ func resourceAwsElasticacheCluster() *schema.Resource { CustomizeDiffValidateClusterNumCacheNodes, CustomizeDiffClusterMemcachedNodeType, CustomizeDiffValidateClusterMemcachedSnapshotIdentifier, + SetTagsDiff, ), } } diff --git a/aws/resource_aws_elasticache_parameter_group.go b/aws/resource_aws_elasticache_parameter_group.go index 4cb198f79cd..e3591da50ff 100644 --- a/aws/resource_aws_elasticache_parameter_group.go +++ b/aws/resource_aws_elasticache_parameter_group.go @@ -14,6 +14,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/terraform-providers/terraform-provider-aws/aws/internal/hashcode" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/keyvaluetags" ) func resourceAwsElasticacheParameterGroup() *schema.Resource { @@ -45,6 +46,10 @@ func resourceAwsElasticacheParameterGroup() *schema.Resource { ForceNew: true, Default: "Managed by 
Terraform", }, + "arn": { + Type: schema.TypeString, + Computed: true, + }, "parameter": { Type: schema.TypeSet, Optional: true, @@ -62,17 +67,23 @@ func resourceAwsElasticacheParameterGroup() *schema.Resource { }, Set: resourceAwsElasticacheParameterHash, }, + "tags": tagsSchema(), + "tags_all": tagsSchemaComputed(), }, + CustomizeDiff: SetTagsDiff, } } func resourceAwsElasticacheParameterGroupCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).elasticacheconn + defaultTagsConfig := meta.(*AWSClient).DefaultTagsConfig + tags := defaultTagsConfig.MergeTags(keyvaluetags.New(d.Get("tags").(map[string]interface{}))) createOpts := elasticache.CreateCacheParameterGroupInput{ CacheParameterGroupName: aws.String(d.Get("name").(string)), CacheParameterGroupFamily: aws.String(d.Get("family").(string)), Description: aws.String(d.Get("description").(string)), + Tags: tags.IgnoreAws().ElasticacheTags(), } log.Printf("[DEBUG] Create ElastiCache Parameter Group: %#v", createOpts) @@ -82,6 +93,7 @@ func resourceAwsElasticacheParameterGroupCreate(d *schema.ResourceData, meta int } d.SetId(aws.StringValue(resp.CacheParameterGroup.CacheParameterGroupName)) + d.Set("arn", resp.CacheParameterGroup.ARN) log.Printf("[INFO] ElastiCache Parameter Group ID: %s", d.Id()) return resourceAwsElasticacheParameterGroupUpdate(d, meta) @@ -89,6 +101,8 @@ func resourceAwsElasticacheParameterGroupCreate(d *schema.ResourceData, meta int func resourceAwsElasticacheParameterGroupRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).elasticacheconn + defaultTagsConfig := meta.(*AWSClient).DefaultTagsConfig + ignoreTagsConfig := meta.(*AWSClient).IgnoreTagsConfig describeOpts := elasticache.DescribeCacheParameterGroupsInput{ CacheParameterGroupName: aws.String(d.Id()), @@ -107,6 +121,24 @@ func resourceAwsElasticacheParameterGroupRead(d *schema.ResourceData, meta inter d.Set("name", describeResp.CacheParameterGroups[0].CacheParameterGroupName) 
d.Set("family", describeResp.CacheParameterGroups[0].CacheParameterGroupFamily) d.Set("description", describeResp.CacheParameterGroups[0].Description) + d.Set("arn", describeResp.CacheParameterGroups[0].ARN) + + tags, err := keyvaluetags.ElasticacheListTags(conn, aws.StringValue(describeResp.CacheParameterGroups[0].ARN)) + + if err != nil { + return fmt.Errorf("error listing tags for ElastiCache Parameter Group (%s): %w", d.Id(), err) + } + + tags = tags.IgnoreAws().IgnoreConfig(ignoreTagsConfig) + + //lintignore:AWSR002 + if err := d.Set("tags", tags.RemoveDefaultConfig(defaultTagsConfig).Map()); err != nil { + return fmt.Errorf("error setting tags: %w", err) + } + + if err := d.Set("tags_all", tags.Map()); err != nil { + return fmt.Errorf("error setting tags_all: %w", err) + } // Only include user customized parameters as there's hundreds of system/default ones describeParametersOpts := elasticache.DescribeCacheParametersInput{ @@ -127,6 +159,14 @@ func resourceAwsElasticacheParameterGroupRead(d *schema.ResourceData, meta inter func resourceAwsElasticacheParameterGroupUpdate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).elasticacheconn + if d.HasChange("tags_all") { + o, n := d.GetChange("tags_all") + + if err := keyvaluetags.ElasticacheUpdateTags(conn, d.Get("arn").(string), o, n); err != nil { + return fmt.Errorf("error updating ElastiCache Parameter Group (%s) tags: %w", d.Get("arn").(string), err) + } + } + if d.HasChange("parameter") { o, n := d.GetChange("parameter") toRemove, toAdd := elastiCacheParameterChanges(o, n) diff --git a/aws/resource_aws_elasticache_parameter_group_test.go b/aws/resource_aws_elasticache_parameter_group_test.go index 5c6c520d6ef..4eb7ef93fb2 100644 --- a/aws/resource_aws_elasticache_parameter_group_test.go +++ b/aws/resource_aws_elasticache_parameter_group_test.go @@ -87,6 +87,7 @@ func TestAccAWSElasticacheParameterGroup_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, 
"description", "Managed by Terraform"), resource.TestCheckResourceAttr(resourceName, "family", "redis2.8"), resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), ), }, { @@ -413,6 +414,50 @@ func TestAccAWSElasticacheParameterGroup_Description(t *testing.T) { }) } +func TestAccAWSElasticacheParameterGroup_Tags(t *testing.T) { + var cacheParameterGroup1 elasticache.CacheParameterGroup + resourceName := "aws_elasticache_parameter_group.test" + rName := fmt.Sprintf("parameter-group-test-terraform-%d", acctest.RandInt()) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ErrorCheck: testAccErrorCheck(t, elasticache.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSElasticacheParameterGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSElasticacheParameterGroupConfigTags1(rName, "redis2.8", "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheParameterGroupExists(resourceName, &cacheParameterGroup1), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + Config: testAccAWSElasticacheParameterGroupConfigTags2(rName, "redis2.8", "key1", "updatedvalue1", "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheParameterGroupExists(resourceName, &cacheParameterGroup1), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "updatedvalue1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccAWSElasticacheParameterGroupConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSElasticacheParameterGroupExists(resourceName, &cacheParameterGroup1), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + ), + }, + { + ResourceName: 
resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func testAccCheckAWSElasticacheParameterGroupDestroy(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).elasticacheconn @@ -545,6 +590,33 @@ resource "aws_elasticache_parameter_group" "test" { `, family, rName, parameterName1, parameterValue1, parameterName2, parameterValue2) } +func testAccAWSElasticacheParameterGroupConfigTags1(rName, family, tagName1, tagValue1 string) string { + return fmt.Sprintf(` +resource "aws_elasticache_parameter_group" "test" { + family = %[1]q + name = %[2]q + + tags = { + %[3]s = %[4]q + } +} +`, family, rName, tagName1, tagValue1) +} + +func testAccAWSElasticacheParameterGroupConfigTags2(rName, family, tagName1, tagValue1, tagName2, tagValue2 string) string { + return fmt.Sprintf(` +resource "aws_elasticache_parameter_group" "test" { + family = %[1]q + name = %[2]q + + tags = { + %[3]s = %[4]q + %[5]s = %[6]q + } +} +`, family, rName, tagName1, tagValue1, tagName2, tagValue2) +} + func TestFlattenElasticacheParameters(t *testing.T) { cases := []struct { Input []*elasticache.Parameter diff --git a/aws/resource_aws_fsx_lustre_file_system.go b/aws/resource_aws_fsx_lustre_file_system.go index 59a2607e1e0..3704bb313ee 100644 --- a/aws/resource_aws_fsx_lustre_file_system.go +++ b/aws/resource_aws_fsx_lustre_file_system.go @@ -1,6 +1,7 @@ package aws import ( + "context" "fmt" "log" "regexp" @@ -8,6 +9,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/fsx" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -89,7 +91,6 @@ func resourceAwsFsxLustreFileSystem() *schema.Resource { "storage_capacity": { Type: schema.TypeInt, Required: true, - ForceNew: true, ValidateFunc: validation.IntAtLeast(1200), }, 
"subnet_ids": { @@ -183,8 +184,25 @@ func resourceAwsFsxLustreFileSystem() *schema.Resource { }, }, - CustomizeDiff: SetTagsDiff, + CustomizeDiff: customdiff.Sequence( + SetTagsDiff, + resourceFsxLustreFileSystemSchemaCustomizeDiff, + ), + } +} + +func resourceFsxLustreFileSystemSchemaCustomizeDiff(_ context.Context, d *schema.ResourceDiff, meta interface{}) error { + // Force a new resource if the new storage capacity is less than the old one, + // or if the deployment type is SCRATCH_1 (whose storage capacity cannot be updated in place) + if d.HasChange("storage_capacity") { + o, n := d.GetChange("storage_capacity") + if n.(int) < o.(int) || d.Get("deployment_type").(string) == fsx.LustreDeploymentTypeScratch1 { + if err := d.ForceNew("storage_capacity"); err != nil { + return err + } + } } + + return nil } + func resourceAwsFsxLustreFileSystemCreate(d *schema.ResourceData, meta interface{}) error { @@ -310,6 +328,11 @@ func resourceAwsFsxLustreFileSystemUpdate(d *schema.ResourceData, meta interface requestUpdate = true } + if d.HasChange("storage_capacity") { + input.StorageCapacity = aws.Int64(int64(d.Get("storage_capacity").(int))) + requestUpdate = true + } + if requestUpdate { _, err := conn.UpdateFileSystem(input) if err != nil { diff --git a/aws/resource_aws_fsx_lustre_file_system_test.go b/aws/resource_aws_fsx_lustre_file_system_test.go index 7f6c52d88e6..119ff6a9af1 100644 --- a/aws/resource_aws_fsx_lustre_file_system_test.go +++ b/aws/resource_aws_fsx_lustre_file_system_test.go @@ -323,6 +323,49 @@ func TestAccAWSFsxLustreFileSystem_StorageCapacity(t *testing.T) { }) } +func TestAccAWSFsxLustreFileSystem_StorageCapacityUpdate(t *testing.T) { + var filesystem1, filesystem2, filesystem3 fsx.FileSystem + resourceName := "aws_fsx_lustre_file_system.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPartitionHasServicePreCheck(fsx.EndpointsID, t) }, + ErrorCheck: testAccErrorCheck(t, fsx.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckFsxLustreFileSystemDestroy, + 
Steps: []resource.TestStep{ + { + Config: testAccAwsFsxLustreFileSystemConfigStorageCapacityScratch2(7200), + Check: resource.ComposeTestCheckFunc( + testAccCheckFsxLustreFileSystemExists(resourceName, &filesystem1), + resource.TestCheckResourceAttr(resourceName, "storage_capacity", "7200"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"security_group_ids"}, + }, + { + Config: testAccAwsFsxLustreFileSystemConfigStorageCapacityScratch2(1200), + Check: resource.ComposeTestCheckFunc( + testAccCheckFsxLustreFileSystemExists(resourceName, &filesystem2), + testAccCheckFsxLustreFileSystemRecreated(&filesystem1, &filesystem2), + resource.TestCheckResourceAttr(resourceName, "storage_capacity", "1200"), + ), + }, + { + Config: testAccAwsFsxLustreFileSystemConfigStorageCapacityScratch2(7200), + Check: resource.ComposeTestCheckFunc( + testAccCheckFsxLustreFileSystemExists(resourceName, &filesystem3), + testAccCheckFsxLustreFileSystemNotRecreated(&filesystem2, &filesystem3), + resource.TestCheckResourceAttr(resourceName, "storage_capacity", "7200"), + ), + }, + }, + }) +} + func TestAccAWSFsxLustreFileSystem_Tags(t *testing.T) { var filesystem1, filesystem2, filesystem3 fsx.FileSystem resourceName := "aws_fsx_lustre_file_system.test" @@ -926,6 +969,16 @@ resource "aws_fsx_lustre_file_system" "test" { `, storageCapacity)) } +func testAccAwsFsxLustreFileSystemConfigStorageCapacityScratch2(storageCapacity int) string { + return composeConfig(testAccAwsFsxLustreFileSystemConfigBase(), fmt.Sprintf(` +resource "aws_fsx_lustre_file_system" "test" { + storage_capacity = %[1]d + subnet_ids = [aws_subnet.test1.id] + deployment_type = "SCRATCH_2" +} +`, storageCapacity)) +} + func testAccAwsFsxLustreFileSystemConfigSubnetIds1() string { return composeConfig(testAccAwsFsxLustreFileSystemConfigBase(), ` resource "aws_fsx_lustre_file_system" "test" { diff --git a/aws/resource_aws_glue_connection.go 
b/aws/resource_aws_glue_connection.go index 229fd7d375d..5fccb888fb0 100644 --- a/aws/resource_aws_glue_connection.go +++ b/aws/resource_aws_glue_connection.go @@ -35,7 +35,7 @@ func resourceAwsGlueConnection() *schema.Resource { }, "connection_properties": { Type: schema.TypeMap, - Required: true, + Optional: true, Sensitive: true, ValidateFunc: MapKeyInSlice(glue.ConnectionPropertyKey_Values(), false), Elem: &schema.Schema{Type: schema.TypeString}, @@ -243,8 +243,10 @@ func deleteGlueConnection(conn *glue.Glue, catalogID, connectionName string) err func expandGlueConnectionInput(d *schema.ResourceData) *glue.ConnectionInput { connectionProperties := make(map[string]string) - for k, v := range d.Get("connection_properties").(map[string]interface{}) { - connectionProperties[k] = v.(string) + if val, ok := d.GetOkExists("connection_properties"); ok { + for k, v := range val.(map[string]interface{}) { + connectionProperties[k] = v.(string) + } } connectionInput := &glue.ConnectionInput{ diff --git a/aws/resource_aws_glue_connection_test.go b/aws/resource_aws_glue_connection_test.go index 916a63a7980..8c25778feac 100644 --- a/aws/resource_aws_glue_connection_test.go +++ b/aws/resource_aws_glue_connection_test.go @@ -157,6 +157,40 @@ func TestAccAWSGlueConnection_Kafka(t *testing.T) { }) } +func TestAccAWSGlueConnection_Network(t *testing.T) { + var connection glue.Connection + + rName := fmt.Sprintf("tf-acc-test-%s", acctest.RandString(5)) + resourceName := "aws_glue_connection.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ErrorCheck: testAccErrorCheck(t, glue.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSGlueConnectionDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSGlueConnectionConfig_Network(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSGlueConnectionExists(resourceName, &connection), + resource.TestCheckResourceAttr(resourceName, 
"connection_properties.%", "0"), + resource.TestCheckResourceAttr(resourceName, "connection_type", "NETWORK"), + resource.TestCheckResourceAttr(resourceName, "match_criteria.#", "0"), + resource.TestCheckResourceAttr(resourceName, "physical_connection_requirements.#", "1"), + resource.TestCheckResourceAttrSet(resourceName, "physical_connection_requirements.0.availability_zone"), + resource.TestCheckResourceAttr(resourceName, "physical_connection_requirements.0.security_group_id_list.#", "1"), + resource.TestCheckResourceAttrSet(resourceName, "physical_connection_requirements.0.subnet_id"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccAWSGlueConnection_Description(t *testing.T) { var connection glue.Connection @@ -564,3 +598,57 @@ resource "aws_glue_connection" "test" { } `, rName) } + +func testAccAWSGlueConnectionConfig_Network(rName string) string { + return fmt.Sprintf(` +data "aws_availability_zones" "available" { + state = "available" + + filter { + name = "opt-in-status" + values = ["opt-in-not-required"] + } +} + +resource "aws_vpc" "test" { + cidr_block = "10.0.0.0/16" + + tags = { + Name = "terraform-testacc-glue-connection-network" + } +} + +resource "aws_subnet" "test" { + availability_zone = data.aws_availability_zones.available.names[0] + cidr_block = "10.0.0.0/24" + vpc_id = aws_vpc.test.id + + tags = { + Name = "terraform-testacc-glue-connection-network" + } +} + +resource "aws_security_group" "test" { + name = "%[1]s" + vpc_id = aws_vpc.test.id + + ingress { + protocol = "tcp" + self = true + from_port = 1 + to_port = 65535 + } +} + +resource "aws_glue_connection" "test" { + connection_type = "NETWORK" + name = "%[1]s" + + physical_connection_requirements { + availability_zone = aws_subnet.test.availability_zone + security_group_id_list = [aws_security_group.test.id] + subnet_id = aws_subnet.test.id + } +} +`, rName) +} diff --git a/aws/resource_aws_iam_access_key.go 
b/aws/resource_aws_iam_access_key.go index d8bcd7d5054..0b396760eb5 100644 --- a/aws/resource_aws_iam_access_key.go +++ b/aws/resource_aws_iam_access_key.go @@ -58,7 +58,7 @@ func resourceAwsIamAccessKey() *schema.Resource { "status": { Type: schema.TypeString, Optional: true, - Computed: true, + Default: "Active", ValidateFunc: validation.StringInSlice([]string{ iam.StatusTypeActive, iam.StatusTypeInactive, diff --git a/aws/resource_aws_instance_test.go b/aws/resource_aws_instance_test.go index c8b342f4e26..f5cc1067d4e 100644 --- a/aws/resource_aws_instance_test.go +++ b/aws/resource_aws_instance_test.go @@ -38,6 +38,7 @@ func init() { func testAccErrorCheckSkipEC2(t *testing.T) resource.ErrorCheckFunc { return testAccErrorCheckSkipMessagesContaining(t, "VolumeTypeNotAvailableInRegion", + "Invalid value specified for Phase", ) } diff --git a/aws/resource_aws_lambda_event_source_mapping.go b/aws/resource_aws_lambda_event_source_mapping.go index c32af92e210..dc96af1f1be 100644 --- a/aws/resource_aws_lambda_event_source_mapping.go +++ b/aws/resource_aws_lambda_event_source_mapping.go @@ -3,14 +3,14 @@ package aws import ( "fmt" "log" + "reflect" + "sort" + "strings" "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/arn" - "github.com/aws/aws-sdk-go/service/dynamodb" - "github.com/aws/aws-sdk-go/service/kinesis" "github.com/aws/aws-sdk-go/service/lambda" - "github.com/aws/aws-sdk-go/service/sqs" "github.com/hashicorp/aws-sdk-go-base/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -32,41 +32,6 @@ func resourceAwsLambdaEventSourceMapping() *schema.Resource { }, Schema: map[string]*schema.Schema{ - "event_source_arn": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - }, - "function_name": { - Type: schema.TypeString, - Required: true, - DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { - // Using function name or ARN should 
not be shown as a diff. - // Try to convert the old and new values from ARN to function name - oldFunctionName, oldFunctionNameErr := getFunctionNameFromLambdaArn(old) - newFunctionName, newFunctionNameErr := getFunctionNameFromLambdaArn(new) - return (oldFunctionName == new && oldFunctionNameErr == nil) || (newFunctionName == old && newFunctionNameErr == nil) - }, - }, - "starting_position": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice(lambda.EventSourcePosition_Values(), false), - }, - "starting_position_timestamp": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - ValidateFunc: validation.IsRFC3339Time, - }, - "topics": { - Type: schema.TypeSet, - Optional: true, - ForceNew: true, - Elem: &schema.Schema{Type: schema.TypeString}, - Set: schema.HashString, - }, "batch_size": { Type: schema.TypeInt, Optional: true, @@ -80,62 +45,37 @@ func resourceAwsLambdaEventSourceMapping() *schema.Resource { return false } - eventSourceARN, err := arn.Parse(d.Get("event_source_arn").(string)) - if err != nil { - return false - } - switch eventSourceARN.Service { - // kafka.ServiceName is "Kafka". 
- case dynamodb.ServiceName, kinesis.ServiceName, "kafka": - if old == "100" { - return true - } - case sqs.ServiceName: - if old == "10" { - return true + var serviceName string + if v, ok := d.GetOk("event_source_arn"); ok { + eventSourceARN, err := arn.Parse(v.(string)) + if err != nil { + return false } + + serviceName = eventSourceARN.Service + } else if _, ok := d.GetOk("self_managed_event_source"); ok { + serviceName = "kafka" + } + + switch serviceName { + case "dynamodb", "kinesis", "kafka", "mq": + return old == "100" + case "sqs": + return old == "10" } - return false + + return old == new }, }, - "enabled": { - Type: schema.TypeBool, - Optional: true, - Default: true, - }, - "maximum_batching_window_in_seconds": { - Type: schema.TypeInt, - Optional: true, - }, - "parallelization_factor": { - Type: schema.TypeInt, - Optional: true, - ValidateFunc: validation.IntBetween(1, 10), - Computed: true, - }, - "maximum_retry_attempts": { - Type: schema.TypeInt, - Optional: true, - Computed: true, - ValidateFunc: validation.IntBetween(-1, 10_000), - }, - "maximum_record_age_in_seconds": { - Type: schema.TypeInt, - Optional: true, - Computed: true, - ValidateFunc: validation.Any( - validation.IntInSlice([]int{-1}), - validation.IntBetween(60, 604_800), - ), - }, + "bisect_batch_on_function_error": { Type: schema.TypeBool, Optional: true, }, + "destination_config": { Type: schema.TypeList, Optional: true, - MinItems: 1, MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ @@ -155,27 +95,187 @@ func resourceAwsLambdaEventSourceMapping() *schema.Resource { }, }, }, + DiffSuppressFunc: suppressMissingOptionalConfigurationBlock, }, + + "enabled": { + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + + "event_source_arn": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ExactlyOneOf: []string{"event_source_arn", "self_managed_event_source"}, + }, + "function_arn": { Type: schema.TypeString, Computed: true, }, + + 
"function_name": { + Type: schema.TypeString, + Required: true, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + // Using function name or ARN should not be shown as a diff. + // Try to convert the old and new values from ARN to function name + oldFunctionName, oldFunctionNameErr := getFunctionNameFromLambdaArn(old) + newFunctionName, newFunctionNameErr := getFunctionNameFromLambdaArn(new) + return (oldFunctionName == new && oldFunctionNameErr == nil) || (newFunctionName == old && newFunctionNameErr == nil) + }, + }, + + "function_response_types": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice(lambda.FunctionResponseType_Values(), false), + }, + }, + "last_modified": { Type: schema.TypeString, Computed: true, }, + "last_processing_result": { Type: schema.TypeString, Computed: true, }, + + "maximum_batching_window_in_seconds": { + Type: schema.TypeInt, + Optional: true, + }, + + "maximum_record_age_in_seconds": { + Type: schema.TypeInt, + Optional: true, + Computed: true, + ValidateFunc: validation.Any( + validation.IntInSlice([]int{-1}), + validation.IntBetween(60, 604_800), + ), + }, + + "maximum_retry_attempts": { + Type: schema.TypeInt, + Optional: true, + Computed: true, + ValidateFunc: validation.IntBetween(-1, 10_000), + }, + + "parallelization_factor": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntBetween(1, 10), + Computed: true, + }, + + "queues": { + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringLenBetween(1, 1000), + }, + }, + + "self_managed_event_source": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "endpoints": { + Type: schema.TypeMap, + Required: true, + ForceNew: true, + Elem: &schema.Schema{Type: 
schema.TypeString}, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if k == "self_managed_event_source.0.endpoints.KAFKA_BOOTSTRAP_SERVERS" { + // AWS returns the bootstrap brokers in sorted order. + olds := strings.Split(old, ",") + sort.Strings(olds) + news := strings.Split(new, ",") + sort.Strings(news) + + return reflect.DeepEqual(olds, news) + } + + return old == new + }, + }, + }, + }, + ExactlyOneOf: []string{"event_source_arn", "self_managed_event_source"}, + }, + + "source_access_configuration": { + Type: schema.TypeSet, + Optional: true, + MaxItems: 22, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "type": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(lambda.SourceAccessType_Values(), false), + }, + "uri": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + + "starting_position": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice(lambda.EventSourcePosition_Values(), false), + }, + + "starting_position_timestamp": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.IsRFC3339Time, + }, + "state": { Type: schema.TypeString, Computed: true, }, + "state_transition_reason": { Type: schema.TypeString, Computed: true, }, + + "topics": { + Type: schema.TypeSet, + Optional: true, + ForceNew: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringLenBetween(1, 249), + }, + }, + + "tumbling_window_in_seconds": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntBetween(0, 900), + }, + "uuid": { Type: schema.TypeString, Computed: true, @@ -184,16 +284,17 @@ func resourceAwsLambdaEventSourceMapping() *schema.Resource { } } -// resourceAwsLambdaEventSourceMappingCreate maps to: -// CreateEventSourceMapping in the API / SDK func resourceAwsLambdaEventSourceMappingCreate(d *schema.ResourceData, meta interface{}) 
error { conn := meta.(*AWSClient).lambdaconn + functionName := d.Get("function_name").(string) input := &lambda.CreateEventSourceMappingInput{ Enabled: aws.Bool(d.Get("enabled").(bool)), - FunctionName: aws.String(d.Get("function_name").(string)), + FunctionName: aws.String(functionName), } + var target string + if v, ok := d.GetOk("batch_size"); ok { input.BatchSize = aws.Int64(int64(v.(int))) } @@ -202,12 +303,19 @@ func resourceAwsLambdaEventSourceMappingCreate(d *schema.ResourceData, meta inte input.BisectBatchOnFunctionError = aws.Bool(v.(bool)) } - if vDest, ok := d.GetOk("destination_config"); ok { - input.DestinationConfig = expandLambdaEventSourceMappingDestinationConfig(vDest.([]interface{})) + if v, ok := d.GetOk("destination_config"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.DestinationConfig = expandLambdaDestinationConfig(v.([]interface{})[0].(map[string]interface{})) } if v, ok := d.GetOk("event_source_arn"); ok { - input.EventSourceArn = aws.String(v.(string)) + v := v.(string) + + input.EventSourceArn = aws.String(v) + target = v + } + + if v, ok := d.GetOk("function_response_types"); ok && v.(*schema.Set).Len() > 0 { + input.FunctionResponseTypes = expandStringSet(v.(*schema.Set)) } if v, ok := d.GetOk("maximum_batching_window_in_seconds"); ok { @@ -226,6 +334,20 @@ func resourceAwsLambdaEventSourceMappingCreate(d *schema.ResourceData, meta inte input.ParallelizationFactor = aws.Int64(int64(v.(int))) } + if v, ok := d.GetOk("queues"); ok && v.(*schema.Set).Len() > 0 { + input.Queues = expandStringSet(v.(*schema.Set)) + } + + if v, ok := d.GetOk("self_managed_event_source"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.SelfManagedEventSource = expandLambdaSelfManagedEventSource(v.([]interface{})[0].(map[string]interface{})) + + target = "Self-Managed Apache Kafka" + } + + if v, ok := d.GetOk("source_access_configuration"); ok && v.(*schema.Set).Len() > 0 { + 
input.SourceAccessConfigurations = expandLambdaSourceAccessConfigurations(v.(*schema.Set).List()) + } + if v, ok := d.GetOk("starting_position"); ok { input.StartingPosition = aws.String(v.(string)) } @@ -240,8 +362,9 @@ func resourceAwsLambdaEventSourceMappingCreate(d *schema.ResourceData, meta inte input.Topics = expandStringSet(v.(*schema.Set)) } - // When non-ARN targets are supported, set target to the non-nil value. - target := input.EventSourceArn + if v, ok := d.GetOk("tumbling_window_in_seconds"); ok { + input.TumblingWindowInSeconds = aws.Int64(int64(v.(int))) + } log.Printf("[DEBUG] Creating Lambda Event Source Mapping: %s", input) @@ -257,7 +380,7 @@ func resourceAwsLambdaEventSourceMappingCreate(d *schema.ResourceData, meta inte err = resource.Retry(iamwaiter.PropagationTimeout, func() *resource.RetryError { eventSourceMappingConfiguration, err = conn.CreateEventSourceMapping(input) - if tfawserr.ErrCodeEquals(err, lambda.ErrCodeInvalidParameterValueException) { + if tfawserr.ErrMessageContains(err, lambda.ErrCodeInvalidParameterValueException, "cannot be assumed by Lambda") { return resource.RetryableError(err) } @@ -273,7 +396,7 @@ func resourceAwsLambdaEventSourceMappingCreate(d *schema.ResourceData, meta inte } if err != nil { - return fmt.Errorf("error creating Lambda Event Source Mapping (%s): %w", aws.StringValue(target), err) + return fmt.Errorf("error creating Lambda Event Source Mapping (%s): %w", target, err) } d.SetId(aws.StringValue(eventSourceMappingConfiguration.UUID)) @@ -285,8 +408,6 @@ func resourceAwsLambdaEventSourceMappingCreate(d *schema.ResourceData, meta inte return resourceAwsLambdaEventSourceMappingRead(d, meta) } -// resourceAwsLambdaEventSourceMappingRead maps to: -// GetEventSourceMapping in the API / SDK func resourceAwsLambdaEventSourceMappingRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).lambdaconn @@ -303,49 +424,154 @@ func resourceAwsLambdaEventSourceMappingRead(d 
*schema.ResourceData, meta interf } d.Set("batch_size", eventSourceMappingConfiguration.BatchSize) - d.Set("maximum_batching_window_in_seconds", eventSourceMappingConfiguration.MaximumBatchingWindowInSeconds) + d.Set("bisect_batch_on_function_error", eventSourceMappingConfiguration.BisectBatchOnFunctionError) + if eventSourceMappingConfiguration.DestinationConfig != nil { + if err := d.Set("destination_config", []interface{}{flattenLambdaDestinationConfig(eventSourceMappingConfiguration.DestinationConfig)}); err != nil { + return fmt.Errorf("error setting destination_config: %w", err) + } + } else { + d.Set("destination_config", nil) + } d.Set("event_source_arn", eventSourceMappingConfiguration.EventSourceArn) d.Set("function_arn", eventSourceMappingConfiguration.FunctionArn) - d.Set("last_modified", aws.TimeValue(eventSourceMappingConfiguration.LastModified).Format(time.RFC3339)) - d.Set("last_processing_result", eventSourceMappingConfiguration.LastProcessingResult) - d.Set("state", eventSourceMappingConfiguration.State) - d.Set("state_transition_reason", eventSourceMappingConfiguration.StateTransitionReason) - d.Set("uuid", eventSourceMappingConfiguration.UUID) d.Set("function_name", eventSourceMappingConfiguration.FunctionArn) - d.Set("parallelization_factor", eventSourceMappingConfiguration.ParallelizationFactor) - d.Set("maximum_retry_attempts", eventSourceMappingConfiguration.MaximumRetryAttempts) + d.Set("function_response_types", aws.StringValueSlice(eventSourceMappingConfiguration.FunctionResponseTypes)) + if eventSourceMappingConfiguration.LastModified != nil { + d.Set("last_modified", aws.TimeValue(eventSourceMappingConfiguration.LastModified).Format(time.RFC3339)) + } else { + d.Set("last_modified", nil) + } + d.Set("last_processing_result", eventSourceMappingConfiguration.LastProcessingResult) + d.Set("maximum_batching_window_in_seconds", eventSourceMappingConfiguration.MaximumBatchingWindowInSeconds) d.Set("maximum_record_age_in_seconds", 
eventSourceMappingConfiguration.MaximumRecordAgeInSeconds) - d.Set("bisect_batch_on_function_error", eventSourceMappingConfiguration.BisectBatchOnFunctionError) - if err := d.Set("destination_config", flattenLambdaEventSourceMappingDestinationConfig(eventSourceMappingConfiguration.DestinationConfig)); err != nil { - return fmt.Errorf("error setting destination_config: %w", err) + d.Set("maximum_retry_attempts", eventSourceMappingConfiguration.MaximumRetryAttempts) + d.Set("parallelization_factor", eventSourceMappingConfiguration.ParallelizationFactor) + d.Set("queues", aws.StringValueSlice(eventSourceMappingConfiguration.Queues)) + if eventSourceMappingConfiguration.SelfManagedEventSource != nil { + if err := d.Set("self_managed_event_source", []interface{}{flattenLambdaSelfManagedEventSource(eventSourceMappingConfiguration.SelfManagedEventSource)}); err != nil { + return fmt.Errorf("error setting self_managed_event_source: %w", err) + } + } else { + d.Set("self_managed_event_source", nil) } - if err := d.Set("topics", flattenStringSet(eventSourceMappingConfiguration.Topics)); err != nil { - return fmt.Errorf("error setting topics: %w", err) + if err := d.Set("source_access_configuration", flattenLambdaSourceAccessConfigurations(eventSourceMappingConfiguration.SourceAccessConfigurations)); err != nil { + return fmt.Errorf("error setting source_access_configuration: %w", err) } - d.Set("starting_position", eventSourceMappingConfiguration.StartingPosition) if eventSourceMappingConfiguration.StartingPositionTimestamp != nil { d.Set("starting_position_timestamp", aws.TimeValue(eventSourceMappingConfiguration.StartingPositionTimestamp).Format(time.RFC3339)) } else { d.Set("starting_position_timestamp", nil) } + d.Set("state", eventSourceMappingConfiguration.State) + d.Set("state_transition_reason", eventSourceMappingConfiguration.StateTransitionReason) + d.Set("topics", aws.StringValueSlice(eventSourceMappingConfiguration.Topics)) + d.Set("tumbling_window_in_seconds", 
eventSourceMappingConfiguration.TumblingWindowInSeconds) + d.Set("uuid", eventSourceMappingConfiguration.UUID) - state := aws.StringValue(eventSourceMappingConfiguration.State) - - switch state { + switch state := d.Get("state").(string); state { case waiter.EventSourceMappingStateEnabled, waiter.EventSourceMappingStateEnabling: d.Set("enabled", true) case waiter.EventSourceMappingStateDisabled, waiter.EventSourceMappingStateDisabling: d.Set("enabled", false) default: - log.Printf("[WARN] Lambda Event Source Mapping is neither enabled nor disabled but %s", state) + log.Printf("[WARN] Lambda Event Source Mapping (%s) is neither enabled nor disabled, but %s", d.Id(), state) + d.Set("enabled", nil) } return nil } -// resourceAwsLambdaEventSourceMappingDelete maps to: -// DeleteEventSourceMapping in the API / SDK +func resourceAwsLambdaEventSourceMappingUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).lambdaconn + + log.Printf("[DEBUG] Updating Lambda Event Source Mapping: %s", d.Id()) + + input := &lambda.UpdateEventSourceMappingInput{ + UUID: aws.String(d.Id()), + } + + if d.HasChange("batch_size") { + input.BatchSize = aws.Int64(int64(d.Get("batch_size").(int))) + } + + if d.HasChange("bisect_batch_on_function_error") { + input.BisectBatchOnFunctionError = aws.Bool(d.Get("bisect_batch_on_function_error").(bool)) + } + + if d.HasChange("destination_config") { + if v, ok := d.GetOk("destination_config"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.DestinationConfig = expandLambdaDestinationConfig(v.([]interface{})[0].(map[string]interface{})) + } + } + + if d.HasChange("enabled") { + input.Enabled = aws.Bool(d.Get("enabled").(bool)) + } + + if d.HasChange("function_name") { + input.FunctionName = aws.String(d.Get("function_name").(string)) + } + + if d.HasChange("function_response_types") { + input.FunctionResponseTypes = expandStringSet(d.Get("function_response_types").(*schema.Set)) + } + + if 
d.HasChange("maximum_batching_window_in_seconds") { + input.MaximumBatchingWindowInSeconds = aws.Int64(int64(d.Get("maximum_batching_window_in_seconds").(int))) + } + + if d.HasChange("maximum_record_age_in_seconds") { + input.MaximumRecordAgeInSeconds = aws.Int64(int64(d.Get("maximum_record_age_in_seconds").(int))) + } + + if d.HasChange("maximum_retry_attempts") { + input.MaximumRetryAttempts = aws.Int64(int64(d.Get("maximum_retry_attempts").(int))) + } + + if d.HasChange("parallelization_factor") { + input.ParallelizationFactor = aws.Int64(int64(d.Get("parallelization_factor").(int))) + } + + if d.HasChange("source_access_configuration") { + if v, ok := d.GetOk("source_access_configuration"); ok && v.(*schema.Set).Len() > 0 { + input.SourceAccessConfigurations = expandLambdaSourceAccessConfigurations(v.(*schema.Set).List()) + } + } + + if d.HasChange("tumbling_window_in_seconds") { + input.TumblingWindowInSeconds = aws.Int64(int64(d.Get("tumbling_window_in_seconds").(int))) + } + + err := resource.Retry(waiter.EventSourceMappingPropagationTimeout, func() *resource.RetryError { + _, err := conn.UpdateEventSourceMapping(input) + + if tfawserr.ErrCodeEquals(err, lambda.ErrCodeResourceInUseException) { + return resource.RetryableError(err) + } + + if err != nil { + return resource.NonRetryableError(err) + } + + return nil + }) + + if tfresource.TimedOut(err) { + _, err = conn.UpdateEventSourceMapping(input) + } + + if err != nil { + return fmt.Errorf("error updating Lambda Event Source Mapping (%s): %w", d.Id(), err) + } + + if _, err := waiter.EventSourceMappingUpdate(conn, d.Id()); err != nil { + return fmt.Errorf("error waiting for Lambda Event Source Mapping (%s) to update: %w", d.Id(), err) + } + + return resourceAwsLambdaEventSourceMappingRead(d, meta) +} + func resourceAwsLambdaEventSourceMappingDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*AWSClient).lambdaconn @@ -358,10 +584,6 @@ func resourceAwsLambdaEventSourceMappingDelete(d 
*schema.ResourceData, meta inte err := resource.Retry(waiter.EventSourceMappingPropagationTimeout, func() *resource.RetryError { _, err := conn.DeleteEventSourceMapping(input) - if tfawserr.ErrCodeEquals(err, lambda.ErrCodeResourceNotFoundException) { - return nil - } - if tfawserr.ErrCodeEquals(err, lambda.ErrCodeResourceInUseException) { return resource.RetryableError(err) } @@ -392,82 +614,178 @@ func resourceAwsLambdaEventSourceMappingDelete(d *schema.ResourceData, meta inte return nil } -// resourceAwsLambdaEventSourceMappingUpdate maps to: -// UpdateEventSourceMapping in the API / SDK -func resourceAwsLambdaEventSourceMappingUpdate(d *schema.ResourceData, meta interface{}) error { - conn := meta.(*AWSClient).lambdaconn +func expandLambdaDestinationConfig(tfMap map[string]interface{}) *lambda.DestinationConfig { + if tfMap == nil { + return nil + } - log.Printf("[DEBUG] Updating Lambda Event Source Mapping: %s", d.Id()) + apiObject := &lambda.DestinationConfig{} - input := &lambda.UpdateEventSourceMappingInput{ - UUID: aws.String(d.Id()), + if v, ok := tfMap["on_failure"].([]interface{}); ok && len(v) > 0 { + apiObject.OnFailure = expandLambdaOnFailure(v[0].(map[string]interface{})) } - if d.HasChange("batch_size") { - input.BatchSize = aws.Int64(int64(d.Get("batch_size").(int))) + return apiObject +} + +func expandLambdaOnFailure(tfMap map[string]interface{}) *lambda.OnFailure { + if tfMap == nil { + return nil } - if d.HasChange("bisect_batch_on_function_error") { - input.BisectBatchOnFunctionError = aws.Bool(d.Get("bisect_batch_on_function_error").(bool)) + apiObject := &lambda.OnFailure{} + + if v, ok := tfMap["destination_arn"].(string); ok { + apiObject.Destination = aws.String(v) } - if d.HasChange("destination_config") { - input.DestinationConfig = expandLambdaEventSourceMappingDestinationConfig(d.Get("destination_config").([]interface{})) + return apiObject +} + +func flattenLambdaDestinationConfig(apiObject *lambda.DestinationConfig) 
map[string]interface{} { + if apiObject == nil { + return nil } - if d.HasChange("enabled") { - input.Enabled = aws.Bool(d.Get("enabled").(bool)) + tfMap := map[string]interface{}{} + + if v := apiObject.OnFailure; v != nil { + tfMap["on_failure"] = []interface{}{flattenLambdaOnFailure(v)} } - if d.HasChange("function_name") { - input.FunctionName = aws.String(d.Get("function_name").(string)) + return tfMap +} + +func flattenLambdaOnFailure(apiObject *lambda.OnFailure) map[string]interface{} { + if apiObject == nil { + return nil } - if d.HasChange("maximum_batching_window_in_seconds") { - input.MaximumBatchingWindowInSeconds = aws.Int64(int64(d.Get("maximum_batching_window_in_seconds").(int))) + tfMap := map[string]interface{}{} + + if v := apiObject.Destination; v != nil { + tfMap["destination_arn"] = aws.StringValue(v) } - if d.HasChange("maximum_record_age_in_seconds") { - input.MaximumRecordAgeInSeconds = aws.Int64(int64(d.Get("maximum_record_age_in_seconds").(int))) + return tfMap +} + +func expandLambdaSelfManagedEventSource(tfMap map[string]interface{}) *lambda.SelfManagedEventSource { + if tfMap == nil { + return nil } - if d.HasChange("maximum_retry_attempts") { - input.MaximumRetryAttempts = aws.Int64(int64(d.Get("maximum_retry_attempts").(int))) + apiObject := &lambda.SelfManagedEventSource{} + + if v, ok := tfMap["endpoints"].(map[string]interface{}); ok && len(v) > 0 { + m := map[string][]*string{} + + for k, v := range v { + m[k] = aws.StringSlice(strings.Split(v.(string), ",")) + } + + apiObject.Endpoints = m } - if d.HasChange("parallelization_factor") { - input.ParallelizationFactor = aws.Int64(int64(d.Get("parallelization_factor").(int))) + return apiObject +} + +func flattenLambdaSelfManagedEventSource(apiObject *lambda.SelfManagedEventSource) map[string]interface{} { + if apiObject == nil { + return nil } - err := resource.Retry(waiter.EventSourceMappingPropagationTimeout, func() *resource.RetryError { - _, err := 
conn.UpdateEventSourceMapping(input) + tfMap := map[string]interface{}{} - if tfawserr.ErrCodeEquals(err, lambda.ErrCodeInvalidParameterValueException) { - return resource.RetryableError(err) + if v := apiObject.Endpoints; v != nil { + m := map[string]string{} + + for k, v := range v { + m[k] = strings.Join(aws.StringValueSlice(v), ",") } - if tfawserr.ErrCodeEquals(err, lambda.ErrCodeResourceInUseException) { - return resource.RetryableError(err) + tfMap["endpoints"] = m + } + + return tfMap +} + +func expandLambdaSourceAccessConfiguration(tfMap map[string]interface{}) *lambda.SourceAccessConfiguration { + if tfMap == nil { + return nil + } + + apiObject := &lambda.SourceAccessConfiguration{} + + if v, ok := tfMap["type"].(string); ok && v != "" { + apiObject.Type = aws.String(v) + } + + if v, ok := tfMap["uri"].(string); ok && v != "" { + apiObject.URI = aws.String(v) + } + + return apiObject +} + +func expandLambdaSourceAccessConfigurations(tfList []interface{}) []*lambda.SourceAccessConfiguration { + if len(tfList) == 0 { + return nil + } + + var apiObjects []*lambda.SourceAccessConfiguration + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue } - if err != nil { - return resource.NonRetryableError(err) + apiObject := expandLambdaSourceAccessConfiguration(tfMap) + + if apiObject == nil { + continue } + apiObjects = append(apiObjects, apiObject) + } + + return apiObjects +} + +func flattenLambdaSourceAccessConfiguration(apiObject *lambda.SourceAccessConfiguration) map[string]interface{} { + if apiObject == nil { return nil - }) + } - if tfresource.TimedOut(err) { - _, err = conn.UpdateEventSourceMapping(input) + tfMap := map[string]interface{}{} + + if v := apiObject.Type; v != nil { + tfMap["type"] = aws.StringValue(v) } - if err != nil { - return fmt.Errorf("error updating Lambda Event Source Mapping (%s): %w", d.Id(), err) + if v := apiObject.URI; v != nil { + tfMap["uri"] = aws.StringValue(v) } - if 
_, err := waiter.EventSourceMappingUpdate(conn, d.Id()); err != nil { - return fmt.Errorf("error waiting for Lambda Event Source Mapping (%s) to update: %w", d.Id(), err) + return tfMap +} + +func flattenLambdaSourceAccessConfigurations(apiObjects []*lambda.SourceAccessConfiguration) []interface{} { + if len(apiObjects) == 0 { + return nil } - return resourceAwsLambdaEventSourceMappingRead(d, meta) + var tfList []interface{} + + for _, apiObject := range apiObjects { + if apiObject == nil { + continue + } + + tfList = append(tfList, flattenLambdaSourceAccessConfiguration(apiObject)) + } + + return tfList } diff --git a/aws/resource_aws_lambda_event_source_mapping_test.go b/aws/resource_aws_lambda_event_source_mapping_test.go index f3113da2e39..f5d7c9fe127 100644 --- a/aws/resource_aws_lambda_event_source_mapping_test.go +++ b/aws/resource_aws_lambda_event_source_mapping_test.go @@ -38,7 +38,9 @@ func TestAccAWSLambdaEventSourceMapping_Kinesis_basic(t *testing.T) { resource.TestCheckResourceAttrPair(resourceName, "event_source_arn", eventSourceResourceName, "arn"), resource.TestCheckResourceAttrPair(resourceName, "function_arn", functionResourceName, "arn"), resource.TestCheckResourceAttrPair(resourceName, "function_name", functionResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "function_response_types.#", "0"), testAccCheckResourceAttrRfc3339(resourceName, "last_modified"), + resource.TestCheckResourceAttr(resourceName, "tumbling_window_in_seconds", "0"), ), }, // batch_size became optional. 
Ensure that if the user supplies the default @@ -144,7 +146,9 @@ func TestAccAWSLambdaEventSourceMapping_DynamoDB_basic(t *testing.T) { resource.TestCheckResourceAttrPair(resourceName, "event_source_arn", eventSourceResourceName, "stream_arn"), resource.TestCheckResourceAttrPair(resourceName, "function_arn", functionResourceName, "arn"), resource.TestCheckResourceAttrPair(resourceName, "function_name", functionResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "function_response_types.#", "0"), testAccCheckResourceAttrRfc3339(resourceName, "last_modified"), + resource.TestCheckResourceAttr(resourceName, "tumbling_window_in_seconds", "0"), ), }, // batch_size became optional. Ensure that if the user supplies the default @@ -163,6 +167,41 @@ func TestAccAWSLambdaEventSourceMapping_DynamoDB_basic(t *testing.T) { }) } +func TestAccAWSLambdaEventSourceMapping_DynamoDB_FunctionResponseTypes(t *testing.T) { + var conf lambda.EventSourceMappingConfiguration + resourceName := "aws_lambda_event_source_mapping.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ErrorCheck: testAccErrorCheck(t, lambda.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckLambdaEventSourceMappingDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLambdaEventSourceMappingConfigDynamoDbFunctionResponseTypes(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsLambdaEventSourceMappingExists(resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "function_response_types.#", "1"), + resource.TestCheckTypeSetElemAttr(resourceName, "function_response_types.*", "ReportBatchItemFailures"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSLambdaEventSourceMappingConfigDynamoDbNoFunctionResponseTypes(rName), + Check: resource.ComposeTestCheckFunc( + 
testAccCheckAwsLambdaEventSourceMappingExists(resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "function_response_types.#", "0"), + ), + }, + }, + }) +} + func TestAccAWSLambdaEventSourceMapping_SQS_BatchWindow(t *testing.T) { var conf lambda.EventSourceMappingConfiguration rName := acctest.RandomWithPrefix("tf-acc-test") @@ -346,6 +385,42 @@ func TestAccAWSLambdaEventSourceMapping_Kinesis_ParallelizationFactor(t *testing }) } +func TestAccAWSLambdaEventSourceMapping_Kinesis_TumblingWindowInSeconds(t *testing.T) { + var conf lambda.EventSourceMappingConfiguration + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_lambda_event_source_mapping.test" + tumblingWindowInSeconds := int64(30) + tumblingWindowInSecondsUpdate := int64(300) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ErrorCheck: testAccErrorCheck(t, lambda.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckLambdaEventSourceMappingDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLambdaEventSourceMappingConfigKinesisTumblingWindowInSeconds(rName, tumblingWindowInSeconds), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsLambdaEventSourceMappingExists(resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "tumbling_window_in_seconds", strconv.Itoa(int(tumblingWindowInSeconds))), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSLambdaEventSourceMappingConfigKinesisTumblingWindowInSeconds(rName, tumblingWindowInSecondsUpdate), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsLambdaEventSourceMappingExists(resourceName, &conf), + resource.TestCheckResourceAttr(resourceName, "tumbling_window_in_seconds", strconv.Itoa(int(tumblingWindowInSecondsUpdate))), + ), + }, + }, + }) +} + func TestAccAWSLambdaEventSourceMapping_Kinesis_MaximumRetryAttempts(t *testing.T) { var conf 
lambda.EventSourceMappingConfiguration rName := acctest.RandomWithPrefix("tf-acc-test") @@ -622,8 +697,8 @@ func TestAccAWSLambdaEventSourceMapping_MSK(t *testing.T) { rName := acctest.RandomWithPrefix("tf-acc-test") resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { testAccPreCheck(t) }, - ErrorCheck: testAccErrorCheck(t, lambda.EndpointsID, "kafka"), //using kafka.EndpointsID will import kafka and make linters sad + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSMsk(t) }, + ErrorCheck: testAccErrorCheck(t, lambda.EndpointsID, "kafka"), Providers: testAccProviders, CheckDestroy: testAccCheckLambdaEventSourceMappingDestroy, Steps: []resource.TestStep{ @@ -665,6 +740,81 @@ func TestAccAWSLambdaEventSourceMapping_MSK(t *testing.T) { }) } +func TestAccAWSLambdaEventSourceMapping_SelfManagedKafka(t *testing.T) { + var v lambda.EventSourceMappingConfiguration + resourceName := "aws_lambda_event_source_mapping.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ErrorCheck: testAccErrorCheck(t, lambda.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckLambdaEventSourceMappingDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLambdaEventSourceMappingConfigSelfManagedKafka(rName, "100", "test1:9092,test2:9092"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsLambdaEventSourceMappingExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "batch_size", "100"), + resource.TestCheckResourceAttr(resourceName, "enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "self_managed_event_source.#", "1"), + resource.TestCheckResourceAttr(resourceName, "self_managed_event_source.0.endpoints.KAFKA_BOOTSTRAP_SERVERS", "test1:9092,test2:9092"), + resource.TestCheckResourceAttr(resourceName, "source_access_configuration.#", "3"), + testAccCheckResourceAttrRfc3339(resourceName, "last_modified"), + 
resource.TestCheckResourceAttr(resourceName, "topics.#", "1"), + resource.TestCheckTypeSetElemAttr(resourceName, "topics.*", "test"), + ), + }, + // batch_size became optional. Ensure that if the user supplies the default + // value, but then moves to not providing the value, that we don't consider this + // a diff. + // Verify also that bootstrap broker order does not matter. + { + PlanOnly: true, + Config: testAccAWSLambdaEventSourceMappingConfigSelfManagedKafka(rName, "null", "test2:9092,test1:9092"), + }, + }, + }) +} + +func TestAccAWSLambdaEventSourceMapping_ActiveMQ(t *testing.T) { + var v lambda.EventSourceMappingConfiguration + resourceName := "aws_lambda_event_source_mapping.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + testAccPreCheck(t) + testAccPreCheckAWSSecretsManager(t) + testAccPartitionHasServicePreCheck("mq", t) + testAccPreCheckAWSMq(t) + }, + ErrorCheck: testAccErrorCheck(t, lambda.EndpointsID, "mq", "secretsmanager"), + Providers: testAccProviders, + CheckDestroy: testAccCheckLambdaEventSourceMappingDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLambdaEventSourceMappingConfigActiveMQ(rName, "100"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsLambdaEventSourceMappingExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "batch_size", "100"), + resource.TestCheckResourceAttr(resourceName, "enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "queues.#", "1"), + resource.TestCheckTypeSetElemAttr(resourceName, "queues.*", "test"), + resource.TestCheckResourceAttr(resourceName, "source_access_configuration.#", "1"), + ), + }, + // batch_size became optional. Ensure that if the user supplies the default + // value, but then moves to not providing the value, that we don't consider this + // a diff. 
+ { + PlanOnly: true, + Config: testAccAWSLambdaEventSourceMappingConfigActiveMQ(rName, "null"), + }, + }, + }) +} + func testAccCheckAWSLambdaEventSourceMappingIsBeingDisabled(conf *lambda.EventSourceMappingConfiguration) resource.TestCheckFunc { return func(s *terraform.State) error { conn := testAccProvider.Meta().(*AWSClient).lambdaconn @@ -827,197 +977,79 @@ resource "aws_lambda_function" "test" { `, rName) } -func testAccAWSLambdaEventSourceMappingConfigKinesisStartingPositionTimestamp(rName, startingPositionTimestamp string) string { - return composeConfig(testAccAWSLambdaEventSourceMappingConfigKinesisBase(rName) + fmt.Sprintf(` -resource "aws_lambda_event_source_mapping" "test" { - batch_size = 100 - enabled = true - event_source_arn = aws_kinesis_stream.test.arn - function_name = aws_lambda_function.test.arn - starting_position = "AT_TIMESTAMP" - starting_position_timestamp = %[1]q -} -`, startingPositionTimestamp)) -} +func testAccAWSLambdaEventSourceMappingConfigSQSBase(rName string) string { + return fmt.Sprintf(` +resource "aws_iam_role" "test" { + name = %[1]q -func testAccAWSLambdaEventSourceMappingConfigKinesisBatchWindow(rName string, batchWindow int64) string { - return composeConfig(testAccAWSLambdaEventSourceMappingConfigKinesisBase(rName), fmt.Sprintf(` -resource "aws_lambda_event_source_mapping" "test" { - batch_size = 100 - maximum_batching_window_in_seconds = %[1]d - enabled = true - event_source_arn = aws_kinesis_stream.test.arn - function_name = aws_lambda_function.test.arn - starting_position = "TRIM_HORIZON" + assume_role_policy = < 0 { + ebs.Throughput = aws.Int64(int64(v)) + } + if bd["device_name"].(string) == aws.StringValue(rootDeviceName) { return fmt.Errorf("Root device (%s) declared as an 'ebs_block_device'. 
Use 'root_block_device' keyword.", *rootDeviceName) } @@ -482,6 +500,10 @@ func resourceAwsLaunchConfigurationCreate(d *schema.ResourceData, meta interface ebs.Iops = aws.Int64(int64(v)) } + if v, ok := bd["throughput"].(int); ok && v > 0 { + ebs.Throughput = aws.Int64(int64(v)) + } + if dn, err := fetchRootDeviceName(d.Get("image_id").(string), ec2conn); err == nil { if dn == nil { return fmt.Errorf( @@ -763,6 +785,9 @@ func readBlockDevicesFromLaunchConfiguration(d *schema.ResourceData, lc *autosca if bdm.Ebs != nil && bdm.Ebs.Iops != nil { bd["iops"] = *bdm.Ebs.Iops } + if bdm.Ebs != nil && bdm.Ebs.Throughput != nil { + bd["throughput"] = *bdm.Ebs.Throughput + } if bdm.Ebs != nil && bdm.Ebs.Encrypted != nil { bd["encrypted"] = *bdm.Ebs.Encrypted } diff --git a/aws/resource_aws_launch_configuration_test.go b/aws/resource_aws_launch_configuration_test.go index 6b7f79deb07..e7ff3eb4ac4 100644 --- a/aws/resource_aws_launch_configuration_test.go +++ b/aws/resource_aws_launch_configuration_test.go @@ -362,6 +362,38 @@ func TestAccAWSLaunchConfiguration_withEncryption(t *testing.T) { }) } +func TestAccAWSLaunchConfiguration_withGP3(t *testing.T) { + var conf autoscaling.LaunchConfiguration + resourceName := "aws_launch_configuration.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + ErrorCheck: testAccErrorCheck(t, autoscaling.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSLaunchConfigurationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSLaunchConfigurationWithGP3(), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSLaunchConfigurationExists("aws_launch_configuration.test", &conf), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "ebs_block_device.*", map[string]string{ + "volume_type": "gp3", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "ebs_block_device.*", map[string]string{ + "throughput": "150", + }), + ), + }, + { + ResourceName: 
resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"associate_public_ip_address"}, + }, + }, + }) +} + func TestAccAWSLaunchConfiguration_updateEbsBlockDevices(t *testing.T) { var conf autoscaling.LaunchConfiguration resourceName := "aws_launch_configuration.test" @@ -835,6 +867,29 @@ resource "aws_launch_configuration" "test" { `) } +func testAccAWSLaunchConfigurationWithGP3() string { + return composeConfig(testAccLatestAmazonLinuxHvmEbsAmiConfig(), ` +resource "aws_launch_configuration" "test" { + image_id = data.aws_ami.amzn-ami-minimal-hvm-ebs.id + instance_type = "t2.micro" + associate_public_ip_address = false + + root_block_device { + volume_type = "gp3" + volume_size = 11 + } + + ebs_block_device { + volume_type = "gp3" + device_name = "/dev/sdb" + volume_size = 9 + encrypted = true + throughput = 150 + } +} +`) +} + func testAccAWSLaunchConfigurationWithEncryptionUpdated() string { return composeConfig(testAccLatestAmazonLinuxHvmEbsAmiConfig(), ` resource "aws_launch_configuration" "test" { diff --git a/aws/resource_aws_lb_listener_rule.go b/aws/resource_aws_lb_listener_rule.go index 55154b80890..4945fa860e0 100644 --- a/aws/resource_aws_lb_listener_rule.go +++ b/aws/resource_aws_lb_listener_rule.go @@ -168,7 +168,7 @@ func resourceAwsLbbListenerRule() *schema.Resource { Type: schema.TypeString, Optional: true, Default: "#{query}", - ValidateFunc: validation.StringLenBetween(1, 128), + ValidateFunc: validation.StringLenBetween(0, 128), }, "status_code": { diff --git a/aws/resource_aws_lb_listener_rule_test.go b/aws/resource_aws_lb_listener_rule_test.go index 5711ff6ac48..5be0c988090 100644 --- a/aws/resource_aws_lb_listener_rule_test.go +++ b/aws/resource_aws_lb_listener_rule_test.go @@ -4,6 +4,7 @@ import ( "errors" "fmt" "regexp" + "strconv" "testing" "github.com/aws/aws-sdk-go/aws" @@ -285,7 +286,7 @@ func TestAccAWSLBListenerRule_redirect(t *testing.T) { CheckDestroy: testAccCheckAWSLBListenerRuleDestroy, 
Steps: []resource.TestStep{ { - Config: testAccAWSLBListenerRuleConfig_redirect(lbName), + Config: testAccAWSLBListenerRuleConfig_redirect(lbName, "null"), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckAWSLBListenerRuleExists(resourceName, &conf), testAccMatchResourceAttrRegionalARN(resourceName, "arn", "elasticloadbalancing", regexp.MustCompile(fmt.Sprintf(`listener-rule/app/%s/.+$`, lbName))), @@ -308,6 +309,54 @@ func TestAccAWSLBListenerRule_redirect(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "condition.#", "1"), ), }, + { + Config: testAccAWSLBListenerRuleConfig_redirect(lbName, "param1=value1"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSLBListenerRuleExists(resourceName, &conf), + testAccMatchResourceAttrRegionalARN(resourceName, "arn", "elasticloadbalancing", regexp.MustCompile(fmt.Sprintf(`listener-rule/app/%s/.+$`, lbName))), + resource.TestCheckResourceAttrPair(resourceName, "listener_arn", frontEndListenerResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "priority", "100"), + resource.TestCheckResourceAttr(resourceName, "action.#", "1"), + resource.TestCheckResourceAttr(resourceName, "action.0.order", "1"), + resource.TestCheckResourceAttr(resourceName, "action.0.type", "redirect"), + resource.TestCheckResourceAttr(resourceName, "action.0.target_group_arn", ""), + resource.TestCheckResourceAttr(resourceName, "action.0.redirect.#", "1"), + resource.TestCheckResourceAttr(resourceName, "action.0.redirect.0.host", "#{host}"), + resource.TestCheckResourceAttr(resourceName, "action.0.redirect.0.path", "/#{path}"), + resource.TestCheckResourceAttr(resourceName, "action.0.redirect.0.port", "443"), + resource.TestCheckResourceAttr(resourceName, "action.0.redirect.0.protocol", "HTTPS"), + resource.TestCheckResourceAttr(resourceName, "action.0.redirect.0.query", "param1=value1"), + resource.TestCheckResourceAttr(resourceName, "action.0.redirect.0.status_code", "HTTP_301"), + 
resource.TestCheckResourceAttr(resourceName, "action.0.fixed_response.#", "0"), + resource.TestCheckResourceAttr(resourceName, "action.0.authenticate_cognito.#", "0"), + resource.TestCheckResourceAttr(resourceName, "action.0.authenticate_oidc.#", "0"), + resource.TestCheckResourceAttr(resourceName, "condition.#", "1"), + ), + }, + { + Config: testAccAWSLBListenerRuleConfig_redirect(lbName, ""), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAWSLBListenerRuleExists(resourceName, &conf), + testAccMatchResourceAttrRegionalARN(resourceName, "arn", "elasticloadbalancing", regexp.MustCompile(fmt.Sprintf(`listener-rule/app/%s/.+$`, lbName))), + resource.TestCheckResourceAttrPair(resourceName, "listener_arn", frontEndListenerResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "priority", "100"), + resource.TestCheckResourceAttr(resourceName, "action.#", "1"), + resource.TestCheckResourceAttr(resourceName, "action.0.order", "1"), + resource.TestCheckResourceAttr(resourceName, "action.0.type", "redirect"), + resource.TestCheckResourceAttr(resourceName, "action.0.target_group_arn", ""), + resource.TestCheckResourceAttr(resourceName, "action.0.redirect.#", "1"), + resource.TestCheckResourceAttr(resourceName, "action.0.redirect.0.host", "#{host}"), + resource.TestCheckResourceAttr(resourceName, "action.0.redirect.0.path", "/#{path}"), + resource.TestCheckResourceAttr(resourceName, "action.0.redirect.0.port", "443"), + resource.TestCheckResourceAttr(resourceName, "action.0.redirect.0.protocol", "HTTPS"), + resource.TestCheckResourceAttr(resourceName, "action.0.redirect.0.query", ""), + resource.TestCheckResourceAttr(resourceName, "action.0.redirect.0.status_code", "HTTP_301"), + resource.TestCheckResourceAttr(resourceName, "action.0.fixed_response.#", "0"), + resource.TestCheckResourceAttr(resourceName, "action.0.authenticate_cognito.#", "0"), + resource.TestCheckResourceAttr(resourceName, "action.0.authenticate_oidc.#", "0"), + 
resource.TestCheckResourceAttr(resourceName, "condition.#", "1"), + ), + }, }, }) } @@ -2016,7 +2065,11 @@ resource "aws_security_group" "alb_test" { `, lbName, targetGroupName) } -func testAccAWSLBListenerRuleConfig_redirect(lbName string) string { +func testAccAWSLBListenerRuleConfig_redirect(lbName, query string) string { + if query != "null" { + query = strconv.Quote(query) + } + return fmt.Sprintf(` resource "aws_lb_listener_rule" "static" { listener_arn = aws_lb_listener.front_end.arn @@ -2028,6 +2081,7 @@ resource "aws_lb_listener_rule" "static" { redirect { port = "443" protocol = "HTTPS" + query = %[2]s status_code = "HTTP_301" } } @@ -2056,7 +2110,7 @@ resource "aws_lb_listener" "front_end" { } resource "aws_lb" "alb_test" { - name = "%s" + name = %[1]q internal = true security_groups = [aws_security_group.alb_test.id] subnets = aws_subnet.alb_test[*].id @@ -2126,7 +2180,7 @@ resource "aws_security_group" "alb_test" { Name = "TestAccAWSALB_redirect" } } -`, lbName) +`, lbName, query) } func testAccAWSLBListenerRuleConfig_fixedResponse(lbName, response string) string { diff --git a/aws/resource_aws_msk_cluster.go b/aws/resource_aws_msk_cluster.go index 27e3784bf7f..c36694446dc 100644 --- a/aws/resource_aws_msk_cluster.go +++ b/aws/resource_aws_msk_cluster.go @@ -6,7 +6,6 @@ import ( "log" "sort" "strings" - "time" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/kafka" @@ -15,6 +14,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/terraform-providers/terraform-provider-aws/aws/internal/keyvaluetags" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/msk/waiter" ) func resourceAwsMskCluster() *schema.Resource { @@ -26,6 +26,13 @@ func resourceAwsMskCluster() *schema.Resource { Importer: &schema.ResourceImporter{ State: schema.ImportStatePassthrough, }, + + Timeouts: &schema.ResourceTimeout{ + Create: 
schema.DefaultTimeout(waiter.ClusterCreateTimeout), + Update: schema.DefaultTimeout(waiter.ClusterUpdateTimeout), + Delete: schema.DefaultTimeout(waiter.ClusterDeleteTimeout), + }, + CustomizeDiff: customdiff.Sequence( customdiff.ForceNewIfChange("kafka_version", func(_ context.Context, old, new, meta interface{}) bool { return new.(string) < old.(string) @@ -41,6 +48,10 @@ func resourceAwsMskCluster() *schema.Resource { Type: schema.TypeString, Computed: true, }, + "bootstrap_brokers_sasl_iam": { + Type: schema.TypeString, + Computed: true, + }, "bootstrap_brokers_sasl_scram": { Type: schema.TypeString, Computed: true, @@ -112,6 +123,11 @@ func resourceAwsMskCluster() *schema.Resource { Optional: true, ForceNew: true, }, + "iam": { + Type: schema.TypeBool, + Optional: true, + ForceNew: true, + }, }, }, ConflictsWith: []string{"client_authentication.0.tls"}, @@ -399,7 +415,7 @@ func waitForMskClusterCreation(conn *kafka.Kafka, arn string) error { input := &kafka.DescribeClusterInput{ ClusterArn: aws.String(arn), } - err := resource.Retry(60*time.Minute, func() *resource.RetryError { + err := resource.Retry(waiter.ClusterCreateTimeout, func() *resource.RetryError { out, err := conn.DescribeCluster(input) if err != nil { return resource.NonRetryableError(err) @@ -462,6 +478,7 @@ func resourceAwsMskClusterRead(d *schema.ResourceData, meta interface{}) error { d.Set("arn", cluster.ClusterArn) d.Set("bootstrap_brokers", sortMskClusterEndpoints(aws.StringValue(brokerOut.BootstrapBrokerString))) + d.Set("bootstrap_brokers_sasl_iam", sortMskClusterEndpoints(aws.StringValue(brokerOut.BootstrapBrokerStringSaslIam))) d.Set("bootstrap_brokers_sasl_scram", sortMskClusterEndpoints(aws.StringValue(brokerOut.BootstrapBrokerStringSaslScram))) d.Set("bootstrap_brokers_tls", sortMskClusterEndpoints(aws.StringValue(brokerOut.BootstrapBrokerStringTls))) @@ -680,7 +697,25 @@ func resourceAwsMskClusterUpdate(d *schema.ResourceData, meta interface{}) error } return 
resourceAwsMskClusterRead(d, meta) +} + +func resourceAwsMskClusterDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).kafkaconn + log.Printf("[DEBUG] Deleting MSK cluster: %q", d.Id()) + _, err := conn.DeleteCluster(&kafka.DeleteClusterInput{ + ClusterArn: aws.String(d.Id()), + }) + if err != nil { + if isAWSErr(err, kafka.ErrCodeNotFoundException, "") { + return nil + } + return fmt.Errorf("failed deleting MSK cluster %q: %s", d.Id(), err) + } + + log.Printf("[DEBUG] Waiting for MSK cluster %q to be deleted", d.Id()) + + return resourceAwsMskClusterDeleteWaiter(conn, d.Id()) } func expandMskClusterBrokerNodeGroupInfo(l []interface{}) *kafka.BrokerNodeGroupInfo { @@ -712,9 +747,14 @@ func expandMskClusterClientAuthentication(l []interface{}) *kafka.ClientAuthenti m := l[0].(map[string]interface{}) - ca := &kafka.ClientAuthentication{ - Sasl: expandMskClusterScram(m["sasl"].([]interface{})), - Tls: expandMskClusterTls(m["tls"].([]interface{})), + ca := &kafka.ClientAuthentication{} + + if v, ok := m["sasl"].([]interface{}); ok { + ca.Sasl = expandMskClusterSasl(v) + } + + if v, ok := m["tls"].([]interface{}); ok { + ca.Tls = expandMskClusterTls(v) } return ca @@ -770,7 +810,7 @@ func expandMskClusterEncryptionInTransit(l []interface{}) *kafka.EncryptionInTra return eit } -func expandMskClusterScram(l []interface{}) *kafka.Sasl { +func expandMskClusterSasl(l []interface{}) *kafka.Sasl { if len(l) == 0 || l[0] == nil { return nil } @@ -780,10 +820,18 @@ func expandMskClusterScram(l []interface{}) *kafka.Sasl { return nil } - sasl := &kafka.Sasl{ - Scram: &kafka.Scram{ - Enabled: aws.Bool(tfMap["scram"].(bool)), - }, + sasl := &kafka.Sasl{} + + if v, ok := tfMap["scram"].(bool); ok { + sasl.Scram = &kafka.Scram{ + Enabled: aws.Bool(v), + } + } + + if v, ok := tfMap["iam"].(bool); ok { + sasl.Iam = &kafka.Iam{ + Enabled: aws.Bool(v), + } } return sasl @@ -1014,13 +1062,14 @@ func flattenMskSasl(sasl *kafka.Sasl) []map[string]interface{} 
{ } m := map[string]interface{}{ - "scram": flattenMskScram(sasl.Scram), + "scram": flattenMskSaslScram(sasl.Scram), + "iam": flattenMskSaslIam(sasl.Iam), } return []map[string]interface{}{m} } -func flattenMskScram(scram *kafka.Scram) bool { +func flattenMskSaslScram(scram *kafka.Scram) bool { if scram == nil { return false } @@ -1028,6 +1077,14 @@ func flattenMskScram(scram *kafka.Scram) bool { return aws.BoolValue(scram.Enabled) } +func flattenMskSaslIam(iam *kafka.Iam) bool { + if iam == nil { + return false + } + + return aws.BoolValue(iam.Enabled) +} + func flattenMskTls(tls *kafka.Tls) []map[string]interface{} { if tls == nil { return []map[string]interface{}{} @@ -1155,30 +1212,11 @@ func flattenMskLoggingInfoBrokerLogsS3(e *kafka.S3) []map[string]interface{} { return []map[string]interface{}{m} } -func resourceAwsMskClusterDelete(d *schema.ResourceData, meta interface{}) error { - conn := meta.(*AWSClient).kafkaconn - - log.Printf("[DEBUG] Deleting MSK cluster: %q", d.Id()) - _, err := conn.DeleteCluster(&kafka.DeleteClusterInput{ - ClusterArn: aws.String(d.Id()), - }) - if err != nil { - if isAWSErr(err, kafka.ErrCodeNotFoundException, "") { - return nil - } - return fmt.Errorf("failed deleting MSK cluster %q: %s", d.Id(), err) - } - - log.Printf("[DEBUG] Waiting for MSK cluster %q to be deleted", d.Id()) - - return resourceAwsMskClusterDeleteWaiter(conn, d.Id()) -} - func resourceAwsMskClusterDeleteWaiter(conn *kafka.Kafka, arn string) error { input := &kafka.DescribeClusterInput{ ClusterArn: aws.String(arn), } - err := resource.Retry(60*time.Minute, func() *resource.RetryError { + err := resource.Retry(waiter.ClusterDeleteTimeout, func() *resource.RetryError { _, err := conn.DescribeCluster(input) if err != nil { @@ -1235,7 +1273,7 @@ func waitForMskClusterOperation(conn *kafka.Kafka, clusterOperationARN string) e Pending: []string{"PENDING", "UPDATE_IN_PROGRESS"}, Target: []string{"UPDATE_COMPLETE"}, Refresh: mskClusterOperationRefreshFunc(conn, 
clusterOperationARN), - Timeout: 2 * time.Hour, + Timeout: waiter.ClusterUpdateTimeout, } log.Printf("[DEBUG] Waiting for MSK Cluster Operation (%s) completion", clusterOperationARN) diff --git a/aws/resource_aws_msk_cluster_test.go b/aws/resource_aws_msk_cluster_test.go index 2fb0a1080b0..678167a2db3 100644 --- a/aws/resource_aws_msk_cluster_test.go +++ b/aws/resource_aws_msk_cluster_test.go @@ -57,7 +57,8 @@ func testSweepMskClusters(region string) error { const ( mskClusterPortPlaintext = 9092 - mskClusterPortSasl = 9096 + mskClusterPortSaslScram = 9096 + mskClusterPortSaslIam = 9098 mskClusterPortTls = 9094 mskClusterPortZookeeper = 2181 @@ -68,9 +69,10 @@ const ( ) var ( - mskClusterBoostrapBrokersRegexp = regexp.MustCompile(fmt.Sprintf(mskClusterBrokerRegexpFormat, mskClusterPortPlaintext)) - mskClusterBoostrapBrokersSaslRegexp = regexp.MustCompile(fmt.Sprintf(mskClusterBrokerRegexpFormat, mskClusterPortSasl)) - mskClusterBoostrapBrokersTlsRegexp = regexp.MustCompile(fmt.Sprintf(mskClusterBrokerRegexpFormat, mskClusterPortTls)) + mskClusterBoostrapBrokersRegexp = regexp.MustCompile(fmt.Sprintf(mskClusterBrokerRegexpFormat, mskClusterPortPlaintext)) + mskClusterBoostrapBrokersSaslScramRegexp = regexp.MustCompile(fmt.Sprintf(mskClusterBrokerRegexpFormat, mskClusterPortSaslScram)) + mskClusterBoostrapBrokersSaslIamRegexp = regexp.MustCompile(fmt.Sprintf(mskClusterBrokerRegexpFormat, mskClusterPortSaslIam)) + mskClusterBoostrapBrokersTlsRegexp = regexp.MustCompile(fmt.Sprintf(mskClusterBrokerRegexpFormat, mskClusterPortTls)) mskClusterZookeeperConnectStringRegexp = regexp.MustCompile(fmt.Sprintf(mskClusterBrokerRegexpFormat, mskClusterPortZookeeper)) ) @@ -110,15 +112,13 @@ func TestAccAWSMskCluster_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "encryption_info.0.encryption_in_transit.0.client_broker", "TLS"), resource.TestCheckResourceAttr(resourceName, "encryption_info.0.encryption_in_transit.0.in_cluster", "true"), 
resource.TestCheckResourceAttr(resourceName, "enhanced_monitoring", kafka.EnhancedMonitoringDefault), - resource.TestCheckResourceAttr(resourceName, "kafka_version", "2.2.1"), + resource.TestCheckResourceAttr(resourceName, "kafka_version", "2.7.1"), resource.TestCheckResourceAttr(resourceName, "number_of_broker_nodes", "3"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), resource.TestMatchResourceAttr(resourceName, "zookeeper_connect_string", mskClusterZookeeperConnectStringRegexp), - resource.TestCheckResourceAttr(resourceName, "bootstrap_brokers", ""), resource.TestCheckResourceAttr(resourceName, "bootstrap_brokers_sasl_scram", ""), resource.TestMatchResourceAttr(resourceName, "bootstrap_brokers_tls", mskClusterBoostrapBrokersTlsRegexp), - testCheckResourceAttrIsSortedCsv(resourceName, "bootstrap_brokers_tls"), testCheckResourceAttrIsSortedCsv(resourceName, "zookeeper_connect_string"), ), @@ -158,6 +158,9 @@ func TestAccAWSMskCluster_BrokerNodeGroupInfo_EbsVolumeSize(t *testing.T) { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "current_version", + }, }, { // BadRequestException: The minimum increase in storage size of the cluster should be atleast 100GB @@ -199,6 +202,7 @@ func TestAccAWSMskCluster_BrokerNodeGroupInfo_InstanceType(t *testing.T) { ImportStateVerifyIgnore: []string{ "bootstrap_brokers", // API may mutate ordering and selection of brokers to return "bootstrap_brokers_tls", // API may mutate ordering and selection of brokers to return + "current_version", }, }, { @@ -232,11 +236,9 @@ func TestAccAWSMskCluster_ClientAuthentication_Sasl_Scram(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "client_authentication.#", "1"), resource.TestCheckResourceAttr(resourceName, "client_authentication.0.sasl.#", "1"), resource.TestCheckResourceAttr(resourceName, "client_authentication.0.sasl.0.scram", "true"), - resource.TestCheckResourceAttr(resourceName, 
"bootstrap_brokers", ""), - resource.TestMatchResourceAttr(resourceName, "bootstrap_brokers_sasl_scram", mskClusterBoostrapBrokersSaslRegexp), + resource.TestMatchResourceAttr(resourceName, "bootstrap_brokers_sasl_scram", mskClusterBoostrapBrokersSaslScramRegexp), resource.TestCheckResourceAttr(resourceName, "bootstrap_brokers_tls", ""), - testCheckResourceAttrIsSortedCsv(resourceName, "bootstrap_brokers_sasl_scram"), ), }, @@ -256,11 +258,67 @@ func TestAccAWSMskCluster_ClientAuthentication_Sasl_Scram(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "client_authentication.#", "1"), resource.TestCheckResourceAttr(resourceName, "client_authentication.0.sasl.#", "1"), resource.TestCheckResourceAttr(resourceName, "client_authentication.0.sasl.0.scram", "false"), - resource.TestCheckResourceAttr(resourceName, "bootstrap_brokers", ""), resource.TestCheckResourceAttr(resourceName, "bootstrap_brokers_sasl_scram", ""), resource.TestMatchResourceAttr(resourceName, "bootstrap_brokers_tls", mskClusterBoostrapBrokersTlsRegexp), + testCheckResourceAttrIsSortedCsv(resourceName, "bootstrap_brokers_tls"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "current_version", + }, + }, + }, + }) +} + +func TestAccAWSMskCluster_ClientAuthentication_Sasl_Iam(t *testing.T) { + var cluster1, cluster2 kafka.ClusterInfo + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_msk_cluster.test" + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSMsk(t) }, + ErrorCheck: testAccErrorCheck(t, kafka.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckMskClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccMskClusterConfigClientAuthenticationSaslIam(rName, true), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckMskClusterExists(resourceName, &cluster1), + 
resource.TestCheckResourceAttr(resourceName, "client_authentication.#", "1"), + resource.TestCheckResourceAttr(resourceName, "client_authentication.0.sasl.#", "1"), + resource.TestCheckResourceAttr(resourceName, "client_authentication.0.sasl.0.iam", "true"), + resource.TestCheckResourceAttr(resourceName, "bootstrap_brokers", ""), + resource.TestMatchResourceAttr(resourceName, "bootstrap_brokers_sasl_iam", mskClusterBoostrapBrokersSaslIamRegexp), + resource.TestCheckResourceAttr(resourceName, "bootstrap_brokers_tls", ""), + testCheckResourceAttrIsSortedCsv(resourceName, "bootstrap_brokers_sasl_iam"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "current_version", + }, + }, + { + Config: testAccMskClusterConfigClientAuthenticationSaslIam(rName, false), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckMskClusterExists(resourceName, &cluster2), + testAccCheckMskClusterRecreated(&cluster1, &cluster2), + resource.TestCheckResourceAttr(resourceName, "client_authentication.#", "1"), + resource.TestCheckResourceAttr(resourceName, "client_authentication.0.sasl.#", "1"), + resource.TestCheckResourceAttr(resourceName, "client_authentication.0.sasl.0.iam", "false"), + resource.TestCheckResourceAttr(resourceName, "bootstrap_brokers", ""), + resource.TestCheckResourceAttr(resourceName, "bootstrap_brokers_sasl_iam", ""), + resource.TestMatchResourceAttr(resourceName, "bootstrap_brokers_tls", mskClusterBoostrapBrokersTlsRegexp), testCheckResourceAttrIsSortedCsv(resourceName, "bootstrap_brokers_tls"), ), }, @@ -268,6 +326,9 @@ func TestAccAWSMskCluster_ClientAuthentication_Sasl_Scram(t *testing.T) { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "current_version", + }, }, }, }) @@ -299,6 +360,9 @@ func TestAccAWSMskCluster_ClientAuthentication_Tls_CertificateAuthorityArns(t *t ResourceName: resourceName, ImportState: 
true, ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "current_version", + }, }, }, }) @@ -331,6 +395,9 @@ func TestAccAWSMskCluster_ConfigurationInfo_Revision(t *testing.T) { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "current_version", + }, }, { Config: testAccMskClusterConfigConfigurationInfoRevision2(rName), @@ -394,11 +461,9 @@ func TestAccAWSMskCluster_EncryptionInfo_EncryptionInTransit_ClientBroker(t *tes resource.TestCheckResourceAttr(resourceName, "encryption_info.#", "1"), resource.TestCheckResourceAttr(resourceName, "encryption_info.0.encryption_in_transit.#", "1"), resource.TestCheckResourceAttr(resourceName, "encryption_info.0.encryption_in_transit.0.client_broker", "PLAINTEXT"), - resource.TestMatchResourceAttr(resourceName, "bootstrap_brokers", mskClusterBoostrapBrokersRegexp), resource.TestCheckResourceAttr(resourceName, "bootstrap_brokers_sasl_scram", ""), resource.TestCheckResourceAttr(resourceName, "bootstrap_brokers_tls", ""), - testCheckResourceAttrIsSortedCsv(resourceName, "bootstrap_brokers"), ), }, @@ -406,6 +471,9 @@ func TestAccAWSMskCluster_EncryptionInfo_EncryptionInTransit_ClientBroker(t *tes ResourceName: resourceName, ImportState: true, ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "current_version", + }, }, }, }) @@ -435,6 +503,9 @@ func TestAccAWSMskCluster_EncryptionInfo_EncryptionInTransit_InCluster(t *testin ResourceName: resourceName, ImportState: true, ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "current_version", + }, }, }, }) @@ -462,6 +533,9 @@ func TestAccAWSMskCluster_EnhancedMonitoring(t *testing.T) { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "current_version", + }, }, { Config: testAccMskClusterConfigEnhancedMonitoring(rName, "PER_TOPIC_PER_BROKER"), @@ -499,7 +573,6 @@ func TestAccAWSMskCluster_NumberOfBrokerNodes(t *testing.T) { 
resource.TestCheckResourceAttrPair(resourceName, "broker_node_group_info.0.client_subnets.1", "aws_subnet.example_subnet_az2", "id"), resource.TestCheckResourceAttrPair(resourceName, "broker_node_group_info.0.client_subnets.2", "aws_subnet.example_subnet_az3", "id"), resource.TestCheckResourceAttr(resourceName, "number_of_broker_nodes", "3"), - testCheckResourceAttrIsSortedCsv(resourceName, "bootstrap_brokers_tls"), ), }, @@ -525,7 +598,6 @@ func TestAccAWSMskCluster_NumberOfBrokerNodes(t *testing.T) { resource.TestCheckResourceAttrPair(resourceName, "broker_node_group_info.0.client_subnets.1", "aws_subnet.example_subnet_az2", "id"), resource.TestCheckResourceAttrPair(resourceName, "broker_node_group_info.0.client_subnets.2", "aws_subnet.example_subnet_az3", "id"), resource.TestCheckResourceAttr(resourceName, "number_of_broker_nodes", "6"), - testCheckResourceAttrIsSortedCsv(resourceName, "bootstrap_brokers_tls"), ), }, @@ -610,6 +682,9 @@ func TestAccAWSMskCluster_LoggingInfo(t *testing.T) { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "current_version", + }, }, { Config: testAccMskClusterConfigLoggingInfo(rName, true, true, true), @@ -642,23 +717,26 @@ func TestAccAWSMskCluster_KafkaVersionUpgrade(t *testing.T) { CheckDestroy: testAccCheckMskClusterDestroy, Steps: []resource.TestStep{ { - Config: testAccMskClusterConfigKafkaVersion(rName, "2.2.1"), + Config: testAccMskClusterConfigKafkaVersion(rName, "2.7.1"), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckMskClusterExists(resourceName, &cluster1), - resource.TestCheckResourceAttr(resourceName, "kafka_version", "2.2.1"), + resource.TestCheckResourceAttr(resourceName, "kafka_version", "2.7.1"), ), }, { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "current_version", + }, }, { - Config: testAccMskClusterConfigKafkaVersion(rName, "2.4.1.1"), + Config: 
testAccMskClusterConfigKafkaVersion(rName, "2.8.0"), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckMskClusterExists(resourceName, &cluster2), testAccCheckMskClusterNotRecreated(&cluster1, &cluster2), - resource.TestCheckResourceAttr(resourceName, "kafka_version", "2.4.1.1"), + resource.TestCheckResourceAttr(resourceName, "kafka_version", "2.8.0"), ), }, }, @@ -677,15 +755,13 @@ func TestAccAWSMskCluster_KafkaVersionDowngrade(t *testing.T) { CheckDestroy: testAccCheckMskClusterDestroy, Steps: []resource.TestStep{ { - Config: testAccMskClusterConfigKafkaVersion(rName, "2.4.1.1"), + Config: testAccMskClusterConfigKafkaVersion(rName, "2.8.0"), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckMskClusterExists(resourceName, &cluster1), - resource.TestCheckResourceAttr(resourceName, "kafka_version", "2.4.1.1"), - + resource.TestCheckResourceAttr(resourceName, "kafka_version", "2.8.0"), resource.TestMatchResourceAttr(resourceName, "bootstrap_brokers", mskClusterBoostrapBrokersRegexp), resource.TestCheckResourceAttr(resourceName, "bootstrap_brokers_sasl_scram", ""), resource.TestMatchResourceAttr(resourceName, "bootstrap_brokers_tls", mskClusterBoostrapBrokersTlsRegexp), - testCheckResourceAttrIsSortedCsv(resourceName, "bootstrap_brokers"), testCheckResourceAttrIsSortedCsv(resourceName, "bootstrap_brokers_tls"), ), @@ -694,18 +770,19 @@ func TestAccAWSMskCluster_KafkaVersionDowngrade(t *testing.T) { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "current_version", + }, }, { - Config: testAccMskClusterConfigKafkaVersion(rName, "2.2.1"), + Config: testAccMskClusterConfigKafkaVersion(rName, "2.7.1"), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckMskClusterExists(resourceName, &cluster2), testAccCheckMskClusterRecreated(&cluster1, &cluster2), - resource.TestCheckResourceAttr(resourceName, "kafka_version", "2.2.1"), - + resource.TestCheckResourceAttr(resourceName, "kafka_version", 
"2.7.1"), resource.TestMatchResourceAttr(resourceName, "bootstrap_brokers", mskClusterBoostrapBrokersRegexp), resource.TestCheckResourceAttr(resourceName, "bootstrap_brokers_sasl_scram", ""), resource.TestMatchResourceAttr(resourceName, "bootstrap_brokers_tls", mskClusterBoostrapBrokersTlsRegexp), - testCheckResourceAttrIsSortedCsv(resourceName, "bootstrap_brokers"), testCheckResourceAttrIsSortedCsv(resourceName, "bootstrap_brokers_tls"), ), @@ -728,10 +805,10 @@ func TestAccAWSMskCluster_KafkaVersionUpgradeWithConfigurationInfo(t *testing.T) CheckDestroy: testAccCheckMskClusterDestroy, Steps: []resource.TestStep{ { - Config: testAccMskClusterConfigKafkaVersionWithConfigurationInfo(rName, "2.2.1", "config1"), + Config: testAccMskClusterConfigKafkaVersionWithConfigurationInfo(rName, "2.7.1", "config1"), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckMskClusterExists(resourceName, &cluster1), - resource.TestCheckResourceAttr(resourceName, "kafka_version", "2.2.1"), + resource.TestCheckResourceAttr(resourceName, "kafka_version", "2.7.1"), resource.TestCheckResourceAttr(resourceName, "configuration_info.#", "1"), resource.TestCheckResourceAttrPair(resourceName, "configuration_info.0.arn", configurationResourceName1, "arn"), resource.TestCheckResourceAttrPair(resourceName, "configuration_info.0.revision", configurationResourceName1, "latest_revision"), @@ -741,13 +818,16 @@ func TestAccAWSMskCluster_KafkaVersionUpgradeWithConfigurationInfo(t *testing.T) ResourceName: resourceName, ImportState: true, ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "current_version", + }, }, { - Config: testAccMskClusterConfigKafkaVersionWithConfigurationInfo(rName, "2.4.1.1", "config2"), + Config: testAccMskClusterConfigKafkaVersionWithConfigurationInfo(rName, "2.8.0", "config2"), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckMskClusterExists(resourceName, &cluster2), testAccCheckMskClusterNotRecreated(&cluster1, &cluster2), - 
resource.TestCheckResourceAttr(resourceName, "kafka_version", "2.4.1.1"), + resource.TestCheckResourceAttr(resourceName, "kafka_version", "2.8.0"), resource.TestCheckResourceAttr(resourceName, "configuration_info.#", "1"), resource.TestCheckResourceAttrPair(resourceName, "configuration_info.0.arn", configurationResourceName2, "arn"), resource.TestCheckResourceAttrPair(resourceName, "configuration_info.0.revision", configurationResourceName2, "latest_revision"), @@ -790,6 +870,9 @@ func TestAccAWSMskCluster_Tags(t *testing.T) { ResourceName: resourceName, ImportState: true, ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "current_version", + }, }, }, }) @@ -914,22 +997,13 @@ func testAccPreCheckAWSMsk(t *testing.T) { } } -func testAccMskClusterBaseConfig() string { - return ` +func testAccMskClusterBaseConfig(rName string) string { + return composeConfig(testAccAvailableAZsNoOptInConfig(), fmt.Sprintf(` resource "aws_vpc" "example_vpc" { cidr_block = "192.168.0.0/22" tags = { - Name = "tf-testacc-msk-cluster-vpc" - } -} - -data "aws_availability_zones" "available" { - state = "available" - - filter { - name = "opt-in-status" - values = ["opt-in-not-required"] + Name = %[1]q } } @@ -939,7 +1013,7 @@ resource "aws_subnet" "example_subnet_az1" { availability_zone = data.aws_availability_zones.available.names[0] tags = { - Name = "tf-testacc-msk-cluster-subnet-az1" + Name = %[1]q } } @@ -949,7 +1023,7 @@ resource "aws_subnet" "example_subnet_az2" { availability_zone = data.aws_availability_zones.available.names[1] tags = { - Name = "tf-testacc-msk-cluster-subnet-az2" + Name = %[1]q } } @@ -959,21 +1033,21 @@ resource "aws_subnet" "example_subnet_az3" { availability_zone = data.aws_availability_zones.available.names[2] tags = { - Name = "tf-testacc-msk-cluster-subnet-az3" + Name = %[1]q } } resource "aws_security_group" "example_sg" { vpc_id = aws_vpc.example_vpc.id } -` +`, rName)) } func testAccMskClusterConfig_basic(rName string) string { - return 
testAccMskClusterBaseConfig() + fmt.Sprintf(` + return composeConfig(testAccMskClusterBaseConfig(rName), fmt.Sprintf(` resource "aws_msk_cluster" "test" { cluster_name = %[1]q - kafka_version = "2.2.1" + kafka_version = "2.7.1" number_of_broker_nodes = 3 broker_node_group_info { @@ -983,14 +1057,14 @@ resource "aws_msk_cluster" "test" { security_groups = [aws_security_group.example_sg.id] } } -`, rName) +`, rName)) } func testAccMskClusterConfigBrokerNodeGroupInfoEbsVolumeSize(rName string, ebsVolumeSize int) string { - return testAccMskClusterBaseConfig() + fmt.Sprintf(` + return composeConfig(testAccMskClusterBaseConfig(rName), fmt.Sprintf(` resource "aws_msk_cluster" "test" { cluster_name = %[1]q - kafka_version = "2.2.1" + kafka_version = "2.7.1" number_of_broker_nodes = 3 broker_node_group_info { @@ -1000,14 +1074,14 @@ resource "aws_msk_cluster" "test" { security_groups = [aws_security_group.example_sg.id] } } -`, rName, ebsVolumeSize) +`, rName, ebsVolumeSize)) } func testAccMskClusterConfigBrokerNodeGroupInfoInstanceType(rName string, t string) string { - return testAccMskClusterBaseConfig() + fmt.Sprintf(` + return composeConfig(testAccMskClusterBaseConfig(rName), fmt.Sprintf(` resource "aws_msk_cluster" "test" { cluster_name = %[1]q - kafka_version = "2.2.1" + kafka_version = "2.7.1" number_of_broker_nodes = 3 broker_node_group_info { @@ -1017,11 +1091,11 @@ resource "aws_msk_cluster" "test" { security_groups = [aws_security_group.example_sg.id] } } -`, rName, t) +`, rName, t)) } func testAccMskClusterConfigClientAuthenticationTlsCertificateAuthorityArns(rName string) string { - return testAccMskClusterBaseConfig() + fmt.Sprintf(` + return composeConfig(testAccMskClusterBaseConfig(rName), fmt.Sprintf(` resource "aws_acmpca_certificate_authority" "test" { certificate_authority_configuration { key_algorithm = "RSA_4096" @@ -1035,7 +1109,7 @@ resource "aws_acmpca_certificate_authority" "test" { resource "aws_msk_cluster" "test" { cluster_name = %[1]q - 
kafka_version = "2.2.1" + kafka_version = "2.7.1" number_of_broker_nodes = 3 broker_node_group_info { @@ -1057,14 +1131,14 @@ resource "aws_msk_cluster" "test" { } } } -`, rName) +`, rName)) } func testAccMskClusterConfigClientAuthenticationSaslScram(rName string, enabled bool) string { - return testAccMskClusterBaseConfig() + fmt.Sprintf(` + return composeConfig(testAccMskClusterBaseConfig(rName), fmt.Sprintf(` resource "aws_msk_cluster" "test" { cluster_name = %[1]q - kafka_version = "2.6.0" + kafka_version = "2.7.1" number_of_broker_nodes = 3 broker_node_group_info { @@ -1080,13 +1154,36 @@ resource "aws_msk_cluster" "test" { } } } -`, rName, enabled) +`, rName, enabled)) +} + +func testAccMskClusterConfigClientAuthenticationSaslIam(rName string, enabled bool) string { + return composeConfig(testAccMskClusterBaseConfig(rName), fmt.Sprintf(` +resource "aws_msk_cluster" "test" { + cluster_name = %[1]q + kafka_version = "2.7.1" + number_of_broker_nodes = 3 + + broker_node_group_info { + client_subnets = [aws_subnet.example_subnet_az1.id, aws_subnet.example_subnet_az2.id, aws_subnet.example_subnet_az3.id] + ebs_volume_size = 10 + instance_type = "kafka.m5.large" + security_groups = [aws_security_group.example_sg.id] + } + + client_authentication { + sasl { + iam = %t + } + } +} +`, rName, enabled)) } func testAccMskClusterConfigConfigurationInfoRevision1(rName string) string { - return testAccMskClusterBaseConfig() + fmt.Sprintf(` + return composeConfig(testAccMskClusterBaseConfig(rName), fmt.Sprintf(` resource "aws_msk_configuration" "test" { - kafka_versions = ["2.2.1"] + kafka_versions = ["2.7.1"] name = "%[1]s-1" server_properties = < 0 { + input.KafkaVersions = expandStringSet(v.(*schema.Set)) + } + output, err := conn.CreateConfiguration(input) if err != nil { diff --git a/aws/resource_aws_msk_configuration_test.go b/aws/resource_aws_msk_configuration_test.go index aaab5687196..6be7cc7bcf5 100644 --- a/aws/resource_aws_msk_configuration_test.go +++ 
b/aws/resource_aws_msk_configuration_test.go @@ -92,7 +92,7 @@ func TestAccAWSMskConfiguration_basic(t *testing.T) { testAccCheckMskConfigurationExists(resourceName, &configuration1), testAccMatchResourceAttrRegionalARN(resourceName, "arn", "kafka", regexp.MustCompile(`configuration/.+`)), resource.TestCheckResourceAttr(resourceName, "description", ""), - resource.TestCheckResourceAttr(resourceName, "kafka_versions.#", "1"), + resource.TestCheckResourceAttr(resourceName, "kafka_versions.#", "0"), resource.TestCheckResourceAttr(resourceName, "latest_revision", "1"), resource.TestCheckResourceAttr(resourceName, "name", rName), resource.TestMatchResourceAttr(resourceName, "server_properties", regexp.MustCompile(`auto.create.topics.enable = true`)), @@ -181,6 +181,8 @@ func TestAccAWSMskConfiguration_KafkaVersions(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckMskConfigurationExists(resourceName, &configuration1), resource.TestCheckResourceAttr(resourceName, "kafka_versions.#", "2"), + resource.TestCheckTypeSetElemAttr(resourceName, "kafka_versions.*", "2.6.0"), + resource.TestCheckTypeSetElemAttr(resourceName, "kafka_versions.*", "2.7.0"), ), }, { @@ -291,8 +293,7 @@ func testAccCheckMskConfigurationExists(resourceName string, configuration *kafk func testAccMskConfigurationConfig(rName string) string { return fmt.Sprintf(` resource "aws_msk_configuration" "test" { - kafka_versions = ["2.1.0"] - name = %[1]q + name = %[1]q server_properties = < 0 { + input.Tags = tags.IgnoreAws().SchemasTags() + } + + log.Printf("[DEBUG] Creating EventBridge Schemas Discoverer: %s", input) + output, err := conn.CreateDiscoverer(input) + + if err != nil { + return fmt.Errorf("error creating EventBridge Schemas Discoverer (%s): %w", sourceARN, err) + } + + d.SetId(aws.StringValue(output.DiscovererId)) + + return resourceAwsSchemasDiscovererRead(d, meta) +} + +func resourceAwsSchemasDiscovererRead(d *schema.ResourceData, meta interface{}) error { + conn := 
meta.(*AWSClient).schemasconn + defaultTagsConfig := meta.(*AWSClient).DefaultTagsConfig + ignoreTagsConfig := meta.(*AWSClient).IgnoreTagsConfig + + output, err := finder.DiscovererByID(conn, d.Id()) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] EventBridge Schemas Discoverer (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return fmt.Errorf("error reading EventBridge Schemas Discoverer (%s): %w", d.Id(), err) + } + + d.Set("arn", output.DiscovererArn) + d.Set("description", output.Description) + d.Set("source_arn", output.SourceArn) + + tags, err := keyvaluetags.SchemasListTags(conn, d.Get("arn").(string)) + + if err != nil { + return fmt.Errorf("error listing tags for EventBridge Schemas Discoverer (%s): %w", d.Id(), err) + } + + tags = tags.IgnoreAws().IgnoreConfig(ignoreTagsConfig) + + if err := d.Set("tags", tags.RemoveDefaultConfig(defaultTagsConfig).Map()); err != nil { + return fmt.Errorf("error setting tags: %w", err) + } + + if err := d.Set("tags_all", tags.Map()); err != nil { + return fmt.Errorf("error setting tags_all: %w", err) + } + + return nil +} + +func resourceAwsSchemasDiscovererUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).schemasconn + + if d.HasChange("description") { + input := &schemas.UpdateDiscovererInput{ + DiscovererId: aws.String(d.Id()), + Description: aws.String(d.Get("description").(string)), + } + + log.Printf("[DEBUG] Updating EventBridge Schemas Discoverer: %s", input) + _, err := conn.UpdateDiscoverer(input) + + if err != nil { + return fmt.Errorf("error updating EventBridge Schemas Discoverer (%s): %w", d.Id(), err) + } + } + + if d.HasChange("tags_all") { + o, n := d.GetChange("tags_all") + if err := keyvaluetags.SchemasUpdateTags(conn, d.Get("arn").(string), o, n); err != nil { + return fmt.Errorf("error updating tags: %w", err) + } + } + + return resourceAwsSchemasDiscovererRead(d, meta) +} + +func 
resourceAwsSchemasDiscovererDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).schemasconn + + log.Printf("[INFO] Deleting EventBridge Schemas Discoverer (%s)", d.Id()) + _, err := conn.DeleteDiscoverer(&schemas.DeleteDiscovererInput{ + DiscovererId: aws.String(d.Id()), + }) + + if tfawserr.ErrCodeEquals(err, schemas.ErrCodeNotFoundException) { + return nil + } + + if err != nil { + return fmt.Errorf("error deleting EventBridge Schemas Discoverer (%s): %w", d.Id(), err) + } + + return nil +} diff --git a/aws/resource_aws_schemas_discoverer_test.go b/aws/resource_aws_schemas_discoverer_test.go new file mode 100644 index 00000000000..82d271d6b02 --- /dev/null +++ b/aws/resource_aws_schemas_discoverer_test.go @@ -0,0 +1,311 @@ +package aws + +import ( + "fmt" + "log" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/schemas" + "github.com/hashicorp/go-multierror" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/schemas/finder" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/tfresource" +) + +func init() { + resource.AddTestSweepers("aws_schemas_discoverer", &resource.Sweeper{ + Name: "aws_schemas_discoverer", + F: testSweepSchemasDiscoverers, + }) +} + +func testSweepSchemasDiscoverers(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("Error getting client: %w", err) + } + conn := client.(*AWSClient).schemasconn + input := &schemas.ListDiscoverersInput{} + var sweeperErrs *multierror.Error + + err = conn.ListDiscoverersPages(input, func(page *schemas.ListDiscoverersOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, discoverer := range page.Discoverers { + r := 
resourceAwsSchemasDiscoverer() + d := r.Data(nil) + d.SetId(aws.StringValue(discoverer.DiscovererId)) + err = r.Delete(d, client) + + if err != nil { + log.Printf("[ERROR] %s", err) + sweeperErrs = multierror.Append(sweeperErrs, err) + continue + } + } + + return !lastPage + }) + + if testSweepSkipSweepError(err) { + log.Printf("[WARN] Skipping EventBridge Schemas Discoverer sweep for %s: %s", region, err) + return sweeperErrs.ErrorOrNil() // In case we have completed some pages, but had errors + } + + if err != nil { + sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing EventBridge Schemas Discoverers: %w", err)) + } + + return sweeperErrs.ErrorOrNil() +} + +func TestAccAWSSchemasDiscoverer_basic(t *testing.T) { + var v schemas.DescribeDiscovererOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_schemas_discoverer.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPartitionHasServicePreCheck(schemas.EndpointsID, t) }, + ErrorCheck: testAccErrorCheck(t, schemas.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSchemasDiscovererDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSchemasDiscovererConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasDiscovererExists(resourceName, &v), + testAccCheckResourceAttrRegionalARN(resourceName, "arn", "schemas", fmt.Sprintf("discoverer/events-event-bus-%s", rName)), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSSchemasDiscoverer_disappears(t *testing.T) { + var v schemas.DescribeDiscovererOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_schemas_discoverer.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { 
testAccPreCheck(t); testAccPartitionHasServicePreCheck(schemas.EndpointsID, t) }, + ErrorCheck: testAccErrorCheck(t, schemas.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSchemasDiscovererDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSchemasDiscovererConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasDiscovererExists(resourceName, &v), + testAccCheckResourceDisappears(testAccProvider, resourceAwsSchemasDiscoverer(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccAWSSchemasDiscoverer_Description(t *testing.T) { + var v schemas.DescribeDiscovererOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_schemas_discoverer.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPartitionHasServicePreCheck(schemas.EndpointsID, t) }, + ErrorCheck: testAccErrorCheck(t, schemas.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSchemasDiscovererDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSchemasDiscovererConfigDescription(rName, "description1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasDiscovererExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "description", "description1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSSchemasDiscovererConfigDescription(rName, "description2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasDiscovererExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "description", "description2"), + ), + }, + { + Config: testAccAWSSchemasDiscovererConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasDiscovererExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "description", ""), + ), + }, + }, + }) +} + +func TestAccAWSSchemasDiscoverer_Tags(t 
*testing.T) { + var v schemas.DescribeDiscovererOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_schemas_discoverer.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPartitionHasServicePreCheck(schemas.EndpointsID, t) }, + ErrorCheck: testAccErrorCheck(t, schemas.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSchemasDiscovererDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSchemasDiscovererConfigTags1(rName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasDiscovererExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSSchemasDiscovererConfigTags2(rName, "key1", "value1updated", "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasDiscovererExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccAWSSchemasDiscovererConfigTags1(rName, "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasDiscovererExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + }, + }) +} + +func testAccCheckAWSSchemasDiscovererDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).schemasconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_schemas_discoverer" { + continue + } + + _, err := finder.DiscovererByID(conn, rs.Primary.ID) + + if tfresource.NotFound(err) { + continue + } + + if 
err != nil { + return err + } + + return fmt.Errorf("EventBridge Schemas Discoverer %s still exists", rs.Primary.ID) + } + + return nil +} + +func testAccCheckSchemasDiscovererExists(n string, v *schemas.DescribeDiscovererOutput) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No EventBridge Schemas Discoverer ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).schemasconn + + output, err := finder.DiscovererByID(conn, rs.Primary.ID) + + if err != nil { + return err + } + + *v = *output + + return nil + } +} + +func testAccAWSSchemasDiscovererConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_cloudwatch_event_bus" "test" { + name = %[1]q +} + +resource "aws_schemas_discoverer" "test" { + source_arn = aws_cloudwatch_event_bus.test.arn +} +`, rName) +} + +func testAccAWSSchemasDiscovererConfigDescription(rName, description string) string { + return fmt.Sprintf(` +resource "aws_cloudwatch_event_bus" "test" { + name = %[1]q +} + +resource "aws_schemas_discoverer" "test" { + source_arn = aws_cloudwatch_event_bus.test.arn + + description = %[2]q +} +`, rName, description) +} + +func testAccAWSSchemasDiscovererConfigTags1(rName, tagKey1, tagValue1 string) string { + return fmt.Sprintf(` +resource "aws_cloudwatch_event_bus" "test" { + name = %[1]q +} + +resource "aws_schemas_discoverer" "test" { + source_arn = aws_cloudwatch_event_bus.test.arn + + tags = { + %[2]q = %[3]q + } +} +`, rName, tagKey1, tagValue1) +} + +func testAccAWSSchemasDiscovererConfigTags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { + return fmt.Sprintf(` +resource "aws_cloudwatch_event_bus" "test" { + name = %[1]q +} + +resource "aws_schemas_discoverer" "test" { + source_arn = aws_cloudwatch_event_bus.test.arn + + tags = { + %[2]q = %[3]q + %[4]q = %[5]q + } +} +`, rName, tagKey1, tagValue1, 
tagKey2, tagValue2) +} diff --git a/aws/resource_aws_schemas_registry.go b/aws/resource_aws_schemas_registry.go new file mode 100644 index 00000000000..c14839e3854 --- /dev/null +++ b/aws/resource_aws_schemas_registry.go @@ -0,0 +1,172 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/schemas" + "github.com/hashicorp/aws-sdk-go-base/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/keyvaluetags" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/schemas/finder" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/tfresource" +) + +func resourceAwsSchemasRegistry() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsSchemasRegistryCreate, + Read: resourceAwsSchemasRegistryRead, + Update: resourceAwsSchemasRegistryUpdate, + Delete: resourceAwsSchemasRegistryDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + + "description": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(0, 256), + }, + + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 64), + validation.StringMatch(regexp.MustCompile(`^[\.\-_A-Za-z0-9]+`), ""), + ), + }, + + "tags": tagsSchema(), + "tags_all": tagsSchemaComputed(), + }, + + CustomizeDiff: SetTagsDiff, + } +} + +func resourceAwsSchemasRegistryCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).schemasconn + defaultTagsConfig := meta.(*AWSClient).DefaultTagsConfig + tags := defaultTagsConfig.MergeTags(keyvaluetags.New(d.Get("tags").(map[string]interface{}))) 
+ + name := d.Get("name").(string) + input := &schemas.CreateRegistryInput{ + RegistryName: aws.String(name), + } + + if v, ok := d.GetOk("description"); ok { + input.Description = aws.String(v.(string)) + } + + if len(tags) > 0 { + input.Tags = tags.IgnoreAws().SchemasTags() + } + + log.Printf("[DEBUG] Creating EventBridge Schemas Registry: %s", input) + _, err := conn.CreateRegistry(input) + + if err != nil { + return fmt.Errorf("error creating EventBridge Schemas Registry (%s): %w", name, err) + } + + d.SetId(aws.StringValue(input.RegistryName)) + + return resourceAwsSchemasRegistryRead(d, meta) +} + +func resourceAwsSchemasRegistryRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).schemasconn + defaultTagsConfig := meta.(*AWSClient).DefaultTagsConfig + ignoreTagsConfig := meta.(*AWSClient).IgnoreTagsConfig + + output, err := finder.RegistryByName(conn, d.Id()) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] EventBridge Schemas Registry (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return fmt.Errorf("error reading EventBridge Schemas Registry (%s): %w", d.Id(), err) + } + + d.Set("arn", output.RegistryArn) + d.Set("description", output.Description) + d.Set("name", output.RegistryName) + + tags, err := keyvaluetags.SchemasListTags(conn, d.Get("arn").(string)) + + if err != nil { + return fmt.Errorf("error listing tags for EventBridge Schemas Registry (%s): %w", d.Id(), err) + } + + tags = tags.IgnoreAws().IgnoreConfig(ignoreTagsConfig) + + if err := d.Set("tags", tags.RemoveDefaultConfig(defaultTagsConfig).Map()); err != nil { + return fmt.Errorf("error setting tags: %w", err) + } + + if err := d.Set("tags_all", tags.Map()); err != nil { + return fmt.Errorf("error setting tags_all: %w", err) + } + + return nil +} + +func resourceAwsSchemasRegistryUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).schemasconn + + if 
d.HasChanges("description") { + input := &schemas.UpdateRegistryInput{ + Description: aws.String(d.Get("description").(string)), + RegistryName: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Updating EventBridge Schemas Registry: %s", input) + _, err := conn.UpdateRegistry(input) + + if err != nil { + return fmt.Errorf("error updating EventBridge Schemas Registry (%s): %w", d.Id(), err) + } + } + + if d.HasChange("tags_all") { + o, n := d.GetChange("tags_all") + if err := keyvaluetags.SchemasUpdateTags(conn, d.Get("arn").(string), o, n); err != nil { + return fmt.Errorf("error updating tags: %w", err) + } + } + + return resourceAwsSchemasRegistryRead(d, meta) +} + +func resourceAwsSchemasRegistryDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).schemasconn + + log.Printf("[INFO] Deleting EventBridge Schemas Registry (%s)", d.Id()) + _, err := conn.DeleteRegistry(&schemas.DeleteRegistryInput{ + RegistryName: aws.String(d.Id()), + }) + + if tfawserr.ErrCodeEquals(err, schemas.ErrCodeNotFoundException) { + return nil + } + + if err != nil { + return fmt.Errorf("error deleting EventBridge Schemas Registry (%s): %w", d.Id(), err) + } + + return nil +} diff --git a/aws/resource_aws_schemas_registry_test.go b/aws/resource_aws_schemas_registry_test.go new file mode 100644 index 00000000000..f53d09a2885 --- /dev/null +++ b/aws/resource_aws_schemas_registry_test.go @@ -0,0 +1,337 @@ +package aws + +import ( + "fmt" + "log" + "strings" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/schemas" + "github.com/hashicorp/go-multierror" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + tfschemas "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/schemas" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/schemas/finder" + 
"github.com/terraform-providers/terraform-provider-aws/aws/internal/tfresource" +) + +func init() { + resource.AddTestSweepers("aws_schemas_registry", &resource.Sweeper{ + Name: "aws_schemas_registry", + F: testSweepSchemasRegistries, + }) +} + +func testSweepSchemasRegistries(region string) error { + client, err := sharedClientForRegion(region) + if err != nil { + return fmt.Errorf("Error getting client: %w", err) + } + conn := client.(*AWSClient).schemasconn + input := &schemas.ListRegistriesInput{} + var sweeperErrs *multierror.Error + + err = conn.ListRegistriesPages(input, func(page *schemas.ListRegistriesOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, registry := range page.Registries { + registryName := aws.StringValue(registry.RegistryName) + + input := &schemas.ListSchemasInput{ + RegistryName: aws.String(registryName), + } + + err = conn.ListSchemasPages(input, func(page *schemas.ListSchemasOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, schema := range page.Schemas { + schemaName := aws.StringValue(schema.SchemaName) + if strings.HasPrefix(schemaName, "aws.") { + continue + } + + r := resourceAwsSchemasSchema() + d := r.Data(nil) + d.SetId(tfschemas.SchemaCreateResourceID(schemaName, registryName)) + err = r.Delete(d, client) + + if err != nil { + log.Printf("[ERROR] %s", err) + sweeperErrs = multierror.Append(sweeperErrs, err) + continue + } + } + + return !lastPage + }) + + if err != nil { + sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing EventBridge Schemas Schemas: %w", err)) + } + + if strings.HasPrefix(registryName, "aws.") { + continue + } + + r := resourceAwsSchemasRegistry() + d := r.Data(nil) + d.SetId(registryName) + err = r.Delete(d, client) + + if err != nil { + log.Printf("[ERROR] %s", err) + sweeperErrs = multierror.Append(sweeperErrs, err) + continue + } + } + + return !lastPage + }) + + if testSweepSkipSweepError(err) { + log.Printf("[WARN] 
Skipping EventBridge Schemas Registry sweep for %s: %s", region, err) + return sweeperErrs.ErrorOrNil() // In case we have completed some pages, but had errors + } + + if err != nil { + sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing EventBridge Schemas Registries: %w", err)) + } + + return sweeperErrs.ErrorOrNil() +} + +func TestAccAWSSchemasRegistry_basic(t *testing.T) { + var v schemas.DescribeRegistryOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_schemas_registry.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPartitionHasServicePreCheck(schemas.EndpointsID, t) }, + ErrorCheck: testAccErrorCheck(t, schemas.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSchemasRegistryDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSchemasRegistryConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasRegistryExists(resourceName, &v), + testAccCheckResourceAttrRegionalARN(resourceName, "arn", "schemas", fmt.Sprintf("registry/%s", rName)), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSSchemasRegistry_disappears(t *testing.T) { + var v schemas.DescribeRegistryOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_schemas_registry.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPartitionHasServicePreCheck(schemas.EndpointsID, t) }, + ErrorCheck: testAccErrorCheck(t, schemas.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSchemasRegistryDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSchemasRegistryConfig(rName), + 
Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasRegistryExists(resourceName, &v), + testAccCheckResourceDisappears(testAccProvider, resourceAwsSchemasRegistry(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccAWSSchemasRegistry_Description(t *testing.T) { + var v schemas.DescribeRegistryOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_schemas_registry.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPartitionHasServicePreCheck(schemas.EndpointsID, t) }, + ErrorCheck: testAccErrorCheck(t, schemas.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSchemasRegistryDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSchemasRegistryConfigDescription(rName, "description1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasRegistryExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "description", "description1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSSchemasRegistryConfigDescription(rName, "description2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasRegistryExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "description", "description2"), + ), + }, + { + Config: testAccAWSSchemasRegistryConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasRegistryExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "description", ""), + ), + }, + }, + }) +} + +func TestAccAWSSchemasRegistry_Tags(t *testing.T) { + var v schemas.DescribeRegistryOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_schemas_registry.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPartitionHasServicePreCheck(schemas.EndpointsID, t) }, + ErrorCheck: testAccErrorCheck(t, 
schemas.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSchemasRegistryDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSchemasRegistryConfigTags1(rName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasRegistryExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSSchemasRegistryConfigTags2(rName, "key1", "value1updated", "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasRegistryExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccAWSSchemasRegistryConfigTags1(rName, "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasRegistryExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + }, + }) +} + +func testAccCheckAWSSchemasRegistryDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).schemasconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_schemas_registry" { + continue + } + + _, err := finder.RegistryByName(conn, rs.Primary.ID) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return fmt.Errorf("EventBridge Schemas Registry %s still exists", rs.Primary.ID) + } + + return nil +} + +func testAccCheckSchemasRegistryExists(n string, v *schemas.DescribeRegistryOutput) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return 
fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No EventBridge Schemas Registry ID is set") + } + + conn := testAccProvider.Meta().(*AWSClient).schemasconn + + output, err := finder.RegistryByName(conn, rs.Primary.ID) + + if err != nil { + return err + } + + *v = *output + + return nil + } +} + +func testAccAWSSchemasRegistryConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_schemas_registry" "test" { + name = %[1]q +} +`, rName) +} + +func testAccAWSSchemasRegistryConfigDescription(rName, description string) string { + return fmt.Sprintf(` +resource "aws_schemas_registry" "test" { + name = %[1]q + description = %[2]q +} +`, rName, description) +} + +func testAccAWSSchemasRegistryConfigTags1(rName, tagKey1, tagValue1 string) string { + return fmt.Sprintf(` +resource "aws_schemas_registry" "test" { + name = %[1]q + + tags = { + %[2]q = %[3]q + } +} +`, rName, tagKey1, tagValue1) +} + +func testAccAWSSchemasRegistryConfigTags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { + return fmt.Sprintf(` +resource "aws_schemas_registry" "test" { + name = %[1]q + + tags = { + %[2]q = %[3]q + %[4]q = %[5]q + } +} +`, rName, tagKey1, tagValue1, tagKey2, tagValue2) +} diff --git a/aws/resource_aws_schemas_schema.go b/aws/resource_aws_schemas_schema.go new file mode 100644 index 00000000000..ac1b657dbf8 --- /dev/null +++ b/aws/resource_aws_schemas_schema.go @@ -0,0 +1,255 @@ +package aws + +import ( + "fmt" + "log" + "regexp" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/schemas" + "github.com/hashicorp/aws-sdk-go-base/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/keyvaluetags" + tfschemas "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/schemas" + 
"github.com/terraform-providers/terraform-provider-aws/aws/internal/service/schemas/finder" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/tfresource" +) + +func resourceAwsSchemasSchema() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsSchemasSchemaCreate, + Read: resourceAwsSchemasSchemaRead, + Update: resourceAwsSchemasSchemaUpdate, + Delete: resourceAwsSchemasSchemaDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + + "content": { + Type: schema.TypeString, + Required: true, + DiffSuppressFunc: suppressEquivalentJsonDiffs, + }, + + "description": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(0, 256), + }, + + "last_modified": { + Type: schema.TypeString, + Computed: true, + }, + + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 385), + validation.StringMatch(regexp.MustCompile(`^[\.\-_A-Za-z@]+`), ""), + ), + }, + + "registry_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + + "type": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(schemas.Type_Values(), true), + }, + + "version": { + Type: schema.TypeString, + Computed: true, + }, + + "version_created_date": { + Type: schema.TypeString, + Computed: true, + }, + + "tags": tagsSchema(), + "tags_all": tagsSchemaComputed(), + }, + + CustomizeDiff: SetTagsDiff, + } +} + +func resourceAwsSchemasSchemaCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).schemasconn + defaultTagsConfig := meta.(*AWSClient).DefaultTagsConfig + tags := defaultTagsConfig.MergeTags(keyvaluetags.New(d.Get("tags").(map[string]interface{}))) + + name := d.Get("name").(string) + registryName := d.Get("registry_name").(string) + 
input := &schemas.CreateSchemaInput{ + Content: aws.String(d.Get("content").(string)), + RegistryName: aws.String(registryName), + SchemaName: aws.String(name), + Type: aws.String(d.Get("type").(string)), + } + + if v, ok := d.GetOk("description"); ok { + input.Description = aws.String(v.(string)) + } + + if len(tags) > 0 { + input.Tags = tags.IgnoreAws().SchemasTags() + } + + id := tfschemas.SchemaCreateResourceID(name, registryName) + + log.Printf("[DEBUG] Creating EventBridge Schemas Schema: %s", input) + _, err := conn.CreateSchema(input) + + if err != nil { + return fmt.Errorf("error creating EventBridge Schemas Schema (%s): %w", id, err) + } + + d.SetId(id) + + return resourceAwsSchemasSchemaRead(d, meta) +} + +func resourceAwsSchemasSchemaRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).schemasconn + defaultTagsConfig := meta.(*AWSClient).DefaultTagsConfig + ignoreTagsConfig := meta.(*AWSClient).IgnoreTagsConfig + + name, registryName, err := tfschemas.SchemaParseResourceID(d.Id()) + + if err != nil { + return fmt.Errorf("error parsing EventBridge Schemas Schema ID: %w", err) + } + + output, err := finder.SchemaByNameAndRegistryName(conn, name, registryName) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] EventBridge Schemas Schema (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return fmt.Errorf("error reading EventBridge Schemas Schema (%s): %w", d.Id(), err) + } + + d.Set("arn", output.SchemaArn) + d.Set("content", output.Content) + d.Set("description", output.Description) + if output.LastModified != nil { + d.Set("last_modified", aws.TimeValue(output.LastModified).Format(time.RFC3339)) + } else { + d.Set("last_modified", nil) + } + d.Set("name", output.SchemaName) + d.Set("registry_name", registryName) + d.Set("type", output.Type) + d.Set("version", output.SchemaVersion) + if output.VersionCreatedDate != nil { + d.Set("version_created_date", 
aws.TimeValue(output.VersionCreatedDate).Format(time.RFC3339)) + } else { + d.Set("version_created_date", nil) + } + + tags, err := keyvaluetags.SchemasListTags(conn, d.Get("arn").(string)) + + if err != nil { + return fmt.Errorf("error listing tags for EventBridge Schemas Schema (%s): %w", d.Id(), err) + } + + tags = tags.IgnoreAws().IgnoreConfig(ignoreTagsConfig) + + if err := d.Set("tags", tags.RemoveDefaultConfig(defaultTagsConfig).Map()); err != nil { + return fmt.Errorf("error setting tags: %w", err) + } + + if err := d.Set("tags_all", tags.Map()); err != nil { + return fmt.Errorf("error setting tags_all: %w", err) + } + + return nil +} + +func resourceAwsSchemasSchemaUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).schemasconn + + if d.HasChanges("content", "description", "type") { + name, registryName, err := tfschemas.SchemaParseResourceID(d.Id()) + + if err != nil { + return fmt.Errorf("error parsing EventBridge Schemas Schema ID: %w", err) + } + + input := &schemas.UpdateSchemaInput{ + RegistryName: aws.String(registryName), + SchemaName: aws.String(name), + } + + if d.HasChanges("content", "type") { + input.Content = aws.String(d.Get("content").(string)) + input.Type = aws.String(d.Get("type").(string)) + } + + if d.HasChange("description") { + input.Description = aws.String(d.Get("description").(string)) + } + + log.Printf("[DEBUG] Updating EventBridge Schemas Schema: %s", input) + _, err = conn.UpdateSchema(input) + + if err != nil { + return fmt.Errorf("error updating EventBridge Schemas Schema (%s): %w", d.Id(), err) + } + } + + if d.HasChange("tags_all") { + o, n := d.GetChange("tags_all") + if err := keyvaluetags.SchemasUpdateTags(conn, d.Get("arn").(string), o, n); err != nil { + return fmt.Errorf("error updating tags: %w", err) + } + } + + return resourceAwsSchemasSchemaRead(d, meta) +} + +func resourceAwsSchemasSchemaDelete(d *schema.ResourceData, meta interface{}) error { + conn := 
meta.(*AWSClient).schemasconn + + name, registryName, err := tfschemas.SchemaParseResourceID(d.Id()) + + if err != nil { + return fmt.Errorf("error parsing EventBridge Schemas Schema ID: %w", err) + } + + log.Printf("[INFO] Deleting EventBridge Schemas Schema (%s)", d.Id()) + _, err = conn.DeleteSchema(&schemas.DeleteSchemaInput{ + RegistryName: aws.String(registryName), + SchemaName: aws.String(name), + }) + + if tfawserr.ErrCodeEquals(err, schemas.ErrCodeNotFoundException) { + return nil + } + + if err != nil { + return fmt.Errorf("error deleting EventBridge Schemas Schema (%s): %w", d.Id(), err) + } + + return nil +} diff --git a/aws/resource_aws_schemas_schema_test.go b/aws/resource_aws_schemas_schema_test.go new file mode 100644 index 00000000000..21d5155c993 --- /dev/null +++ b/aws/resource_aws_schemas_schema_test.go @@ -0,0 +1,347 @@ +package aws + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/schemas" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + tfschemas "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/schemas" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/schemas/finder" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/tfresource" +) + +const ( + testAccAWSSchemasSchemaContent = ` +{ + "openapi": "3.0.0", + "info": { + "version": "1.0.0", + "title": "Event" + }, + "paths": {}, + "components": { + "schemas": { + "Event": { + "type": "object", + "properties": { + "name": { + "type": "string" + } + } + } + } + } +} +` + + testAccAWSSchemasSchemaContentUpdated = ` +{ + "openapi": "3.0.0", + "info": { + "version": "2.0.0", + "title": "Event" + }, + "paths": {}, + "components": { + "schemas": { + "Event": { + "type": "object", + "properties": { + "name": { + "type": "string" + }, + "created_at": { + "type": "string", 
+ "format": "date-time" + } + } + } + } + } +} +` +) + +func TestAccAWSSchemasSchema_basic(t *testing.T) { + var v schemas.DescribeSchemaOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_schemas_schema.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPartitionHasServicePreCheck(schemas.EndpointsID, t) }, + ErrorCheck: testAccErrorCheck(t, schemas.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSchemasSchemaDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSchemasSchemaConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasSchemaExists(resourceName, &v), + testAccCheckResourceAttrRegionalARN(resourceName, "arn", "schemas", fmt.Sprintf("schema/%s/%s", rName, rName)), + resource.TestCheckResourceAttrSet(resourceName, "content"), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttrSet(resourceName, "last_modified"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "registry_name", rName), + resource.TestCheckResourceAttr(resourceName, "type", "OpenApi3"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttr(resourceName, "version", "1"), + resource.TestCheckResourceAttrSet(resourceName, "version_created_date"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSSchemasSchema_disappears(t *testing.T) { + var v schemas.DescribeSchemaOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_schemas_schema.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPartitionHasServicePreCheck(schemas.EndpointsID, t) }, + ErrorCheck: testAccErrorCheck(t, schemas.EndpointsID), + Providers: testAccProviders, + CheckDestroy: 
testAccCheckAWSSchemasSchemaDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSchemasSchemaConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasSchemaExists(resourceName, &v), + testAccCheckResourceDisappears(testAccProvider, resourceAwsSchemasSchema(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccAWSSchemasSchema_ContentDescription(t *testing.T) { + var v schemas.DescribeSchemaOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_schemas_schema.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPartitionHasServicePreCheck(schemas.EndpointsID, t) }, + ErrorCheck: testAccErrorCheck(t, schemas.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSchemasSchemaDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSchemasSchemaConfigContentDescription(rName, testAccAWSSchemasSchemaContent, "description1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasSchemaExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "content", testAccAWSSchemasSchemaContent), + resource.TestCheckResourceAttr(resourceName, "description", "description1"), + resource.TestCheckResourceAttr(resourceName, "version", "1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSSchemasSchemaConfigContentDescription(rName, testAccAWSSchemasSchemaContentUpdated, "description2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasSchemaExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "content", testAccAWSSchemasSchemaContentUpdated), + resource.TestCheckResourceAttr(resourceName, "description", "description2"), + resource.TestCheckResourceAttr(resourceName, "version", "2"), + ), + }, + { + Config: testAccAWSSchemasSchemaConfig(rName), + Check: resource.ComposeTestCheckFunc( + 
testAccCheckSchemasSchemaExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "version", "3"), + ), + }, + }, + }) +} + +func TestAccAWSSchemasSchema_Tags(t *testing.T) { + var v schemas.DescribeSchemaOutput + rName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_schemas_schema.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPartitionHasServicePreCheck(schemas.EndpointsID, t) }, + ErrorCheck: testAccErrorCheck(t, schemas.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAWSSchemasSchemaDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSSchemasSchemaConfigTags1(rName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasSchemaExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccAWSSchemasSchemaConfigTags2(rName, "key1", "value1updated", "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasSchemaExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccAWSSchemasSchemaConfigTags1(rName, "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckSchemasSchemaExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + }, + }) +} + +func testAccCheckAWSSchemasSchemaDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).schemasconn + + for _, rs := 
range s.RootModule().Resources { + if rs.Type != "aws_schemas_schema" { + continue + } + + name, registryName, err := tfschemas.SchemaParseResourceID(rs.Primary.ID) + + if err != nil { + return err + } + + _, err = finder.SchemaByNameAndRegistryName(conn, name, registryName) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return fmt.Errorf("EventBridge Schemas Schema %s still exists", rs.Primary.ID) + } + + return nil +} + +func testAccCheckSchemasSchemaExists(n string, v *schemas.DescribeSchemaOutput) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No EventBridge Schemas Schema ID is set") + } + + name, registryName, err := tfschemas.SchemaParseResourceID(rs.Primary.ID) + + if err != nil { + return err + } + + conn := testAccProvider.Meta().(*AWSClient).schemasconn + + output, err := finder.SchemaByNameAndRegistryName(conn, name, registryName) + + if err != nil { + return err + } + + *v = *output + + return nil + } +} + +func testAccAWSSchemasSchemaConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_schemas_registry" "test" { + name = %[1]q +} + +resource "aws_schemas_schema" "test" { + name = %[1]q + registry_name = aws_schemas_registry.test.name + type = "OpenApi3" + content = %[2]q +} +`, rName, testAccAWSSchemasSchemaContent) +} + +func testAccAWSSchemasSchemaConfigContentDescription(rName, content, description string) string { + return fmt.Sprintf(` +resource "aws_schemas_registry" "test" { + name = %[1]q +} + +resource "aws_schemas_schema" "test" { + name = %[1]q + registry_name = aws_schemas_registry.test.name + type = "OpenApi3" + content = %[2]q + description = %[3]q +} +`, rName, content, description) +} + +func testAccAWSSchemasSchemaConfigTags1(rName, tagKey1, tagValue1 string) string { + return fmt.Sprintf(` +resource 
"aws_schemas_registry" "test" { + name = %[1]q +} + +resource "aws_schemas_schema" "test" { + name = %[1]q + registry_name = aws_schemas_registry.test.name + type = "OpenApi3" + content = %[2]q + + tags = { + %[3]q = %[4]q + } +} +`, rName, testAccAWSSchemasSchemaContent, tagKey1, tagValue1) +} + +func testAccAWSSchemasSchemaConfigTags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { + return fmt.Sprintf(` +resource "aws_schemas_registry" "test" { + name = %[1]q +} + +resource "aws_schemas_schema" "test" { + name = %[1]q + registry_name = aws_schemas_registry.test.name + type = "OpenApi3" + content = %[2]q + + tags = { + %[3]q = %[4]q + %[5]q = %[6]q + } +} +`, rName, testAccAWSSchemasSchemaContent, tagKey1, tagValue1, tagKey2, tagValue2) +} diff --git a/aws/resource_aws_servicecatalog_budget_resource_association.go b/aws/resource_aws_servicecatalog_budget_resource_association.go new file mode 100644 index 00000000000..1d7a4925a9f --- /dev/null +++ b/aws/resource_aws_servicecatalog_budget_resource_association.go @@ -0,0 +1,146 @@ +package aws + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/servicecatalog" + "github.com/hashicorp/aws-sdk-go-base/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + iamwaiter "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/iam/waiter" + tfservicecatalog "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/servicecatalog" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/servicecatalog/waiter" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/tfresource" +) + +func resourceAwsServiceCatalogBudgetResourceAssociation() *schema.Resource { + return &schema.Resource{ + Create: resourceAwsServiceCatalogBudgetResourceAssociationCreate, + Read: 
resourceAwsServiceCatalogBudgetResourceAssociationRead, + Delete: resourceAwsServiceCatalogBudgetResourceAssociationDelete, + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "budget_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "resource_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + }, + } +} + +func resourceAwsServiceCatalogBudgetResourceAssociationCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).scconn + + input := &servicecatalog.AssociateBudgetWithResourceInput{ + BudgetName: aws.String(d.Get("budget_name").(string)), + ResourceId: aws.String(d.Get("resource_id").(string)), + } + + var output *servicecatalog.AssociateBudgetWithResourceOutput + err := resource.Retry(iamwaiter.PropagationTimeout, func() *resource.RetryError { + var err error + + output, err = conn.AssociateBudgetWithResource(input) + + if tfawserr.ErrMessageContains(err, servicecatalog.ErrCodeInvalidParametersException, "profile does not exist") { + return resource.RetryableError(err) + } + + if err != nil { + return resource.NonRetryableError(err) + } + + return nil + }) + + if tfresource.TimedOut(err) { + output, err = conn.AssociateBudgetWithResource(input) + } + + if err != nil { + return fmt.Errorf("error associating Service Catalog Budget with Resource: %w", err) + } + + if output == nil { + return fmt.Errorf("error creating Service Catalog Budget Resource Association: empty response") + } + + d.SetId(tfservicecatalog.BudgetResourceAssociationID(d.Get("budget_name").(string), d.Get("resource_id").(string))) + + return resourceAwsServiceCatalogBudgetResourceAssociationRead(d, meta) +} + +func resourceAwsServiceCatalogBudgetResourceAssociationRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).scconn + + budgetName, resourceID, err := tfservicecatalog.BudgetResourceAssociationParseID(d.Id()) 
+ + if err != nil { + return fmt.Errorf("could not parse ID (%s): %w", d.Id(), err) + } + + output, err := waiter.BudgetResourceAssociationReady(conn, budgetName, resourceID) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] Service Catalog Budget Resource Association (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return fmt.Errorf("error describing Service Catalog Budget Resource Association (%s): %w", d.Id(), err) + } + + if output == nil { + return fmt.Errorf("error getting Service Catalog Budget Resource Association (%s): empty response", d.Id()) + } + + d.Set("resource_id", resourceID) + d.Set("budget_name", output.BudgetName) + + return nil +} + +func resourceAwsServiceCatalogBudgetResourceAssociationDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*AWSClient).scconn + + budgetName, resourceID, err := tfservicecatalog.BudgetResourceAssociationParseID(d.Id()) + + if err != nil { + return fmt.Errorf("could not parse ID (%s): %w", d.Id(), err) + } + + input := &servicecatalog.DisassociateBudgetFromResourceInput{ + ResourceId: aws.String(resourceID), + BudgetName: aws.String(budgetName), + } + + _, err = conn.DisassociateBudgetFromResource(input) + + if tfawserr.ErrCodeEquals(err, servicecatalog.ErrCodeResourceNotFoundException) { + return nil + } + + if err != nil { + return fmt.Errorf("error disassociating Service Catalog Budget from Resource (%s): %w", d.Id(), err) + } + + err = waiter.BudgetResourceAssociationDeleted(conn, budgetName, resourceID) + + if err != nil && !tfresource.NotFound(err) { + return fmt.Errorf("error waiting for Service Catalog Budget Resource Disassociation (%s): %w", d.Id(), err) + } + + return nil +} diff --git a/aws/resource_aws_servicecatalog_budget_resource_association_test.go b/aws/resource_aws_servicecatalog_budget_resource_association_test.go new file mode 100644 index 00000000000..160fb3c9886 --- /dev/null +++ 
b/aws/resource_aws_servicecatalog_budget_resource_association_test.go @@ -0,0 +1,268 @@ +package aws + +import ( + "fmt" + "log" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/servicecatalog" + multierror "github.com/hashicorp/go-multierror" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + tfservicecatalog "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/servicecatalog" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/servicecatalog/waiter" + "github.com/terraform-providers/terraform-provider-aws/aws/internal/tfresource" +) + +// add sweeper to delete known test servicecat budget resource associations +func init() { + resource.AddTestSweepers("aws_servicecatalog_budget_resource_association", &resource.Sweeper{ + Name: "aws_servicecatalog_budget_resource_association", + Dependencies: []string{}, + F: testSweepServiceCatalogBudgetResourceAssociations, + }) +} + +func testSweepServiceCatalogBudgetResourceAssociations(region string) error { + client, err := sharedClientForRegion(region) + + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + + conn := client.(*AWSClient).scconn + sweepResources := make([]*testSweepResource, 0) + var errs *multierror.Error + + input := &servicecatalog.ListPortfoliosInput{} + + err = conn.ListPortfoliosPages(input, func(page *servicecatalog.ListPortfoliosOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, port := range page.PortfolioDetails { + if port == nil { + continue + } + + resInput := &servicecatalog.ListBudgetsForResourceInput{ + ResourceId: port.Id, + } + + err = conn.ListBudgetsForResourcePages(resInput, func(page *servicecatalog.ListBudgetsForResourceOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, budget := 
range page.Budgets { + if budget == nil { + continue + } + + r := resourceAwsServiceCatalogBudgetResourceAssociation() + d := r.Data(nil) + d.SetId(tfservicecatalog.BudgetResourceAssociationID(aws.StringValue(budget.BudgetName), aws.StringValue(port.Id))) + + sweepResources = append(sweepResources, NewTestSweepResource(r, d, client)) + } + + return !lastPage + }) + } + + return !lastPage + }) + + if err != nil { + errs = multierror.Append(errs, fmt.Errorf("error describing Service Catalog Budget Resource (Portfolio) Associations for %s: %w", region, err)) + } + + prodInput := &servicecatalog.SearchProductsAsAdminInput{} + + err = conn.SearchProductsAsAdminPages(prodInput, func(page *servicecatalog.SearchProductsAsAdminOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, pvd := range page.ProductViewDetails { + if pvd == nil || pvd.ProductViewSummary == nil { + continue + } + + resInput := &servicecatalog.ListBudgetsForResourceInput{ + ResourceId: pvd.ProductViewSummary.ProductId, + } + + err = conn.ListBudgetsForResourcePages(resInput, func(page *servicecatalog.ListBudgetsForResourceOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, budget := range page.Budgets { + if budget == nil { + continue + } + + r := resourceAwsServiceCatalogBudgetResourceAssociation() + d := r.Data(nil) + d.SetId(tfservicecatalog.BudgetResourceAssociationID(aws.StringValue(budget.BudgetName), aws.StringValue(pvd.ProductViewSummary.ProductId))) + + sweepResources = append(sweepResources, NewTestSweepResource(r, d, client)) + } + + return !lastPage + }) + } + + return !lastPage + }) + + if err != nil { + errs = multierror.Append(errs, fmt.Errorf("error describing Service Catalog Budget Resource (Product) Associations for %s: %w", region, err)) + } + + if err = testSweepResourceOrchestrator(sweepResources); err != nil { + errs = multierror.Append(errs, fmt.Errorf("error sweeping Service Catalog Budget Resource Associations for %s: 
%w", region, err)) + } + + if testSweepSkipSweepError(errs.ErrorOrNil()) { + log.Printf("[WARN] Skipping Service Catalog Budget Resource Associations sweep for %s: %s", region, errs) + return nil + } + + return errs.ErrorOrNil() +} + +func TestAccAWSServiceCatalogBudgetResourceAssociation_basic(t *testing.T) { + resourceName := "aws_servicecatalog_budget_resource_association.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPartitionHasServicePreCheck("budgets", t) }, + ErrorCheck: testAccErrorCheck(t, servicecatalog.EndpointsID, "budgets"), + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsServiceCatalogBudgetResourceAssociationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSServiceCatalogBudgetResourceAssociationConfig_basic(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsServiceCatalogBudgetResourceAssociationExists(resourceName), + resource.TestCheckResourceAttrPair(resourceName, "resource_id", "aws_servicecatalog_portfolio.test", "id"), + resource.TestCheckResourceAttrPair(resourceName, "budget_name", "aws_budgets_budget.test", "name"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccAWSServiceCatalogBudgetResourceAssociation_disappears(t *testing.T) { + resourceName := "aws_servicecatalog_budget_resource_association.test" + rName := acctest.RandomWithPrefix("tf-acc-test") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPartitionHasServicePreCheck("budgets", t) }, + ErrorCheck: testAccErrorCheck(t, servicecatalog.EndpointsID, "budgets"), + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsServiceCatalogBudgetResourceAssociationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSServiceCatalogBudgetResourceAssociationConfig_basic(rName), + Check: 
resource.ComposeTestCheckFunc( + testAccCheckAwsServiceCatalogBudgetResourceAssociationExists(resourceName), + testAccCheckResourceDisappears(testAccProvider, resourceAwsServiceCatalogBudgetResourceAssociation(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccCheckAwsServiceCatalogBudgetResourceAssociationDestroy(s *terraform.State) error { + conn := testAccProvider.Meta().(*AWSClient).scconn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_servicecatalog_budget_resource_association" { + continue + } + + budgetName, resourceID, err := tfservicecatalog.BudgetResourceAssociationParseID(rs.Primary.ID) + + if err != nil { + return fmt.Errorf("could not parse ID (%s): %w", rs.Primary.ID, err) + } + + err = waiter.BudgetResourceAssociationDeleted(conn, budgetName, resourceID) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return fmt.Errorf("waiting for Service Catalog Budget Resource Association to be destroyed (%s): %w", rs.Primary.ID, err) + } + } + + return nil +} + +func testAccCheckAwsServiceCatalogBudgetResourceAssociationExists(resourceName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + + if !ok { + return fmt.Errorf("resource not found: %s", resourceName) + } + + budgetName, resourceID, err := tfservicecatalog.BudgetResourceAssociationParseID(rs.Primary.ID) + + if err != nil { + return fmt.Errorf("could not parse ID (%s): %w", rs.Primary.ID, err) + } + + conn := testAccProvider.Meta().(*AWSClient).scconn + + _, err = waiter.BudgetResourceAssociationReady(conn, budgetName, resourceID) + + if err != nil { + return fmt.Errorf("waiting for Service Catalog Budget Resource Association existence (%s): %w", rs.Primary.ID, err) + } + + return nil + } +} + +func testAccAWSServiceCatalogBudgetResourceAssociationConfig_base(rName, budgetType, limitAmount, limitUnit, timePeriodStart, timeUnit string) string { + return 
fmt.Sprintf(` +resource "aws_servicecatalog_portfolio" "test" { + name = %[1]q + description = %[1]q + provider_name = %[1]q +} + +resource "aws_budgets_budget" "test" { + name = %[1]q + budget_type = %[2]q + limit_amount = %[3]q + limit_unit = %[4]q + time_period_start = %[5]q + time_unit = %[6]q +} +`, rName, budgetType, limitAmount, limitUnit, timePeriodStart, timeUnit) +} + +func testAccAWSServiceCatalogBudgetResourceAssociationConfig_basic(rName string) string { + return composeConfig(testAccAWSServiceCatalogBudgetResourceAssociationConfig_base(rName, "COST", "100.0", "USD", "2017-01-01_12:00", "MONTHLY"), fmt.Sprintf(` +resource "aws_servicecatalog_budget_resource_association" "test" { + resource_id = aws_servicecatalog_portfolio.test.id + budget_name = %[1]q +} +`, rName)) +} diff --git a/aws/resource_aws_servicecatalog_constraint.go b/aws/resource_aws_servicecatalog_constraint.go index 6bc38d756ed..730a57020da 100644 --- a/aws/resource_aws_servicecatalog_constraint.go +++ b/aws/resource_aws_servicecatalog_constraint.go @@ -30,7 +30,7 @@ func resourceAwsServiceCatalogConstraint() *schema.Resource { "accept_language": { Type: schema.TypeString, Optional: true, - Default: "en", + Default: tfservicecatalog.AcceptLanguageEnglish, ValidateFunc: validation.StringInSlice(tfservicecatalog.AcceptLanguage_Values(), false), }, "description": { @@ -151,7 +151,7 @@ func resourceAwsServiceCatalogConstraintRead(d *schema.ResourceData, meta interf acceptLanguage := d.Get("accept_language").(string) if acceptLanguage == "" { - acceptLanguage = "en" + acceptLanguage = tfservicecatalog.AcceptLanguageEnglish } d.Set("accept_language", acceptLanguage) diff --git a/aws/resource_aws_servicecatalog_constraint_test.go b/aws/resource_aws_servicecatalog_constraint_test.go index 84237028eaa..135b8ff70fe 100644 --- a/aws/resource_aws_servicecatalog_constraint_test.go +++ b/aws/resource_aws_servicecatalog_constraint_test.go @@ -12,6 +12,7 @@ import ( 
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + tfservicecatalog "github.com/terraform-providers/terraform-provider-aws/aws/internal/service/servicecatalog" ) // add sweeper to delete known test servicecat constraints @@ -106,7 +107,7 @@ func TestAccAWSServiceCatalogConstraint_basic(t *testing.T) { Config: testAccAWSServiceCatalogConstraintConfig_basic(rName, rName), Check: resource.ComposeTestCheckFunc( testAccCheckAwsServiceCatalogConstraintExists(resourceName), - resource.TestCheckResourceAttr(resourceName, "accept_language", "en"), + resource.TestCheckResourceAttr(resourceName, "accept_language", tfservicecatalog.AcceptLanguageEnglish), resource.TestCheckResourceAttr(resourceName, "description", rName), resource.TestCheckResourceAttr(resourceName, "type", "NOTIFICATION"), resource.TestCheckResourceAttrPair(resourceName, "portfolio_id", "aws_servicecatalog_portfolio.test", "id"), @@ -240,31 +241,27 @@ resource "aws_s3_bucket_object" "test" { bucket = aws_s3_bucket.test.id key = "%[1]s.json" - content = < 0 { - action.Allow = &wafv2.AllowAction{} + action.Allow = expandWafv2AllowAction(v.([]interface{})) } if v, ok := m["block"]; ok && len(v.([]interface{})) > 0 { - action.Block = &wafv2.BlockAction{} + action.Block = expandWafv2BlockAction(v.([]interface{})) } return action @@ -808,11 +808,11 @@ func flattenWafv2DefaultAction(a *wafv2.DefaultAction) interface{} { m := map[string]interface{}{} if a.Allow != nil { - m["allow"] = make([]map[string]interface{}, 1) + m["allow"] = flattenWafv2Allow(a.Allow) } if a.Block != nil { - m["block"] = make([]map[string]interface{}, 1) + m["block"] = flattenWafv2Block(a.Block) } return []interface{}{m} diff --git a/aws/resource_aws_wafv2_web_acl_test.go b/aws/resource_aws_wafv2_web_acl_test.go index a8c29520c3c..7451f528b19 100644 --- a/aws/resource_aws_wafv2_web_acl_test.go +++ 
b/aws/resource_aws_wafv2_web_acl_test.go @@ -1134,6 +1134,174 @@ func TestAccAwsWafv2WebACL_RuleGroupReferenceStatement(t *testing.T) { }) } +func TestAccAwsWafv2WebACL_CustomRequestHandling(t *testing.T) { + var v wafv2.WebACL + webACLName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_wafv2_web_acl.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSWafv2ScopeRegional(t) }, + ErrorCheck: testAccErrorCheck(t, wafv2.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsWafv2WebACLDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsWafv2WebACLConfig_CustomRequestHandling_Allow(webACLName, "x-hdr1", "x-hdr2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsWafv2WebACLExists(resourceName, &v), + testAccMatchResourceAttrRegionalARN(resourceName, "arn", "wafv2", regexp.MustCompile(`regional/webacl/.+$`)), + resource.TestCheckResourceAttr(resourceName, "name", webACLName), + resource.TestCheckResourceAttr(resourceName, "default_action.#", "1"), + resource.TestCheckResourceAttr(resourceName, "default_action.0.allow.#", "1"), + resource.TestCheckResourceAttr(resourceName, "default_action.0.block.#", "0"), + resource.TestCheckResourceAttr(resourceName, "scope", "REGIONAL"), + resource.TestCheckResourceAttr(resourceName, "rule.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "rule.*", map[string]string{ + "name": "rule-1", + "action.#": "1", + "action.0.allow.#": "1", + "action.0.allow.0.custom_request_handling.#": "1", + "action.0.allow.0.custom_request_handling.0.insert_header.#": "2", + "action.0.allow.0.custom_request_handling.0.insert_header.0.name": "x-hdr1", + "action.0.allow.0.custom_request_handling.0.insert_header.0.value": "test-value-1", + "action.0.allow.0.custom_request_handling.0.insert_header.1.name": "x-hdr2", + "action.0.allow.0.custom_request_handling.0.insert_header.1.value": "test-value-2", + "action.0.block.#": 
"0", + "action.0.count.#": "0", + "priority": "1", + }), + resource.TestCheckResourceAttr(resourceName, "visibility_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "visibility_config.0.cloudwatch_metrics_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "visibility_config.0.metric_name", "friendly-metric-name"), + resource.TestCheckResourceAttr(resourceName, "visibility_config.0.sampled_requests_enabled", "false"), + ), + }, + { + Config: testAccAwsWafv2WebACLConfig_CustomRequestHandling_Count(webACLName, "x-hdr1", "x-hdr2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsWafv2WebACLExists(resourceName, &v), + testAccMatchResourceAttrRegionalARN(resourceName, "arn", "wafv2", regexp.MustCompile(`regional/webacl/.+$`)), + resource.TestCheckResourceAttr(resourceName, "name", webACLName), + resource.TestCheckResourceAttr(resourceName, "default_action.#", "1"), + resource.TestCheckResourceAttr(resourceName, "default_action.0.allow.#", "1"), + resource.TestCheckResourceAttr(resourceName, "default_action.0.block.#", "0"), + resource.TestCheckResourceAttr(resourceName, "scope", "REGIONAL"), + resource.TestCheckResourceAttr(resourceName, "rule.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "rule.*", map[string]string{ + "name": "rule-1", + "action.#": "1", + "action.0.allow.#": "0", + "action.0.block.#": "0", + "action.0.count.#": "1", + "action.0.count.0.custom_request_handling.#": "1", + "action.0.count.0.custom_request_handling.0.insert_header.#": "2", + "action.0.count.0.custom_request_handling.0.insert_header.0.name": "x-hdr1", + "action.0.count.0.custom_request_handling.0.insert_header.0.value": "test-value-1", + "action.0.count.0.custom_request_handling.0.insert_header.1.name": "x-hdr2", + "action.0.count.0.custom_request_handling.0.insert_header.1.value": "test-value-2", + "priority": "1", + }), + resource.TestCheckResourceAttr(resourceName, "visibility_config.#", "1"), + 
resource.TestCheckResourceAttr(resourceName, "visibility_config.0.cloudwatch_metrics_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "visibility_config.0.metric_name", "friendly-metric-name"), + resource.TestCheckResourceAttr(resourceName, "visibility_config.0.sampled_requests_enabled", "false"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateIdFunc: testAccAwsWafv2WebACLImportStateIdFunc(resourceName), + }, + }, + }) +} + +func TestAccAwsWafv2WebACL_CustomResponse(t *testing.T) { + var v wafv2.WebACL + webACLName := acctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_wafv2_web_acl.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { testAccPreCheck(t); testAccPreCheckAWSWafv2ScopeRegional(t) }, + ErrorCheck: testAccErrorCheck(t, wafv2.EndpointsID), + Providers: testAccProviders, + CheckDestroy: testAccCheckAwsWafv2WebACLDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAwsWafv2WebACLConfig_CustomResponse(webACLName, 401, 403, "x-hdr1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsWafv2WebACLExists(resourceName, &v), + testAccMatchResourceAttrRegionalARN(resourceName, "arn", "wafv2", regexp.MustCompile(`regional/webacl/.+$`)), + resource.TestCheckResourceAttr(resourceName, "name", webACLName), + resource.TestCheckResourceAttr(resourceName, "default_action.#", "1"), + resource.TestCheckResourceAttr(resourceName, "default_action.0.allow.#", "0"), + resource.TestCheckResourceAttr(resourceName, "default_action.0.block.#", "1"), + resource.TestCheckResourceAttr(resourceName, "default_action.0.block.0.custom_response.#", "1"), + resource.TestCheckResourceAttr(resourceName, "default_action.0.block.0.custom_response.0.response_code", "401"), + resource.TestCheckResourceAttr(resourceName, "scope", "REGIONAL"), + resource.TestCheckResourceAttr(resourceName, "rule.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "rule.*", 
map[string]string{ + "name": "rule-1", + "action.#": "1", + "action.0.allow.#": "0", + "action.0.block.#": "1", + "action.0.block.0.custom_response.#": "1", + "action.0.block.0.custom_response.0.response_code": "403", + "action.0.block.0.custom_response.0.response_header.#": "1", + "action.0.block.0.custom_response.0.response_header.0.name": "x-hdr1", + "action.0.block.0.custom_response.0.response_header.0.value": "custom-response-header-value", + "action.0.count.#": "0", + "priority": "1", + }), + resource.TestCheckResourceAttr(resourceName, "visibility_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "visibility_config.0.cloudwatch_metrics_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "visibility_config.0.metric_name", "friendly-metric-name"), + resource.TestCheckResourceAttr(resourceName, "visibility_config.0.sampled_requests_enabled", "false"), + ), + }, + { + Config: testAccAwsWafv2WebACLConfig_CustomResponse(webACLName, 404, 429, "x-hdr2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckAwsWafv2WebACLExists(resourceName, &v), + testAccMatchResourceAttrRegionalARN(resourceName, "arn", "wafv2", regexp.MustCompile(`regional/webacl/.+$`)), + resource.TestCheckResourceAttr(resourceName, "name", webACLName), + resource.TestCheckResourceAttr(resourceName, "default_action.#", "1"), + resource.TestCheckResourceAttr(resourceName, "default_action.0.allow.#", "0"), + resource.TestCheckResourceAttr(resourceName, "default_action.0.block.#", "1"), + resource.TestCheckResourceAttr(resourceName, "default_action.0.block.0.custom_response.#", "1"), + resource.TestCheckResourceAttr(resourceName, "default_action.0.block.0.custom_response.0.response_code", "404"), + resource.TestCheckResourceAttr(resourceName, "scope", "REGIONAL"), + resource.TestCheckResourceAttr(resourceName, "rule.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "rule.*", map[string]string{ + "name": "rule-1", + "action.#": "1", + "action.0.allow.#": 
"0", + "action.0.block.#": "1", + "action.0.block.0.custom_response.#": "1", + "action.0.block.0.custom_response.0.response_code": "429", + "action.0.block.0.custom_response.0.response_header.#": "1", + "action.0.block.0.custom_response.0.response_header.0.name": "x-hdr2", + "action.0.block.0.custom_response.0.response_header.0.value": "custom-response-header-value", + "action.0.count.#": "0", + "priority": "1", + }), + resource.TestCheckResourceAttr(resourceName, "visibility_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "visibility_config.0.cloudwatch_metrics_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "visibility_config.0.metric_name", "friendly-metric-name"), + resource.TestCheckResourceAttr(resourceName, "visibility_config.0.sampled_requests_enabled", "false"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateIdFunc: testAccAwsWafv2WebACLImportStateIdFunc(resourceName), + }, + }, + }) +} + func TestAccAwsWafv2WebACL_Tags(t *testing.T) { var v wafv2.WebACL webACLName := acctest.RandomWithPrefix("tf-acc-test") @@ -1538,6 +1706,166 @@ resource "aws_wafv2_web_acl" "test" { `, name, countryCodes) } +func testAccAwsWafv2WebACLConfig_CustomRequestHandling_Count(name, firstHeader string, secondHeader string) string { + return fmt.Sprintf(` +resource "aws_wafv2_web_acl" "test" { + name = "%[1]s" + description = "%[1]s" + scope = "REGIONAL" + + default_action { + allow {} + } + + rule { + name = "rule-1" + priority = 1 + + action { + count { + custom_request_handling { + insert_header { + name = "%[2]s" + value = "test-value-1" + } + + insert_header { + name = "%[3]s" + value = "test-value-2" + } + } + } + } + + statement { + geo_match_statement { + country_codes = ["US", "CA"] + } + } + + visibility_config { + cloudwatch_metrics_enabled = false + metric_name = "friendly-rule-metric-name" + sampled_requests_enabled = false + } + } + + visibility_config { + 
cloudwatch_metrics_enabled = false + metric_name = "friendly-metric-name" + sampled_requests_enabled = false + } +} +`, name, firstHeader, secondHeader) +} + +func testAccAwsWafv2WebACLConfig_CustomRequestHandling_Allow(name, firstHeader string, secondHeader string) string { + return fmt.Sprintf(` +resource "aws_wafv2_web_acl" "test" { + name = "%[1]s" + description = "%[1]s" + scope = "REGIONAL" + + default_action { + allow {} + } + + rule { + name = "rule-1" + priority = 1 + + action { + allow { + custom_request_handling { + insert_header { + name = "%[2]s" + value = "test-value-1" + } + + insert_header { + name = "%[3]s" + value = "test-value-2" + } + } + } + } + + statement { + geo_match_statement { + country_codes = ["US", "CA"] + } + } + + visibility_config { + cloudwatch_metrics_enabled = false + metric_name = "friendly-rule-metric-name" + sampled_requests_enabled = false + } + } + + visibility_config { + cloudwatch_metrics_enabled = false + metric_name = "friendly-metric-name" + sampled_requests_enabled = false + } +} +`, name, firstHeader, secondHeader) +} + +func testAccAwsWafv2WebACLConfig_CustomResponse(name string, defaultStatusCode int, countryBlockStatusCode int, countryHeaderName string) string { + return fmt.Sprintf(` +resource "aws_wafv2_web_acl" "test" { + name = "%[1]s" + description = "%[1]s" + scope = "REGIONAL" + + default_action { + block { + custom_response { + response_code = %[2]d + } + } + } + + rule { + name = "rule-1" + priority = 1 + + action { + block { + custom_response { + response_code = %[3]d + + response_header { + name = "%[4]s" + value = "custom-response-header-value" + } + } + } + } + + statement { + geo_match_statement { + country_codes = ["US", "CA"] + } + } + + visibility_config { + cloudwatch_metrics_enabled = false + metric_name = "friendly-rule-metric-name" + sampled_requests_enabled = false + } + } + + visibility_config { + cloudwatch_metrics_enabled = false + metric_name = "friendly-metric-name" + 
sampled_requests_enabled = false + } +} +`, name, defaultStatusCode, countryBlockStatusCode, countryHeaderName) +} + func testAccAwsWafv2WebACLConfig_GeoMatchStatement_ForwardedIPConfig(name, fallbackBehavior, headerName string) string { return fmt.Sprintf(` resource "aws_wafv2_web_acl" "test" { diff --git a/aws/structure.go b/aws/structure.go index f515bceca76..eac526c4f6f 100644 --- a/aws/structure.go +++ b/aws/structure.go @@ -1635,45 +1635,6 @@ func flattenDSVpcSettings( return []map[string]interface{}{settings} } -func expandLambdaEventSourceMappingDestinationConfig(vDest []interface{}) *lambda.DestinationConfig { - if len(vDest) == 0 { - return nil - } - - dest := &lambda.DestinationConfig{} - onFailure := &lambda.OnFailure{} - - if len(vDest) > 0 { - if config, ok := vDest[0].(map[string]interface{}); ok { - if vOnFailure, ok := config["on_failure"].([]interface{}); ok && len(vOnFailure) > 0 && vOnFailure[0] != nil { - mOnFailure := vOnFailure[0].(map[string]interface{}) - onFailure.SetDestination(mOnFailure["destination_arn"].(string)) - } - } - } - dest.SetOnFailure(onFailure) - return dest -} - -func flattenLambdaEventSourceMappingDestinationConfig(dest *lambda.DestinationConfig) []interface{} { - mDest := map[string]interface{}{} - mOnFailure := map[string]interface{}{} - if dest != nil { - if dest.OnFailure != nil { - if dest.OnFailure.Destination != nil { - mOnFailure["destination_arn"] = *dest.OnFailure.Destination - mDest["on_failure"] = []interface{}{mOnFailure} - } - } - } - - if len(mDest) == 0 { - return nil - } - - return []interface{}{mDest} -} - func flattenLambdaLayers(layers []*lambda.Layer) []interface{} { arns := make([]*string, len(layers)) for i, layer := range layers { @@ -1919,6 +1880,10 @@ func expandCloudWatchLogMetricTransformations(m map[string]interface{}) []*cloud transformation.DefaultValue = aws.Float64(value) } + if dims := m["dimensions"].(map[string]interface{}); len(dims) > 0 { + transformation.Dimensions = 
expandStringMap(dims) + } + return []*cloudwatchlogs.MetricTransformation{&transformation} } @@ -1936,6 +1901,12 @@ func flattenCloudWatchLogMetricTransformations(ts []*cloudwatchlogs.MetricTransf m["default_value"] = strconv.FormatFloat(aws.Float64Value(ts[0].DefaultValue), 'f', -1, 64) } + if dims := ts[0].Dimensions; len(dims) > 0 { + m["dimensions"] = pointersMapToStringList(dims) + } else { + m["dimensions"] = nil + } + mts = append(mts, m) return mts diff --git a/aws/wafv2_helper.go b/aws/wafv2_helper.go index 32f12615529..28107047530 100644 --- a/aws/wafv2_helper.go +++ b/aws/wafv2_helper.go @@ -417,6 +417,117 @@ func wafv2VisibilityConfigSchema() *schema.Schema { } } +func wafv2AllowConfigSchema() *schema.Schema { + return &schema.Schema{ + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "custom_request_handling": wafv2CustomRequestHandlingSchema(), + }, + }, + } +} + +func wafv2CountConfigSchema() *schema.Schema { + return &schema.Schema{ + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "custom_request_handling": wafv2CustomRequestHandlingSchema(), + }, + }, + } +} + +func wafv2BlockConfigSchema() *schema.Schema { + return &schema.Schema{ + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "custom_response": wafv2CustomResponseSchema(), + }, + }, + } +} + +func wafv2CustomRequestHandlingSchema() *schema.Schema { + return &schema.Schema{ + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "insert_header": { + Type: schema.TypeSet, + Required: true, + MinItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 64), + 
validation.StringMatch(regexp.MustCompile(`^[a-zA-Z0-9._$-]+$`), "must contain only alphanumeric, hyphen, underscore, dot and $ characters"), + ), + }, + "value": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 255), + }, + }, + }, + }, + }, + }, + } +} + +func wafv2CustomResponseSchema() *schema.Schema { + return &schema.Schema{ + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "response_code": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntBetween(200, 600), + }, + "response_header": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.All( + validation.StringLenBetween(1, 64), + validation.StringMatch(regexp.MustCompile(`^[a-zA-Z0-9._$-]+$`), "must contain only alphanumeric, hyphen, underscore, dot and $ characters"), + ), + }, + "value": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(1, 255), + }, + }, + }, + }, + }, + }, + } +} + func expandWafv2Rules(l []interface{}) []*wafv2.Rule { if len(l) == 0 || l[0] == nil { return nil @@ -457,20 +568,135 @@ func expandWafv2RuleAction(l []interface{}) *wafv2.RuleAction { action := &wafv2.RuleAction{} if v, ok := m["allow"]; ok && len(v.([]interface{})) > 0 { - action.Allow = &wafv2.AllowAction{} + action.Allow = expandWafv2AllowAction(v.([]interface{})) } if v, ok := m["block"]; ok && len(v.([]interface{})) > 0 { - action.Block = &wafv2.BlockAction{} + action.Block = expandWafv2BlockAction(v.([]interface{})) } if v, ok := m["count"]; ok && len(v.([]interface{})) > 0 { - action.Count = &wafv2.CountAction{} + action.Count = expandWafv2CountAction(v.([]interface{})) + } + + return action +} + +func expandWafv2AllowAction(l []interface{}) *wafv2.AllowAction { + action := &wafv2.AllowAction{} + + if 
len(l) == 0 || l[0] == nil { + return action + } + + m, ok := l[0].(map[string]interface{}) + if !ok { + return action + } + + if v, ok := m["custom_request_handling"].([]interface{}); ok && len(v) > 0 { + action.CustomRequestHandling = expandWafv2CustomRequestHandling(v) + } + + return action +} + +func expandWafv2CountAction(l []interface{}) *wafv2.CountAction { + action := &wafv2.CountAction{} + + if len(l) == 0 || l[0] == nil { + return action + } + + m, ok := l[0].(map[string]interface{}) + if !ok { + return action + } + + if v, ok := m["custom_request_handling"].([]interface{}); ok && len(v) > 0 { + action.CustomRequestHandling = expandWafv2CustomRequestHandling(v) } return action } +func expandWafv2BlockAction(l []interface{}) *wafv2.BlockAction { + action := &wafv2.BlockAction{} + + if len(l) == 0 || l[0] == nil { + return action + } + + m, ok := l[0].(map[string]interface{}) + if !ok { + return action + } + + if v, ok := m["custom_response"].([]interface{}); ok && len(v) > 0 { + action.CustomResponse = expandWafv2CustomResponse(v) + } + + return action +} + +func expandWafv2CustomResponse(l []interface{}) *wafv2.CustomResponse { + if len(l) == 0 || l[0] == nil { + return nil + } + + m, ok := l[0].(map[string]interface{}) + if !ok { + return nil + } + + customResponse := &wafv2.CustomResponse{} + + if v, ok := m["response_code"].(int); ok && v > 0 { + customResponse.ResponseCode = aws.Int64(int64(v)) + } + if v, ok := m["response_header"].(*schema.Set); ok && len(v.List()) > 0 { + customResponse.ResponseHeaders = expandWafv2CustomHeaders(v.List()) + } + + return customResponse +} + +func expandWafv2CustomRequestHandling(l []interface{}) *wafv2.CustomRequestHandling { + if len(l) == 0 || l[0] == nil { + return nil + } + + m := l[0].(map[string]interface{}) + requestHandling := &wafv2.CustomRequestHandling{} + + if v, ok := m["insert_header"].(*schema.Set); ok && len(v.List()) > 0 { + requestHandling.InsertHeaders = expandWafv2CustomHeaders(v.List()) + } + + 
return requestHandling +} + +func expandWafv2CustomHeaders(l []interface{}) []*wafv2.CustomHTTPHeader { + if len(l) == 0 || l[0] == nil { + return nil + } + + headers := make([]*wafv2.CustomHTTPHeader, 0) + + for _, header := range l { + if header == nil { + continue + } + m := header.(map[string]interface{}) + headers = append(headers, &wafv2.CustomHTTPHeader{ + Name: aws.String(m["name"].(string)), + Value: aws.String(m["value"].(string)), + }) + } + + return headers +} + func expandWafv2VisibilityConfig(l []interface{}) *wafv2.VisibilityConfig { if len(l) == 0 || l[0] == nil { return nil @@ -862,20 +1088,107 @@ func flattenWafv2RuleAction(a *wafv2.RuleAction) interface{} { m := map[string]interface{}{} if a.Allow != nil { - m["allow"] = make([]map[string]interface{}, 1) + m["allow"] = flattenWafv2Allow(a.Allow) } if a.Block != nil { - m["block"] = make([]map[string]interface{}, 1) + m["block"] = flattenWafv2Block(a.Block) } if a.Count != nil { - m["count"] = make([]map[string]interface{}, 1) + m["count"] = flattenWafv2Count(a.Count) + } + + return []interface{}{m} +} + +func flattenWafv2Allow(a *wafv2.AllowAction) []interface{} { + if a == nil { + return []interface{}{} + } + m := map[string]interface{}{} + + if a.CustomRequestHandling != nil { + m["custom_request_handling"] = flattenWafv2CustomRequestHandling(a.CustomRequestHandling) } return []interface{}{m} } +func flattenWafv2Block(a *wafv2.BlockAction) []interface{} { + if a == nil { + return []interface{}{} + } + + m := map[string]interface{}{} + + if a.CustomResponse != nil { + m["custom_response"] = flattenWafv2CustomResponse(a.CustomResponse) + } + + return []interface{}{m} +} + +func flattenWafv2Count(a *wafv2.CountAction) []interface{} { + if a == nil { + return []interface{}{} + } + m := map[string]interface{}{} + + if a.CustomRequestHandling != nil { + m["custom_request_handling"] = flattenWafv2CustomRequestHandling(a.CustomRequestHandling) + } + + return []interface{}{m} +} + +func 
flattenWafv2CustomRequestHandling(c *wafv2.CustomRequestHandling) []interface{} { + if c == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "insert_header": flattenWafv2CustomHeaders(c.InsertHeaders), + } + + return []interface{}{m} +} + +func flattenWafv2CustomResponse(r *wafv2.CustomResponse) []interface{} { + if r == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "response_code": int(aws.Int64Value(r.ResponseCode)), + "response_header": flattenWafv2CustomHeaders(r.ResponseHeaders), + } + + return []interface{}{m} +} + +func flattenWafv2CustomHeaders(h []*wafv2.CustomHTTPHeader) []interface{} { + out := make([]interface{}, len(h)) + for i, header := range h { + out[i] = flattenWafv2CustomHeader(header) + } + + return out +} + +func flattenWafv2CustomHeader(h *wafv2.CustomHTTPHeader) map[string]interface{} { + if h == nil { + return map[string]interface{}{} + } + + m := map[string]interface{}{ + "name": aws.StringValue(h.Name), + "value": aws.StringValue(h.Value), + } + + return m +} + func flattenWafv2RootStatement(s *wafv2.Statement) interface{} { if s == nil { return []interface{}{} diff --git a/awsproviderlint/go.mod b/awsproviderlint/go.mod index 1ef8ab220aa..9ef40ef59bc 100644 --- a/awsproviderlint/go.mod +++ b/awsproviderlint/go.mod @@ -3,7 +3,7 @@ module github.com/terraform-providers/terraform-provider-aws/awsproviderlint go 1.16 require ( - github.com/aws/aws-sdk-go v1.38.42 + github.com/aws/aws-sdk-go v1.38.53 github.com/bflad/tfproviderlint v0.26.0 github.com/hashicorp/terraform-plugin-sdk/v2 v2.6.1 golang.org/x/tools v0.0.0-20201028111035-eafbe7b904eb diff --git a/awsproviderlint/go.sum b/awsproviderlint/go.sum index 7cad4d483ac..bbdc129029c 100644 --- a/awsproviderlint/go.sum +++ b/awsproviderlint/go.sum @@ -66,8 +66,8 @@ github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkY github.com/aws/aws-sdk-go v1.15.78/go.mod h1:E3/ieXAlvM0XWO57iftYVDLLvQ824smPP3ATZkfNZeM= 
github.com/aws/aws-sdk-go v1.25.3/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo= github.com/aws/aws-sdk-go v1.37.0/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro= -github.com/aws/aws-sdk-go v1.38.42 h1:94blpbGDe2q5e0Xoop7131uzI2CH2qitQoptSMrkJP8= -github.com/aws/aws-sdk-go v1.38.42/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro= +github.com/aws/aws-sdk-go v1.38.53 h1:Qj5OvKPrDGTiCnWj+kwQXAlBO6OaFBH/WaRzJPZPg3w= +github.com/aws/aws-sdk-go v1.38.53/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro= github.com/bflad/gopaniccheck v0.1.0 h1:tJftp+bv42ouERmUMWLoUn/5bi/iQZjHPznM00cP/bU= github.com/bflad/gopaniccheck v0.1.0/go.mod h1:ZCj2vSr7EqVeDaqVsWN4n2MwdROx1YL+LFo47TSWtsA= github.com/bflad/tfproviderlint v0.26.0 h1:Xd+hbVlSQhKlXifpqmHPvlcnOK1lRS4IZf+cXBAUpCs= diff --git a/awsproviderlint/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go b/awsproviderlint/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go index 060bff6c82b..9d1266d5d4c 100644 --- a/awsproviderlint/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go +++ b/awsproviderlint/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go @@ -837,6 +837,16 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, + "apprunner": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, "appstream2": service{ Defaults: endpoint{ Protocols: []string{"https"}, @@ -2857,6 +2867,7 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-1": endpoint{}, @@ -3176,9 +3187,27 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-2": endpoint{}, + "fips-us-east-1": endpoint{ + 
Hostname: "forecast-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "forecast-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "forecast-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, }, }, "forecastquery": service{ @@ -3191,9 +3220,27 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-2": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "forecastquery-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "forecastquery-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "forecastquery-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, }, }, "fsx": service{ @@ -4084,6 +4131,7 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, @@ -5059,6 +5107,7 @@ var awsPartition = partition{ "ap-northeast-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-2": endpoint{}, "us-east-1": endpoint{}, @@ -5086,9 +5135,27 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": 
endpoint{}, - "us-west-2": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "qldb-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "qldb-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "qldb-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, }, }, "ram": service{ @@ -5405,6 +5472,7 @@ var awsPartition = partition{ "ap-east-1": endpoint{}, "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, + "ap-northeast-3": endpoint{}, "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, @@ -6106,6 +6174,61 @@ var awsPartition = partition{ }, }, }, + "servicecatalog-appregistry": service{ + + Endpoints: endpoints{ + "af-south-1": endpoint{}, + "ap-east-1": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-north-1": endpoint{}, + "eu-south-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "fips-ca-central-1": endpoint{ + Hostname: "servicecatalog-appregistry-fips.ca-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "ca-central-1", + }, + }, + "fips-us-east-1": endpoint{ + Hostname: "servicecatalog-appregistry-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "servicecatalog-appregistry-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-1": endpoint{ + Hostname: "servicecatalog-appregistry-fips.us-west-1.amazonaws.com", + 
CredentialScope: credentialScope{ + Region: "us-west-1", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "servicecatalog-appregistry-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "me-south-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, "servicediscovery": service{ Endpoints: endpoints{ @@ -6174,9 +6297,27 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-2": endpoint{}, + "fips-us-east-1": endpoint{ + Hostname: "session.qldb-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + }, + "fips-us-east-2": endpoint{ + Hostname: "session.qldb-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-2", + }, + }, + "fips-us-west-2": endpoint{ + Hostname: "session.qldb-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + }, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, }, }, "shield": service{ @@ -9813,6 +9954,25 @@ var awsusgovPartition = partition{ }, }, }, + "servicecatalog-appregistry": service{ + + Endpoints: endpoints{ + "fips-us-gov-east-1": endpoint{ + Hostname: "servicecatalog-appregistry.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + }, + "fips-us-gov-west-1": endpoint{ + Hostname: "servicecatalog-appregistry.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, + "us-gov-east-1": endpoint{}, + "us-gov-west-1": endpoint{}, + }, + }, "servicequotas": service{ Defaults: endpoint{ Protocols: []string{"https"}, @@ -10452,6 +10612,12 @@ var awsisoPartition = partition{ "us-iso-east-1": endpoint{}, }, }, + "ram": service{ 
+ + Endpoints: endpoints{ + "us-iso-east-1": endpoint{}, + }, + }, "rds": service{ Endpoints: endpoints{ diff --git a/awsproviderlint/vendor/github.com/aws/aws-sdk-go/aws/request/request.go b/awsproviderlint/vendor/github.com/aws/aws-sdk-go/aws/request/request.go index d597c6ead55..fb0a68fce3e 100644 --- a/awsproviderlint/vendor/github.com/aws/aws-sdk-go/aws/request/request.go +++ b/awsproviderlint/vendor/github.com/aws/aws-sdk-go/aws/request/request.go @@ -129,12 +129,27 @@ func New(cfg aws.Config, clientInfo metadata.ClientInfo, handlers Handlers, httpReq, _ := http.NewRequest(method, "", nil) var err error - httpReq.URL, err = url.Parse(clientInfo.Endpoint + operation.HTTPPath) + httpReq.URL, err = url.Parse(clientInfo.Endpoint) if err != nil { httpReq.URL = &url.URL{} err = awserr.New("InvalidEndpointURL", "invalid endpoint uri", err) } + if len(operation.HTTPPath) != 0 { + opHTTPPath := operation.HTTPPath + var opQueryString string + if idx := strings.Index(opHTTPPath, "?"); idx >= 0 { + opQueryString = opHTTPPath[idx+1:] + opHTTPPath = opHTTPPath[:idx] + } + + if strings.HasSuffix(httpReq.URL.Path, "/") && strings.HasPrefix(opHTTPPath, "/") { + opHTTPPath = opHTTPPath[1:] + } + httpReq.URL.Path += opHTTPPath + httpReq.URL.RawQuery = opQueryString + } + r := &Request{ Config: cfg, ClientInfo: clientInfo, diff --git a/awsproviderlint/vendor/github.com/aws/aws-sdk-go/aws/version.go b/awsproviderlint/vendor/github.com/aws/aws-sdk-go/aws/version.go index 7573425b44d..7d0c72aea7b 100644 --- a/awsproviderlint/vendor/github.com/aws/aws-sdk-go/aws/version.go +++ b/awsproviderlint/vendor/github.com/aws/aws-sdk-go/aws/version.go @@ -5,4 +5,4 @@ package aws const SDKName = "aws-sdk-go" // SDKVersion is the version of this SDK -const SDKVersion = "1.38.42" +const SDKVersion = "1.38.53" diff --git a/awsproviderlint/vendor/github.com/aws/aws-sdk-go/service/s3/api.go b/awsproviderlint/vendor/github.com/aws/aws-sdk-go/service/s3/api.go index 6d15bad28f7..ebdd5f616e9 100644 
--- a/awsproviderlint/vendor/github.com/aws/aws-sdk-go/service/s3/api.go +++ b/awsproviderlint/vendor/github.com/aws/aws-sdk-go/service/s3/api.go @@ -356,9 +356,8 @@ func (c *S3) CopyObjectRequest(input *CopyObjectInput) (req *request.Request, ou // use the s3:x-amz-metadata-directive condition key to enforce certain metadata // behavior when objects are uploaded. For more information, see Specifying // Conditions in a Policy (https://docs.aws.amazon.com/AmazonS3/latest/dev/amazon-s3-policy-keys.html) -// in the Amazon S3 Developer Guide. For a complete list of Amazon S3-specific -// condition keys, see Actions, Resources, and Condition Keys for Amazon S3 -// (https://docs.aws.amazon.com/AmazonS3/latest/dev/list_amazons3.html). +// in the Amazon S3 User Guide. For a complete list of Amazon S3-specific condition +// keys, see Actions, Resources, and Condition Keys for Amazon S3 (https://docs.aws.amazon.com/AmazonS3/latest/dev/list_amazons3.html). // // x-amz-copy-source-if Headers // @@ -422,7 +421,7 @@ func (c *S3) CopyObjectRequest(input *CopyObjectInput) (req *request.Request, ou // You can use the CopyObject action to change the storage class of an object // that is already stored in Amazon S3 using the StorageClass parameter. For // more information, see Storage Classes (https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) -// in the Amazon S3 Service Developer Guide. +// in the Amazon S3 User Guide. // // Versioning // @@ -535,7 +534,7 @@ func (c *S3) CreateBucketRequest(input *CreateBucketInput) (req *request.Request // become the bucket owner. // // Not every string is an acceptable bucket name. For information about bucket -// naming restrictions, see Working with Amazon S3 buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html). +// naming restrictions, see Bucket naming rules (https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html). 
// // If you want to create an Amazon S3 on Outposts bucket, see Create Bucket // (https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateBucket.html). @@ -723,10 +722,11 @@ func (c *S3) CreateMultipartUploadRequest(input *CreateMultipartUploadInput) (re // by using CreateMultipartUpload. // // To perform a multipart upload with encryption using an AWS KMS CMK, the requester -// must have permission to the kms:Encrypt, kms:Decrypt, kms:ReEncrypt*, kms:GenerateDataKey*, -// and kms:DescribeKey actions on the key. These permissions are required because -// Amazon S3 must decrypt and read data from the encrypted file parts before -// it completes the multipart upload. +// must have permission to the kms:Decrypt and kms:GenerateDataKey* actions +// on the key. These permissions are required because Amazon S3 must decrypt +// and read data from the encrypted file parts before it completes the multipart +// upload. For more information, see Multipart upload API and permissions (https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html#mpuAndPermissions) +// in the Amazon S3 User Guide. // // If your AWS Identity and Access Management (IAM) user or role is in the same // AWS account as the AWS KMS CMK, then you must have these permissions on the @@ -1835,7 +1835,7 @@ func (c *S3) DeleteBucketReplicationRequest(input *DeleteBucketReplicationInput) // propagate. // // For information about replication configuration, see Replication (https://docs.aws.amazon.com/AmazonS3/latest/dev/replication.html) -// in the Amazon S3 Developer Guide. +// in the Amazon S3 User Guide. // // The following operations are related to DeleteBucketReplication: // @@ -6497,12 +6497,13 @@ func (c *S3) ListObjectsV2Request(input *ListObjectsV2Input) (req *request.Reque // ListObjectsV2 API operation for Amazon Simple Storage Service. // -// Returns some or all (up to 1,000) of the objects in a bucket. 
You can use -// the request parameters as selection criteria to return a subset of the objects -// in a bucket. A 200 OK response can contain valid or invalid XML. Make sure -// to design your application to parse the contents of the response and handle -// it appropriately. Objects are returned sorted in an ascending order of the -// respective key names in the list. +// Returns some or all (up to 1,000) of the objects in a bucket with each request. +// You can use the request parameters as selection criteria to return a subset +// of the objects in a bucket. A 200 OK response can contain valid or invalid +// XML. Make sure to design your application to parse the contents of the response +// and handle it appropriately. Objects are returned sorted in an ascending +// order of the respective key names in the list. For more information about +// listing objects, see Listing object keys programmatically (https://docs.aws.amazon.com/AmazonS3/latest/userguide/ListingKeysUsingAPIs.html) // // To use this operation, you must have READ access to the bucket. // @@ -7816,7 +7817,7 @@ func (c *S3) PutBucketLifecycleConfigurationRequest(input *PutBucketLifecycleCon // // Creates a new lifecycle configuration for the bucket or replaces an existing // lifecycle configuration. For information about lifecycle configuration, see -// Managing Access Permissions to Your Amazon S3 Resources (https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-access-control.html). +// Managing your storage lifecycle (https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html). // // Bucket lifecycle configuration now supports specifying a lifecycle rule using // an object key name prefix, one or more object tags, or a combination of both. @@ -8587,7 +8588,7 @@ func (c *S3) PutBucketReplicationRequest(input *PutBucketReplicationInput) (req // // Creates a replication configuration or replaces an existing one. 
For more // information, see Replication (https://docs.aws.amazon.com/AmazonS3/latest/dev/replication.html) -// in the Amazon S3 Developer Guide. +// in the Amazon S3 User Guide. // // To perform this operation, the user or role performing the action must have // the iam:PassRole (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) @@ -9229,7 +9230,7 @@ func (c *S3) PutObjectRequest(input *PutObjectInput) (req *request.Request, outp // Depending on performance needs, you can specify a different Storage Class. // Amazon S3 on Outposts only uses the OUTPOSTS Storage Class. For more information, // see Storage Classes (https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) -// in the Amazon S3 Service Developer Guide. +// in the Amazon S3 User Guide. // // Versioning // @@ -9339,7 +9340,7 @@ func (c *S3) PutObjectAclRequest(input *PutObjectAclInput) (req *request.Request // have an existing application that updates a bucket ACL using the request // body, you can continue to use that approach. For more information, see Access // Control List (ACL) Overview (https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html) -// in the Amazon S3 Developer Guide. +// in the Amazon S3 User Guide. // // Access Permissions // @@ -10997,7 +10998,7 @@ type AbortMultipartUploadInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. 
// // When using this action with Amazon S3 on Outposts, you must direct requests @@ -11025,7 +11026,7 @@ type AbortMultipartUploadInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Upload ID that identifies the multipart upload. @@ -11242,7 +11243,7 @@ type AccessControlTranslation struct { // Specifies the replica ownership. For default and valid values, see PUT bucket // replication (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTreplication.html) - // in the Amazon Simple Storage Service API Reference. + // in the Amazon S3 API Reference. // // Owner is a required field Owner *string `type:"string" required:"true" enum:"OwnerOverride"` @@ -11693,7 +11694,7 @@ type BucketLoggingStatus struct { // Describes where logs are stored and the prefix that Amazon S3 assigns to // all log object keys for a bucket. For more information, see PUT Bucket logging // (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlogging.html) - // in the Amazon Simple Storage Service API Reference. + // in the Amazon S3 API Reference. LoggingEnabled *LoggingEnabled `type:"structure"` } @@ -12168,7 +12169,7 @@ type CompleteMultipartUploadInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. 
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // ID for the initiated multipart upload. @@ -12291,7 +12292,7 @@ type CompleteMultipartUploadOutput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -12577,7 +12578,7 @@ type CopyObjectInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -12735,7 +12736,7 @@ type CopyObjectInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. 
+ // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Specifies the algorithm to use to when encrypting the object (for example, @@ -12764,7 +12765,7 @@ type CopyObjectInput struct { // or using SigV4. For information about configuring using any of the officially // supported AWS SDKs and AWS CLI, see Specifying the Signature Version in Request // Authentication (https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string" sensitive:"true"` // The server-side encryption algorithm used when storing this object in Amazon @@ -12776,7 +12777,7 @@ type CopyObjectInput struct { // Depending on performance needs, you can specify a different Storage Class. // Amazon S3 on Outposts only uses the OUTPOSTS Storage Class. For more information, // see Storage Classes (https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) - // in the Amazon S3 Service Developer Guide. + // in the Amazon S3 User Guide. StorageClass *string `location:"header" locationName:"x-amz-storage-class" type:"string" enum:"StorageClass"` // The tag-set for the object destination object this value must be used in @@ -13358,7 +13359,10 @@ type CreateBucketInput struct { // Allows grantee to read the bucket ACL. GrantReadACP *string `location:"header" locationName:"x-amz-grant-read-acp" type:"string"` - // Allows grantee to create, overwrite, and delete any object in the bucket. + // Allows grantee to create new objects in the bucket. + // + // For the bucket and object owners of existing objects, also allows deletions + // and overwrites of those objects. 
GrantWrite *string `location:"header" locationName:"x-amz-grant-write" type:"string"` // Allows grantee to write the ACL for the applicable bucket. @@ -13494,7 +13498,7 @@ type CreateMultipartUploadInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -13583,7 +13587,7 @@ type CreateMultipartUploadInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Specifies the algorithm to use to when encrypting the object (for example, @@ -13612,7 +13616,7 @@ type CreateMultipartUploadInput struct { // KMS will fail if not made via SSL or using SigV4. For information about configuring // using any of the officially supported AWS SDKs and AWS CLI, see Specifying // the Signature Version in Request Authentication (https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. 
SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string" sensitive:"true"` // The server-side encryption algorithm used when storing this object in Amazon @@ -13624,7 +13628,7 @@ type CreateMultipartUploadInput struct { // Depending on performance needs, you can specify a different Storage Class. // Amazon S3 on Outposts only uses the OUTPOSTS Storage Class. For more information, // see Storage Classes (https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) - // in the Amazon S3 Service Developer Guide. + // in the Amazon S3 User Guide. StorageClass *string `location:"header" locationName:"x-amz-storage-class" type:"string" enum:"StorageClass"` // The tag-set for the object. The tag-set must be encoded as URL Query parameters. @@ -13908,7 +13912,7 @@ type CreateMultipartUploadOutput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -15613,7 +15617,7 @@ type DeleteObjectInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. 
For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -15651,7 +15655,7 @@ type DeleteObjectInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // VersionId used to reference a specific version of the object. @@ -15819,7 +15823,7 @@ type DeleteObjectTaggingInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -15970,7 +15974,7 @@ type DeleteObjectsInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. 
// When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -16009,7 +16013,7 @@ type DeleteObjectsInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` } @@ -16333,7 +16337,7 @@ type Destination struct { // the destination bucket by specifying the AccessControlTranslation property, // this is the account ID of the destination bucket owner. For more information, // see Replication Additional Configuration: Changing the Replica Owner (https://docs.aws.amazon.com/AmazonS3/latest/dev/replication-change-owner.html) - // in the Amazon Simple Storage Service Developer Guide. + // in the Amazon S3 User Guide. Account *string `type:"string"` // The Amazon Resource Name (ARN) of the bucket where you want Amazon S3 to @@ -16361,7 +16365,7 @@ type Destination struct { // // For valid values, see the StorageClass element of the PUT Bucket replication // (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTreplication.html) - // action in the Amazon Simple Storage Service API Reference. + // action in the Amazon S3 API Reference. 
StorageClass *string `type:"string" enum:"StorageClass"` } @@ -16468,8 +16472,8 @@ type Encryption struct { // If the encryption type is aws:kms, this optional value specifies the ID of // the symmetric customer managed AWS KMS CMK to use for encryption of job results. - // Amazon S3 only supports symmetric CMKs. For more information, see Using Symmetric - // and Asymmetric Keys (https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html) + // Amazon S3 only supports symmetric CMKs. For more information, see Using symmetric + // and asymmetric keys (https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html) // in the AWS Key Management Service Developer Guide. KMSKeyId *string `type:"string" sensitive:"true"` } @@ -16520,11 +16524,11 @@ func (s *Encryption) SetKMSKeyId(v string) *Encryption { type EncryptionConfiguration struct { _ struct{} `type:"structure"` - // Specifies the ID (Key ARN or Alias ARN) of the customer managed customer - // master key (CMK) stored in AWS Key Management Service (KMS) for the destination - // bucket. Amazon S3 uses this key to encrypt replica objects. Amazon S3 only - // supports symmetric customer managed CMKs. For more information, see Using - // Symmetric and Asymmetric Keys (https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html) + // Specifies the ID (Key ARN or Alias ARN) of the customer managed AWS KMS key + // stored in AWS Key Management Service (KMS) for the destination bucket. Amazon + // S3 uses this key to encrypt replica objects. Amazon S3 only supports symmetric, + // customer managed KMS keys. For more information, see Using symmetric and + // asymmetric keys (https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html) // in the AWS Key Management Service Developer Guide. 
ReplicaKmsKeyID *string `type:"string"` } @@ -17035,7 +17039,7 @@ func (s *ErrorDocument) SetKey(v string) *ErrorDocument { // Optional configuration to replicate existing source bucket objects. For more // information, see Replicating Existing Objects (https://docs.aws.amazon.com/AmazonS3/latest/dev/replication-what-is-isnot-replicated.html#existing-object-replication) -// in the Amazon S3 Developer Guide. +// in the Amazon S3 User Guide. type ExistingObjectReplication struct { _ struct{} `type:"structure"` @@ -18337,7 +18341,7 @@ type GetBucketLoggingOutput struct { // Describes where logs are stored and the prefix that Amazon S3 assigns to // all log object keys for a bucket. For more information, see PUT Bucket logging // (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlogging.html) - // in the Amazon Simple Storage Service API Reference. + // in the Amazon S3 API Reference. LoggingEnabled *LoggingEnabled `type:"structure"` } @@ -19490,7 +19494,7 @@ type GetObjectAclInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // Bucket is a required field @@ -19510,7 +19514,7 @@ type GetObjectAclInput struct { // Bucket owners need not specify this parameter in their requests. 
For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // VersionId used to reference a specific version of the object. @@ -19664,7 +19668,7 @@ type GetObjectInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -19720,7 +19724,7 @@ type GetObjectInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Sets the Cache-Control header of the response. @@ -19964,7 +19968,7 @@ type GetObjectLegalHoldInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. 
// When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // Bucket is a required field @@ -19984,7 +19988,7 @@ type GetObjectLegalHoldInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // The version ID of the object whose Legal Hold status you want to retrieve. @@ -20119,7 +20123,7 @@ type GetObjectLockConfigurationInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // Bucket is a required field @@ -20567,7 +20571,7 @@ type GetObjectRetentionInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. 
// When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // Bucket is a required field @@ -20587,7 +20591,7 @@ type GetObjectRetentionInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // The version ID for the object whose retention settings you want to retrieve. @@ -20722,7 +20726,7 @@ type GetObjectTaggingInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -20750,7 +20754,7 @@ type GetObjectTaggingInput struct { // Bucket owners need not specify this parameter in their requests. 
For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // The versionId of the object for which to get the tagging information. @@ -20910,7 +20914,7 @@ type GetObjectTorrentInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` } @@ -21342,7 +21346,7 @@ type HeadBucketInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -21457,7 +21461,7 @@ type HeadObjectInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. 
// When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -21514,7 +21518,7 @@ type HeadObjectInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Specifies the algorithm to use to when encrypting the object (for example, @@ -22417,7 +22421,7 @@ func (s *IntelligentTieringFilter) SetTag(v *Tag) *IntelligentTieringFilter { // Specifies the inventory configuration for an Amazon S3 bucket. For more information, // see GET Bucket inventory (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETInventoryConfig.html) -// in the Amazon Simple Storage Service API Reference. +// in the Amazon S3 API Reference. type InventoryConfiguration struct { _ struct{} `type:"structure"` @@ -23987,7 +23991,7 @@ type ListMultipartUploadsInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. 
For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -24627,7 +24631,7 @@ type ListObjectsInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -24921,7 +24925,7 @@ type ListObjectsV2Input struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -25157,7 +25161,7 @@ type ListObjectsV2Output struct { // the access point hostname. 
The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -25273,7 +25277,7 @@ type ListPartsInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -25308,7 +25312,7 @@ type ListPartsInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Upload ID identifying the multipart upload whose parts are being listed. 
@@ -25730,7 +25734,7 @@ func (s *Location) SetUserMetadata(v []*MetadataEntry) *Location { // Describes where logs are stored and the prefix that Amazon S3 assigns to // all log object keys for a bucket. For more information, see PUT Bucket logging // (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlogging.html) -// in the Amazon Simple Storage Service API Reference. +// in the Amazon S3 API Reference. type LoggingEnabled struct { _ struct{} `type:"structure"` @@ -25953,7 +25957,7 @@ func (s *MetricsAndOperator) SetTags(v []*Tag) *MetricsAndOperator { // the existing metrics configuration. If you don't include the elements you // want to keep, they are erased. For more information, see PUT Bucket metrics // (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTMetricConfiguration.html) -// in the Amazon Simple Storage Service API Reference. +// in the Amazon S3 API Reference. type MetricsConfiguration struct { _ struct{} `type:"structure"` @@ -26155,7 +26159,7 @@ type NoncurrentVersionExpiration struct { // perform the associated action. For information about the noncurrent days // calculations, see How Amazon S3 Calculates When an Object Became Noncurrent // (https://docs.aws.amazon.com/AmazonS3/latest/dev/intro-lifecycle-rules.html#non-current-days-calculations) - // in the Amazon Simple Storage Service Developer Guide. + // in the Amazon S3 User Guide. NoncurrentDays *int64 `type:"integer"` } @@ -27336,7 +27340,10 @@ type PutBucketAclInput struct { // Allows grantee to read the bucket ACL. GrantReadACP *string `location:"header" locationName:"x-amz-grant-read-acp" type:"string"` - // Allows grantee to create, overwrite, and delete any object in the bucket. + // Allows grantee to create new objects in the bucket. + // + // For the bucket and object owners of existing objects, also allows deletions + // and overwrites of those objects. 
GrantWrite *string `location:"header" locationName:"x-amz-grant-write" type:"string"` // Allows grantee to write the ACL for the applicable bucket. @@ -29693,7 +29700,7 @@ type PutObjectAclInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // Bucket is a required field @@ -29720,7 +29727,10 @@ type PutObjectAclInput struct { // This action is not supported by Amazon S3 on Outposts. GrantReadACP *string `location:"header" locationName:"x-amz-grant-read-acp" type:"string"` - // Allows grantee to create, overwrite, and delete any object in the bucket. + // Allows grantee to create new objects in the bucket. + // + // For the bucket and object owners of existing objects, also allows deletions + // and overwrites of those objects. GrantWrite *string `location:"header" locationName:"x-amz-grant-write" type:"string"` // Allows grantee to write the ACL for the applicable bucket. @@ -29734,7 +29744,7 @@ type PutObjectAclInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. 
For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -29752,7 +29762,7 @@ type PutObjectAclInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // VersionId used to reference a specific version of the object. @@ -29944,7 +29954,7 @@ type PutObjectInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -30053,7 +30063,7 @@ type PutObjectInput struct { // Bucket owners need not specify this parameter in their requests. 
For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Specifies the algorithm to use to when encrypting the object (for example, @@ -30080,13 +30090,11 @@ type PutObjectInput struct { // If x-amz-server-side-encryption is present and has the value of aws:kms, // this header specifies the ID of the AWS Key Management Service (AWS KMS) // symmetrical customer managed customer master key (CMK) that was used for - // the object. - // - // If the value of x-amz-server-side-encryption is aws:kms, this header specifies - // the ID of the symmetric customer managed AWS KMS CMK that will be used for // the object. If you specify x-amz-server-side-encryption:aws:kms, but do not // providex-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the AWS - // managed CMK in AWS to protect the data. + // managed CMK in AWS to protect the data. If the KMS key does not exist in + // the same account issuing the command, you must use the full ARN and not just + // the ID. SSEKMSKeyId *string `location:"header" locationName:"x-amz-server-side-encryption-aws-kms-key-id" type:"string" sensitive:"true"` // The server-side encryption algorithm used when storing this object in Amazon @@ -30098,7 +30106,7 @@ type PutObjectInput struct { // Depending on performance needs, you can specify a different Storage Class. // Amazon S3 on Outposts only uses the OUTPOSTS Storage Class. For more information, // see Storage Classes (https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) - // in the Amazon S3 Service Developer Guide. + // in the Amazon S3 User Guide. 
StorageClass *string `location:"header" locationName:"x-amz-storage-class" type:"string" enum:"StorageClass"` // The tag-set for the object. The tag-set must be encoded as URL Query parameters. @@ -30401,7 +30409,7 @@ type PutObjectLegalHoldInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // Bucket is a required field @@ -30425,7 +30433,7 @@ type PutObjectLegalHoldInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // The version ID of the object that you want to place a Legal Hold on. @@ -30578,7 +30586,7 @@ type PutObjectLockConfigurationInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. 
RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // A token to allow Object Lock to be enabled for an existing bucket. @@ -30831,7 +30839,7 @@ type PutObjectRetentionInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // Bucket is a required field @@ -30855,7 +30863,7 @@ type PutObjectRetentionInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // The container element for the Object Retention configuration. @@ -31007,7 +31015,7 @@ type PutObjectTaggingInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. 
For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -31035,7 +31043,7 @@ type PutObjectTaggingInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Container for the TagSet and Tag elements @@ -31752,7 +31760,7 @@ type ReplicationRule struct { // Optional configuration to replicate existing source bucket objects. For more // information, see Replicating Existing Objects (https://docs.aws.amazon.com/AmazonS3/latest/dev/replication-what-is-isnot-replicated.html#existing-object-replication) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. ExistingObjectReplication *ExistingObjectReplication `type:"structure"` // A filter that identifies the subset of objects to which the replication rule @@ -32195,7 +32203,7 @@ type RestoreObjectInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. 
For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -32223,7 +32231,7 @@ type RestoreObjectInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Container for restore job parameters. @@ -32540,8 +32548,8 @@ func (s *RoutingRule) SetRedirect(v *Redirect) *RoutingRule { // Specifies lifecycle rules for an Amazon S3 bucket. For more information, // see Put Bucket Lifecycle Configuration (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTlifecycle.html) -// in the Amazon Simple Storage Service API Reference. For examples, see Put -// Bucket Lifecycle Configuration Examples (https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycleConfiguration.html#API_PutBucketLifecycleConfiguration_Examples). +// in the Amazon S3 API Reference. For examples, see Put Bucket Lifecycle Configuration +// Examples (https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycleConfiguration.html#API_PutBucketLifecycleConfiguration_Examples). type Rule struct { _ struct{} `type:"structure"` @@ -33287,17 +33295,17 @@ func (s *SelectParameters) SetOutputSerialization(v *OutputSerialization) *Selec // bucket. 
If a PUT Object request doesn't specify any server-side encryption, // this default encryption will be applied. For more information, see PUT Bucket // encryption (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTencryption.html) -// in the Amazon Simple Storage Service API Reference. +// in the Amazon S3 API Reference. type ServerSideEncryptionByDefault struct { _ struct{} `type:"structure"` - // AWS Key Management Service (KMS) customer master key ID to use for the default + // AWS Key Management Service (KMS) customer AWS KMS key ID to use for the default // encryption. This parameter is allowed if and only if SSEAlgorithm is set // to aws:kms. // - // You can specify the key ID or the Amazon Resource Name (ARN) of the CMK. + // You can specify the key ID or the Amazon Resource Name (ARN) of the KMS key. // However, if you are using encryption with cross-account operations, you must - // use a fully qualified CMK ARN. For more information, see Using encryption + // use a fully qualified KMS key ARN. For more information, see Using encryption // for cross-account operations (https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html#bucket-encryption-update-bucket-policy). // // For example: @@ -33306,8 +33314,8 @@ type ServerSideEncryptionByDefault struct { // // * Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab // - // Amazon S3 only supports symmetric CMKs and not asymmetric CMKs. For more - // information, see Using Symmetric and Asymmetric Keys (https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html) + // Amazon S3 only supports symmetric KMS keys and not asymmetric KMS keys. For + // more information, see Using symmetric and asymmetric keys (https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html) // in the AWS Key Management Service Developer Guide. 
KMSMasterKeyID *string `type:"string" sensitive:"true"` @@ -33531,7 +33539,7 @@ type SseKmsEncryptedObjects struct { _ struct{} `type:"structure"` // Specifies whether Amazon S3 replicates objects created with server-side encryption - // using a customer master key (CMK) stored in AWS Key Management Service. + // using an AWS KMS key stored in AWS Key Management Service. // // Status is a required field Status *string `type:"string" required:"true" enum:"SseKmsEncryptedObjectsStatus"` @@ -34170,7 +34178,7 @@ type UploadPartCopyInput struct { // the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -34275,7 +34283,7 @@ type UploadPartCopyInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Specifies the algorithm to use to when encrypting the object (for example, @@ -34612,7 +34620,7 @@ type UploadPartInput struct { // the access point hostname. 
The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. // When using this action with an access point through the AWS SDKs, you provide // the access point ARN in place of the bucket name. For more information about - // access point ARNs, see Using Access Points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) + // access point ARNs, see Using access points (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-access-points.html) // in the Amazon S3 User Guide. // // When using this action with Amazon S3 on Outposts, you must direct requests @@ -34655,7 +34663,7 @@ type UploadPartInput struct { // Bucket owners need not specify this parameter in their requests. For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) - // in the Amazon S3 Developer Guide. + // in the Amazon S3 User Guide. RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` // Specifies the algorithm to use to when encrypting the object (for example, @@ -34919,7 +34927,7 @@ func (s *UploadPartOutput) SetServerSideEncryption(v string) *UploadPartOutput { // Describes the versioning state of an Amazon S3 bucket. For more information, // see PUT Bucket versioning (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTVersioningStatus.html) -// in the Amazon Simple Storage Service API Reference. +// in the Amazon S3 API Reference. type VersioningConfiguration struct { _ struct{} `type:"structure"` @@ -36477,7 +36485,7 @@ func RequestCharged_Values() []string { // Bucket owners need not specify this parameter in their requests. 
For information // about downloading objects from requester pays buckets, see Downloading Objects // in Requestor Pays Buckets (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html) -// in the Amazon S3 Developer Guide. +// in the Amazon S3 User Guide. const ( // RequestPayerRequester is a RequestPayer enum value RequestPayerRequester = "requester" diff --git a/awsproviderlint/vendor/modules.txt b/awsproviderlint/vendor/modules.txt index 0c9348e184c..a1510d3c78d 100644 --- a/awsproviderlint/vendor/modules.txt +++ b/awsproviderlint/vendor/modules.txt @@ -14,7 +14,7 @@ github.com/agext/levenshtein github.com/apparentlymart/go-textseg/v12/textseg # github.com/apparentlymart/go-textseg/v13 v13.0.0 github.com/apparentlymart/go-textseg/v13/textseg -# github.com/aws/aws-sdk-go v1.38.42 +# github.com/aws/aws-sdk-go v1.38.53 ## explicit github.com/aws/aws-sdk-go/aws github.com/aws/aws-sdk-go/aws/arn diff --git a/docs/MAINTAINING.md b/docs/MAINTAINING.md index 1685b5f90d8..bb36731efe7 100644 --- a/docs/MAINTAINING.md +++ b/docs/MAINTAINING.md @@ -342,6 +342,9 @@ Environment variables (beyond standard AWS Go SDK ones) used by acceptance testi | `ACM_CERTIFICATE_SINGLE_ISSUED_DOMAIN` | Domain name of ACM Certificate with a single issued certificate. **DEPRECATED:** Should be replaced with `aws_acm_certficate` resource usage in tests. | | `ACM_CERTIFICATE_SINGLE_ISSUED_MOST_RECENT_ARN` | Amazon Resource Name of most recent ACM Certificate with a single issued certificate. **DEPRECATED:** Should be replaced with `aws_acm_certficate` resource usage in tests. | | `ADM_CLIENT_ID` | Identifier for Amazon Device Manager Client in Pinpoint testing. | +| `AMPLIFY_DOMAIN_NAME` | Domain name to use for Amplify domain association testing. | +| `AMPLIFY_GITHUB_ACCESS_TOKEN` | GitHub access token used for AWS Amplify testing. | +| `AMPLIFY_GITHUB_REPOSITORY` | GitHub repository used for AWS Amplify testing. 
| | `ADM_CLIENT_SECRET` | Secret for Amazon Device Manager Client in Pinpoint testing. | | `APNS_BUNDLE_ID` | Identifier for Apple Push Notification Service Bundle in Pinpoint testing. | | `APNS_CERTIFICATE` | Certificate (PEM format) for Apple Push Notification Service in Pinpoint testing. | diff --git a/docs/contributing/contribution-checklists.md b/docs/contributing/contribution-checklists.md index 1e241b8bdc2..5c50ea6ec66 100644 --- a/docs/contributing/contribution-checklists.md +++ b/docs/contributing/contribution-checklists.md @@ -623,34 +623,24 @@ into Terraform. - In `website/docs/guides/custom-service-endpoints.html.md`: Add the service name in the list of customizable endpoints. - In `infrastructure/repository/labels-service.tf`: Add the new service to create a repository label. - - In `.hashibot.hcl`: Add the new service to automated issue and pull request labeling. e.g. with the `quicksight` service - - ```hcl - behavior "regexp_issue_labeler_v2" "service_labels" { - # ... other configuration ... - - label_map = { - # ... other services ... - "service/quicksight" = [ - "aws_quicksight_", - ], - # ... other services ... - } - } + - In `.github/labeler-issue-triage.yml`: Add the new service to automated issue labeling. e.g. with the `quicksight` service - behavior "pull_request_path_labeler" "service_labels" - # ... other configuration ... - - label_map = { - # ... other services ... - "service/quicksight" = [ - "aws/internal/service/quicksight/**/*", - "**/*_quicksight_*", - "**/quicksight_*", - ], - # ... other services ... - } - } + ```yaml + # ... other services ... + service/quicksight: + - '((\*|-) ?`?|(data|resource) "?)aws_quicksight_' + # ... other services ... + ``` + + - In `.github/labeler-pr-triage.yml`: Add the new service to automated pull request labeling. e.g. with the `quicksight` service + + ```yaml + # ... other services ... 
+ service/quicksight: + - 'aws/internal/service/quicksight/**/*' + - '**/*_quicksight_*' + - '**/quicksight_*' + # ... other services ... ``` - Run the following then submit the pull request: diff --git a/go.mod b/go.mod index c83c116f129..dd113710b11 100644 --- a/go.mod +++ b/go.mod @@ -3,7 +3,7 @@ module github.com/terraform-providers/terraform-provider-aws go 1.16 require ( - github.com/aws/aws-sdk-go v1.38.43 + github.com/aws/aws-sdk-go v1.38.53 github.com/beevik/etree v1.1.0 github.com/fatih/color v1.9.0 // indirect github.com/hashicorp/aws-sdk-go-base v0.7.1 diff --git a/go.sum b/go.sum index b43ef678a68..eee67ded9f4 100644 --- a/go.sum +++ b/go.sum @@ -78,8 +78,8 @@ github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkY github.com/aws/aws-sdk-go v1.15.78/go.mod h1:E3/ieXAlvM0XWO57iftYVDLLvQ824smPP3ATZkfNZeM= github.com/aws/aws-sdk-go v1.25.3/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo= github.com/aws/aws-sdk-go v1.31.9/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZveU8YkpAk0= -github.com/aws/aws-sdk-go v1.38.43 h1:OKe9+Cdmrkhe0KXgpKhrDqidPhXQ4bv1FzzKnrmTJ5g= -github.com/aws/aws-sdk-go v1.38.43/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro= +github.com/aws/aws-sdk-go v1.38.53 h1:Qj5OvKPrDGTiCnWj+kwQXAlBO6OaFBH/WaRzJPZPg3w= +github.com/aws/aws-sdk-go v1.38.53/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro= github.com/beevik/etree v1.1.0 h1:T0xke/WvNtMoCqgzPhkX2r4rjY3GDZFi+FjpRZY2Jbs= github.com/beevik/etree v1.1.0/go.mod h1:r8Aw8JqVegEf0w2fDnATrX9VpkMcyFeM0FhwO62wh+A= github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d h1:xDfNPAt8lFiC1UJrqV3uuy861HCTo708pDMbjHHdCas= diff --git a/infrastructure/repository/labels-service.tf b/infrastructure/repository/labels-service.tf index 753b05e32cc..b63fdee1ee2 100644 --- a/infrastructure/repository/labels-service.tf +++ b/infrastructure/repository/labels-service.tf @@ -125,6 +125,7 @@ variable "service_labels" { "lexmodelbuildingservice", "licensemanager", 
"lightsail", + "location", "machinelearning", "macie", "macie2", @@ -172,6 +173,7 @@ variable "service_labels" { "s3outposts", "sagemaker", "savingsplans", + "schemas", "secretsmanager", "securityhub", "serverlessapplicationrepository", diff --git a/infrastructure/repository/main.tf b/infrastructure/repository/main.tf index e00ed481a6b..6e3714578d5 100644 --- a/infrastructure/repository/main.tf +++ b/infrastructure/repository/main.tf @@ -10,7 +10,7 @@ terraform { required_providers { github = { source = "hashicorp/github" - version = "3.1.0" + version = "4.10.1" } } diff --git a/website/allowed-subcategories.txt b/website/allowed-subcategories.txt index fbeef224590..1a9b7232940 100644 --- a/website/allowed-subcategories.txt +++ b/website/allowed-subcategories.txt @@ -4,6 +4,7 @@ ACM API Gateway (REST APIs) API Gateway v2 (WebSocket and HTTP APIs) Access Analyzer +Amplify Console AppConfig AppMesh App Runner @@ -17,6 +18,7 @@ Amazon Managed Service for Prometheus (AMP) Backup Batch Budgets +Chime Cloud9 CloudFormation CloudFront @@ -60,6 +62,7 @@ Elastic Map Reduce Containers Elastic Transcoder ElasticSearch EventBridge (CloudWatch Events) +EventBridge Schemas File System (FSx) Firewall Manager (FMS) Gamelift @@ -83,6 +86,7 @@ Lambda Lex License Manager Lightsail +Location Service MQ Macie Macie Classic diff --git a/website/docs/d/cloudwatch_event_connection.html.markdown b/website/docs/d/cloudwatch_event_connection.html.markdown new file mode 100644 index 00000000000..0b6bfeba1fe --- /dev/null +++ b/website/docs/d/cloudwatch_event_connection.html.markdown @@ -0,0 +1,38 @@ +--- +subcategory: "EventBridge (CloudWatch Events)" +layout: "aws" +page_title: "AWS: aws_cloudwatch_event_connection" +description: |- + Provides an EventBridge connection data source. +--- + +# Data source: aws_cloudfront_distribution + +Use this data source to retrieve information about a EventBridge connection. + +~> **Note:** EventBridge was formerly known as CloudWatch Events. 
The functionality is identical. + + +## Example Usage + +```terraform +data "aws_cloudwatch_event_connection" "test" { + name = "test" +} +``` + +## Argument Reference + +* `name` - The name of the connection. + +## Attributes Reference + +The following attributes are exported: + +* `name` - The name of the connection. + +* `arn` - The ARN (Amazon Resource Name) for the connection. + +* `secret_arn` - The ARN (Amazon Resource Name) for the secret created from the authorization parameters specified for the connection. + +* `authorization_type` - The type of authorization to use to connect. One of `API_KEY`, `BASIC`, `OAUTH_CLIENT_CREDENTIALS`. diff --git a/website/docs/d/default_tags.markdown b/website/docs/d/default_tags.markdown new file mode 100644 index 00000000000..ea38cfcafaf --- /dev/null +++ b/website/docs/d/default_tags.markdown @@ -0,0 +1,63 @@ +--- +subcategory: "" +layout: "aws" +page_title: "AWS: aws_default_tags" +description: |- + Access the default tags configured on the provider. +--- + +# Data Source: aws_default_tags + +Use this data source to get the default tags configured on the provider. + +With this data source, you can apply default tags to resources not _directly_ managed by a Terraform resource, such as the instances underneath an Auto Scaling group or the volumes created for an EC2 instance. + +## Example Usage + +### Basic Usage + +```terraform +data "aws_default_tags" "example" {} +``` + +### Dynamically Apply Default Tags to Auto Scaling Group + +```terraform +provider "aws" { + default_tags { + tags = { + Environment = "Test" + Name = "Provider Tag" + } + } +} + +data "aws_default_tags" "example" {} + +resource "aws_autoscaling_group" "example" { + # ... + dynamic "tag" { + for_each = data.aws_default_tags.example.tags + content { + key = tag.key + value = tag.value + propagate_at_launch = true + } + } +} +``` + +## Argument Reference + +This data source has no arguments.
+ +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `tags` - Blocks of default tags set on the provider. See details below. + +### tags + +* `key` - Key name of the tag (i.e., `tags.#.key`). +* `value` - Value of the tag (i.e., `tags.#.value`). diff --git a/website/docs/d/launch_configuration.html.markdown b/website/docs/d/launch_configuration.html.markdown index 1fb1edfa3cf..33c7ea4dd73 100644 --- a/website/docs/d/launch_configuration.html.markdown +++ b/website/docs/d/launch_configuration.html.markdown @@ -57,6 +57,7 @@ In addition to all arguments above, the following attributes are exported: * `delete_on_termination` - Whether the EBS Volume will be deleted on instance termination. * `encrypted` - Whether the volume is Encrypted. * `iops` - The provisioned IOPs of the volume. +* `throughput` - The Throughput of the volume. * `volume_size` - The Size of the volume. * `volume_type` - The Type of the volume. @@ -64,12 +65,13 @@ In addition to all arguments above, the following attributes are exported: * `delete_on_termination` - Whether the EBS Volume will be deleted on instance termination. * `device_name` - The Name of the device. -* `no_device` - Whether the device in the block device mapping of the AMI is suppressed. +* `encrypted` - Whether the volume is Encrypted. * `iops` - The provisioned IOPs of the volume. +* `no_device` - Whether the device in the block device mapping of the AMI is suppressed. * `snapshot_id` - The Snapshot ID of the mount. +* `throughput` - The Throughput of the volume. * `volume_size` - The Size of the volume. * `volume_type` - The Type of the volume. -* `encrypted` - Whether the volume is Encrypted. 
`ephemeral_block_device` is exported with the following attributes: diff --git a/website/docs/d/msk_cluster.html.markdown b/website/docs/d/msk_cluster.html.markdown index a5ac3e56b3c..519f982fa31 100644 --- a/website/docs/d/msk_cluster.html.markdown +++ b/website/docs/d/msk_cluster.html.markdown @@ -29,9 +29,10 @@ The following arguments are supported: In addition to all arguments above, the following attributes are exported: * `arn` - Amazon Resource Name (ARN) of the MSK cluster. -* `bootstrap_brokers` - A comma separated list of one or more hostname:port pairs of Kafka brokers suitable to boostrap connectivity to the Kafka cluster. Only contains value if `client_broker` encryption in transit is set to `PLAINTEXT` or `TLS_PLAINTEXT`. The returned values are sorted alphbetically. The AWS API may not return all endpoints, so this value is not guaranteed to be stable across applies. -* `bootstrap_brokers_sasl_scram` - A comma separated list of one or more DNS names (or IPs) and TLS port pairs kafka brokers suitable to boostrap connectivity using SASL/SCRAM to the kafka cluster. Only contains value if `client_broker` encryption in transit is set to `TLS_PLAINTEXT` or `TLS` and `client_authentication` is set to `sasl`. The returned values are sorted alphbetically. The AWS API may not return all endpoints, so this value is not guaranteed to be stable across applies. -* `bootstrap_brokers_tls` - A comma separated list of one or more DNS names (or IPs) and TLS port pairs kafka brokers suitable to boostrap connectivity to the kafka cluster. Only contains value if `client_broker` encryption in transit is set to `TLS_PLAINTEXT` or `TLS`. The returned values are sorted alphbetically. The AWS API may not return all endpoints, so this value is not guaranteed to be stable across applies. +* `bootstrap_brokers` - Comma separated list of one or more hostname:port pairs of Kafka brokers suitable to bootstrap connectivity to the Kafka cluster.
Contains a value if `encryption_info.0.encryption_in_transit.0.client_broker` is set to `PLAINTEXT` or `TLS_PLAINTEXT`. The resource sorts values alphabetically. AWS may not always return all endpoints so this value is not guaranteed to be stable across applies. +* `bootstrap_brokers_sasl_iam` - One or more DNS names (or IP addresses) and SASL IAM port pairs. For example, `b-1.exampleClusterName.abcde.c2.kafka.us-east-1.amazonaws.com:9098,b-2.exampleClusterName.abcde.c2.kafka.us-east-1.amazonaws.com:9098,b-3.exampleClusterName.abcde.c2.kafka.us-east-1.amazonaws.com:9098`. This attribute will have a value if `encryption_info.0.encryption_in_transit.0.client_broker` is set to `TLS_PLAINTEXT` or `TLS` and `client_authentication.0.sasl.0.iam` is set to `true`. The resource sorts the list alphabetically. AWS may not always return all endpoints so the values may not be stable across applies. +* `bootstrap_brokers_sasl_scram` - One or more DNS names (or IP addresses) and SASL SCRAM port pairs. For example, `b-1.exampleClusterName.abcde.c2.kafka.us-east-1.amazonaws.com:9096,b-2.exampleClusterName.abcde.c2.kafka.us-east-1.amazonaws.com:9096,b-3.exampleClusterName.abcde.c2.kafka.us-east-1.amazonaws.com:9096`. This attribute will have a value if `encryption_info.0.encryption_in_transit.0.client_broker` is set to `TLS_PLAINTEXT` or `TLS` and `client_authentication.0.sasl.0.scram` is set to `true`. The resource sorts the list alphabetically. AWS may not always return all endpoints so the values may not be stable across applies. +* `bootstrap_brokers_tls` - One or more DNS names (or IP addresses) and TLS port pairs. For example, `b-1.exampleClusterName.abcde.c2.kafka.us-east-1.amazonaws.com:9094,b-2.exampleClusterName.abcde.c2.kafka.us-east-1.amazonaws.com:9094,b-3.exampleClusterName.abcde.c2.kafka.us-east-1.amazonaws.com:9094`. This attribute will have a value if `encryption_info.0.encryption_in_transit.0.client_broker` is set to `TLS_PLAINTEXT` or `TLS`. 
The resource sorts the list alphabetically. AWS may not always return all endpoints so the values may not be stable across applies. * `kafka_version` - Apache Kafka version. * `number_of_broker_nodes` - Number of broker nodes in the cluster. * `tags` - Map of key-value pairs assigned to the cluster. diff --git a/website/docs/d/servicecatalog_constraint.html.markdown b/website/docs/d/servicecatalog_constraint.html.markdown new file mode 100644 index 00000000000..bf69c9b3067 --- /dev/null +++ b/website/docs/d/servicecatalog_constraint.html.markdown @@ -0,0 +1,44 @@ +--- +subcategory: "Service Catalog" +layout: "aws" +page_title: "AWS: aws_servicecatalog_constraint" +description: |- + Provides information on a Service Catalog Constraint +--- + +# Data source: aws_servicecatalog_constraint + +Provides information on a Service Catalog Constraint. + +## Example Usage + +### Basic Usage + +```terraform +data "aws_servicecatalog_constraint" "example" { + accept_language = "en" + id = "cons-hrvy0335" +} +``` + +## Argument Reference + +The following arguments are required: + +* `id` - Constraint identifier. + +The following arguments are optional: + +* `accept_language` - (Optional) Language code. Valid values: `en` (English), `jp` (Japanese), `zh` (Chinese). Default value is `en`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `description` - Description of the constraint. +* `owner` - Owner of the constraint. +* `parameters` - Constraint parameters in JSON format. +* `portfolio_id` - Portfolio identifier. +* `product_id` - Product identifier. +* `status` - Constraint status. +* `type` - Type of constraint. Valid values are `LAUNCH`, `NOTIFICATION`, `RESOURCE_UPDATE`, `STACKSET`, and `TEMPLATE`. 
diff --git a/website/docs/guides/custom-service-endpoints.html.md b/website/docs/guides/custom-service-endpoints.html.md index fc513621a1d..36c49148049 100644 --- a/website/docs/guides/custom-service-endpoints.html.md +++ b/website/docs/guides/custom-service-endpoints.html.md @@ -73,6 +73,7 @@ The Terraform AWS Provider allows the following endpoints to be customized:
 backup
 batch
 budgets
+chime
 cloud9
 cloudformation
 cloudfront
@@ -147,6 +148,7 @@ The Terraform AWS Provider allows the following endpoints to be customized:
 lexmodels
 licensemanager
 lightsail
+location
 macie
 macie2
 managedblockchain
@@ -183,6 +185,7 @@ The Terraform AWS Provider allows the following endpoints to be customized:
 s3control
 s3outposts
 sagemaker
+schemas
 sdb
 secretsmanager
 securityhub
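Any entry in the endpoint list above can be overridden in the provider configuration. A minimal sketch for the newly added `chime`, `location`, and `schemas` endpoints, assuming a local emulator such as LocalStack (the URLs are placeholders):

```terraform
provider "aws" {
  endpoints {
    # Placeholder URLs; point these at your emulator or private endpoint.
    chime    = "http://localhost:4566"
    location = "http://localhost:4566"
    schemas  = "http://localhost:4566"
  }
}
```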
diff --git a/website/docs/index.html.markdown b/website/docs/index.html.markdown index ca240bd9902..3de4a40822c 100644 --- a/website/docs/index.html.markdown +++ b/website/docs/index.html.markdown @@ -2,17 +2,22 @@ layout: "aws" page_title: "Provider: AWS" description: |- - The Amazon Web Services (AWS) provider is used to interact with the many resources supported by AWS. The provider needs to be configured with the proper credentials before it can be used. + Use the Amazon Web Services (AWS) provider to interact with the many resources supported by AWS. You must configure the provider with the proper credentials before you can use it. --- # AWS Provider -The Amazon Web Services (AWS) provider is used to interact with the -many resources supported by AWS. The provider needs to be configured -with the proper credentials before it can be used. +Use the Amazon Web Services (AWS) provider to interact with the +many resources supported by AWS. You must configure the provider +with the proper credentials before you can use it. Use the navigation to the left to read about the available resources. +To learn the basics of Terraform using this provider, follow the +hands-on [get started tutorials](https://learn.hashicorp.com/tutorials/terraform/infrastructure-as-code?in=terraform/aws-get-started&utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) on HashiCorp's Learn platform. Interact with AWS services, +including Lambda, RDS, and IAM, by following the [AWS services +tutorials](https://learn.hashicorp.com/collections/terraform/aws?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS). 
+ ## Example Usage Terraform 0.13 and later: diff --git a/website/docs/r/acmpca_certificate_authority.html.markdown b/website/docs/r/acmpca_certificate_authority.html.markdown index b01d4fb2755..8e53c7298ce 100644 --- a/website/docs/r/acmpca_certificate_authority.html.markdown +++ b/website/docs/r/acmpca_certificate_authority.html.markdown @@ -132,6 +132,7 @@ Contains information about the certificate subject. Identifies the entity that o * `enabled` - (Optional) Boolean value that specifies whether certificate revocation lists (CRLs) are enabled. Defaults to `false`. * `expiration_in_days` - (Required) Number of days until a certificate expires. Must be between 1 and 5000. * `s3_bucket_name` - (Optional) Name of the S3 bucket that contains the CRL. If you do not provide a value for the `custom_cname` argument, the name of your S3 bucket is placed into the CRL Distribution Points extension of the issued certificate. You must specify a bucket policy that allows ACM PCA to write the CRL to your bucket. Must be less than or equal to 255 characters in length. +* `s3_object_acl` - (Optional) Determines whether the CRL will be publicly readable or privately held in the CRL Amazon S3 bucket. Defaults to `PUBLIC_READ`. ## Attributes Reference diff --git a/website/docs/r/amplify_app.html.markdown b/website/docs/r/amplify_app.html.markdown new file mode 100644 index 00000000000..5847a16116c --- /dev/null +++ b/website/docs/r/amplify_app.html.markdown @@ -0,0 +1,196 @@ +--- +subcategory: "Amplify Console" +layout: "aws" +page_title: "AWS: aws_amplify_app" +description: |- + Provides an Amplify App resource. +--- + +# Resource: aws_amplify_app + +Provides an Amplify App resource, a fullstack serverless app hosted on the [AWS Amplify Console](https://docs.aws.amazon.com/amplify/latest/userguide/welcome.html). 
+ +~> **Note:** When you create/update an Amplify App from Terraform, you may end up with the error "BadRequestException: You should at least provide one valid token" because of authentication issues. See the section "Repository with Tokens" below. + +## Example Usage + +```terraform +resource "aws_amplify_app" "example" { + name = "example" + repository = "https://github.com/example/app" + + # The default build_spec added by the Amplify Console for React. + build_spec = <<-EOT + version: 0.1 + frontend: + phases: + preBuild: + commands: + - yarn install + build: + commands: + - yarn run build + artifacts: + baseDirectory: build + files: + - '**/*' + cache: + paths: + - node_modules/**/* + EOT + + # The default rewrites and redirects added by the Amplify Console. + custom_rule { + source = "/<*>" + status = "404" + target = "/index.html" + } + + environment_variables = { + ENV = "test" + } +} +``` + +### Repository with Tokens + +If you create a new Amplify App with the `repository` argument, you also need to set `oauth_token` or `access_token` for authentication. For GitHub, get a [personal access token](https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line) and set `access_token` as follows: + +```terraform +resource "aws_amplify_app" "example" { + name = "example" + repository = "https://github.com/example/app" + + # GitHub personal access token + access_token = "..." +} +``` + +You can omit `access_token` if you import an existing Amplify App created by the Amplify Console (using OAuth for authentication). + +### Auto Branch Creation + +```terraform +resource "aws_amplify_app" "example" { + name = "example" + + enable_auto_branch_creation = true + + # The default patterns added by the Amplify Console. + auto_branch_creation_patterns = [ + "*", + "*/**", + ] + + auto_branch_creation_config { + # Enable auto build for the created branch. 
+ enable_auto_build = true + } +} +``` + +### Basic Authorization + +```terraform +resource "aws_amplify_app" "example" { + name = "example" + + enable_basic_auth = true + basic_auth_credentials = base64encode("username1:password1") +} +``` + +### Rewrites and Redirects + +```terraform +resource "aws_amplify_app" "example" { + name = "example" + + # Reverse Proxy Rewrite for API requests + # https://docs.aws.amazon.com/amplify/latest/userguide/redirects.html#reverse-proxy-rewrite + custom_rule { + source = "/api/<*>" + status = "200" + target = "https://api.example.com/api/<*>" + } + + # Redirects for Single Page Web Apps (SPA) + # https://docs.aws.amazon.com/amplify/latest/userguide/redirects.html#redirects-for-single-page-web-apps-spa + custom_rule { + source = "" + status = "200" + target = "/index.html" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name for an Amplify app. +* `access_token` - (Optional) The personal access token for a third-party source control system for an Amplify app. The personal access token is used to create a webhook and a read-only deploy key. The token is not stored. +* `auto_branch_creation_config` - (Optional) The automated branch creation configuration for an Amplify app. An `auto_branch_creation_config` block is documented below. +* `auto_branch_creation_patterns` - (Optional) The automated branch creation glob patterns for an Amplify app. +* `basic_auth_credentials` - (Optional) The credentials for basic authorization for an Amplify app. +* `build_spec` - (Optional) The [build specification](https://docs.aws.amazon.com/amplify/latest/userguide/build-settings.html) (build spec) for an Amplify app. +* `custom_rule` - (Optional) The custom rewrite and redirect rules for an Amplify app. A `custom_rule` block is documented below. +* `description` - (Optional) The description for an Amplify app. 
+* `enable_auto_branch_creation` - (Optional) Enables automated branch creation for an Amplify app. +* `enable_basic_auth` - (Optional) Enables basic authorization for an Amplify app. This will apply to all branches that are part of this app. +* `enable_branch_auto_build` - (Optional) Enables auto-building of branches for the Amplify App. +* `enable_branch_auto_deletion` - (Optional) Automatically disconnects a branch in the Amplify Console when you delete a branch from your Git repository. +* `environment_variables` - (Optional) The environment variables map for an Amplify app. +* `iam_service_role_arn` - (Optional) The AWS Identity and Access Management (IAM) service role for an Amplify app. +* `oauth_token` - (Optional) The OAuth token for a third-party source control system for an Amplify app. The OAuth token is used to create a webhook and a read-only deploy key. The OAuth token is not stored. +* `platform` - (Optional) The platform or framework for an Amplify app. Valid values: `WEB`. +* `repository` - (Optional) The repository for an Amplify app. +* `tags` - (Optional) Key-value mapping of resource tags. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + + +An `auto_branch_creation_config` block supports the following arguments: + +* `basic_auth_credentials` - (Optional) The basic authorization credentials for the autocreated branch. +* `build_spec` - (Optional) The build specification (build spec) for the autocreated branch. +* `enable_auto_build` - (Optional) Enables auto building for the autocreated branch. +* `enable_basic_auth` - (Optional) Enables basic authorization for the autocreated branch. +* `enable_performance_mode` - (Optional) Enables performance mode for the branch. +* `enable_pull_request_preview` - (Optional) Enables pull request previews for the autocreated branch. 
+* `environment_variables` - (Optional) The environment variables for the autocreated branch. +* `framework` - (Optional) The framework for the autocreated branch. +* `pull_request_environment_name` - (Optional) The Amplify environment name for the pull request. +* `stage` - (Optional) Describes the current stage for the autocreated branch. Valid values: `PRODUCTION`, `BETA`, `DEVELOPMENT`, `EXPERIMENTAL`, `PULL_REQUEST`. + +A `custom_rule` block supports the following arguments: + +* `condition` - (Optional) The condition for a URL rewrite or redirect rule, such as a country code. +* `source` - (Required) The source pattern for a URL rewrite or redirect rule. +* `status` - (Optional) The status code for a URL rewrite or redirect rule. Valid values: `200`, `301`, `302`, `404`, `404-200`. +* `target` - (Required) The target pattern for a URL rewrite or redirect rule. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The Amazon Resource Name (ARN) of the Amplify app. +* `default_domain` - The default domain for the Amplify app. +* `id` - The unique ID of the Amplify app. +* `production_branch` - Describes the information about a production branch for an Amplify app. A `production_branch` block is documented below. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). + +A `production_branch` block supports the following attributes: + +* `branch_name` - The branch name for the production branch. +* `last_deploy_time` - The last deploy time of the production branch. +* `status` - The status of the production branch. +* `thumbnail_url` - The thumbnail URL for the production branch. + +## Import + +Amplify App can be imported using Amplify App ID (appId), e.g. 
+ +``` +$ terraform import aws_amplify_app.example d2ypk4k47z8u6 +``` + +App ID can be obtained from App ARN (e.g. `arn:aws:amplify:us-east-1:12345678:apps/d2ypk4k47z8u6`). diff --git a/website/docs/r/amplify_backend_environment.html.markdown b/website/docs/r/amplify_backend_environment.html.markdown new file mode 100644 index 00000000000..83a7c1f1d68 --- /dev/null +++ b/website/docs/r/amplify_backend_environment.html.markdown @@ -0,0 +1,51 @@ +--- +subcategory: "Amplify Console" +layout: "aws" +page_title: "AWS: aws_amplify_backend_environment" +description: |- + Provides an Amplify Backend Environment resource. +--- + +# Resource: aws_amplify_backend_environment + +Provides an Amplify Backend Environment resource. + +## Example Usage + +```terraform +resource "aws_amplify_app" "example" { + name = "example" +} + +resource "aws_amplify_backend_environment" "example" { + app_id = aws_amplify_app.example.id + environment_name = "example" + + deployment_artifacts = "app-example-deployment" + stack_name = "amplify-app-example" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `app_id` - (Required) The unique ID for an Amplify app. +* `environment_name` - (Required) The name for the backend environment. +* `deployment_artifacts` - (Optional) The name of deployment artifacts. +* `stack_name` - (Optional) The AWS CloudFormation stack name of a backend environment. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The Amazon Resource Name (ARN) for a backend environment that is part of an Amplify app. +* `id` - The unique ID of the Amplify backend environment. + +## Import + +Amplify backend environment can be imported using `app_id` and `environment_name`, e.g. 
+ +``` +$ terraform import aws_amplify_backend_environment.example d2ypk4k47z8u6/example +``` diff --git a/website/docs/r/amplify_branch.html.markdown b/website/docs/r/amplify_branch.html.markdown new file mode 100644 index 00000000000..38a78617f55 --- /dev/null +++ b/website/docs/r/amplify_branch.html.markdown @@ -0,0 +1,192 @@ +--- +subcategory: "Amplify Console" +layout: "aws" +page_title: "AWS: aws_amplify_branch" +description: |- + Provides an Amplify Branch resource. +--- + +# Resource: aws_amplify_branch + +Provides an Amplify Branch resource. + +## Example Usage + +```terraform +resource "aws_amplify_app" "example" { + name = "app" +} + +resource "aws_amplify_branch" "master" { + app_id = aws_amplify_app.example.id + branch_name = "master" + + framework = "React" + stage = "PRODUCTION" + + environment_variables = { + REACT_APP_API_SERVER = "https://api.example.com" + } +} +``` + +### Basic Authentication + +```terraform
resource "aws_amplify_app" "example" { + name = "app" +} + +resource "aws_amplify_branch" "master" { + app_id = aws_amplify_app.example.id + branch_name = "master" + + basic_auth_config { + # Enable basic authentication. + enable_basic_auth = true + + username = "username" + password = "password" + } +} +``` + +### Notifications + +Amplify Console uses CloudWatch Events and SNS for email notifications. To implement the same functionality, you need to set `enable_notifications` in an `aws_amplify_branch` resource, as well as create a CloudWatch Events rule, an SNS topic, and SNS subscriptions. + +```terraform +resource "aws_amplify_app" "example" { + name = "app" +} + +resource "aws_amplify_branch" "master" { + app_id = aws_amplify_app.example.id + branch_name = "master" + + # Enable SNS notifications.
+ enable_notifications = true +} + +# CloudWatch Events Rule for Amplify notifications + +resource "aws_cloudwatch_event_rule" "amplify_app_master" { + name = "amplify-${aws_amplify_app.example.id}-${aws_amplify_branch.master.branch_name}-branch-notification" + description = "AWS Amplify build notifications for App: ${aws_amplify_app.example.id} Branch: ${aws_amplify_branch.master.branch_name}" + + event_pattern = jsonencode({ + "detail" = { + "appId" = [ + aws_amplify_app.example.id + ] + "branchName" = [ + aws_amplify_branch.master.branch_name + ], + "jobStatus" = [ + "SUCCEED", + "FAILED", + "STARTED" + ] + } + "detail-type" = [ + "Amplify Deployment Status Change" + ] + "source" = [ + "aws.amplify" + ] + }) +} + +resource "aws_cloudwatch_event_target" "amplify_app_master" { + rule = aws_cloudwatch_event_rule.amplify_app_master.name + target_id = aws_amplify_branch.master.branch_name + arn = aws_sns_topic.amplify_app_master.arn + + input_transformer { + input_paths = { + jobId = "$.detail.jobId" + appId = "$.detail.appId" + region = "$.region" + branch = "$.detail.branchName" + status = "$.detail.jobStatus" + } + + input_template = "\"Build notification from the AWS Amplify Console for app: https://<branch>.<appId>.amplifyapp.com/. Your build status is <status>. Go to https://console.aws.amazon.com/amplify/home?region=<region>#<appId>/<branch>/<jobId> to view details on your build. 
\"" + } +} + +# SNS Topic for Amplify notifications + +resource "aws_sns_topic" "amplify_app_master" { + name = "amplify-${aws_amplify_app.example.id}_${aws_amplify_branch.master.branch_name}" +} + +data "aws_iam_policy_document" "amplify_app_master" { + statement { + sid = "Allow_Publish_Events ${aws_amplify_branch.master.arn}" + + effect = "Allow" + + actions = [ + "SNS:Publish", + ] + + principals { + type = "Service" + identifiers = [ + "events.amazonaws.com", + ] + } + + resources = [ + aws_sns_topic.amplify_app_master.arn, + ] + } +} + +resource "aws_sns_topic_policy" "amplify_app_master" { + arn = aws_sns_topic.amplify_app_master.arn + policy = data.aws_iam_policy_document.amplify_app_master.json +} +``` + +## Argument Reference + +The following arguments are supported: + +* `app_id` - (Required) The unique ID for an Amplify app. +* `branch_name` - (Required) The name for the branch. +* `backend_environment_arn` - (Optional) The Amazon Resource Name (ARN) for a backend environment that is part of an Amplify app. +* `basic_auth_credentials` - (Optional) The basic authorization credentials for the branch. +* `description` - (Optional) The description for the branch. +* `display_name` - (Optional) The display name for a branch. This is used as the default domain prefix. +* `enable_auto_build` - (Optional) Enables auto building for the branch. +* `enable_basic_auth` - (Optional) Enables basic authorization for the branch. +* `enable_notifications` - (Optional) Enables notifications for the branch. +* `enable_performance_mode` - (Optional) Enables performance mode for the branch. +* `enable_pull_request_preview` - (Optional) Enables pull request previews for this branch. +* `environment_variables` - (Optional) The environment variables for the branch. +* `framework` - (Optional) The framework for the branch. +* `pull_request_environment_name` - (Optional) The Amplify environment name for the pull request. 
+* `stage` - (Optional) Describes the current stage for the branch. Valid values: `PRODUCTION`, `BETA`, `DEVELOPMENT`, `EXPERIMENTAL`, `PULL_REQUEST`. +* `tags` - (Optional) Key-value mapping of resource tags. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `ttl` - (Optional) The content Time To Live (TTL) for the website in seconds. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The Amazon Resource Name (ARN) for the branch. +* `associated_resources` - A list of custom resources that are linked to this branch. +* `custom_domains` - The custom domains for the branch. +* `destination_branch` - The destination branch if the branch is a pull request branch. +* `source_branch` - The source branch if the branch is a pull request branch. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). + +## Import + +Amplify branch can be imported using `app_id` and `branch_name`, e.g. + +``` +$ terraform import aws_amplify_branch.master d2ypk4k47z8u6/master +``` diff --git a/website/docs/r/amplify_domain_association.html.markdown b/website/docs/r/amplify_domain_association.html.markdown new file mode 100644 index 00000000000..51c5d3cd8fb --- /dev/null +++ b/website/docs/r/amplify_domain_association.html.markdown @@ -0,0 +1,82 @@ +--- +subcategory: "Amplify Console" +layout: "aws" +page_title: "AWS: aws_amplify_domain_association" +description: |- + Provides an Amplify Domain Association resource. +--- + +# Resource: aws_amplify_domain_association + +Provides an Amplify Domain Association resource. 
+ +## Example Usage + +```terraform +resource "aws_amplify_app" "example" { + name = "app" + + # Setup redirect from https://example.com to https://www.example.com + custom_rule { + source = "https://example.com" + status = "302" + target = "https://www.example.com" + } +} + +resource "aws_amplify_branch" "master" { + app_id = aws_amplify_app.example.id + branch_name = "master" +} + +resource "aws_amplify_domain_association" "example" { + app_id = aws_amplify_app.example.id + domain_name = "example.com" + + # https://example.com + sub_domain { + branch_name = aws_amplify_branch.master.branch_name + prefix = "" + } + + # https://www.example.com + sub_domain { + branch_name = aws_amplify_branch.master.branch_name + prefix = "www" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `app_id` - (Required) The unique ID for an Amplify app. +* `domain_name` - (Required) The domain name for the domain association. +* `sub_domain` - (Required) The setting for the subdomain. Documented below. +* `wait_for_verification` - (Optional) If enabled, the resource will wait for the domain association status to change to `PENDING_DEPLOYMENT` or `AVAILABLE`. Setting this to `false` will skip the process. Default: `true`. + +The `sub_domain` configuration block supports the following arguments: + +* `branch_name` - (Required) The branch name setting for the subdomain. +* `prefix` - (Required) The prefix setting for the subdomain. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The Amazon Resource Name (ARN) for the domain association. +* `certificate_verification_dns_record` - The DNS record for certificate verification. + +The `sub_domain` configuration block exports the following attributes: + +* `dns_record` - The DNS record for the subdomain. +* `verified` - The verified status of the subdomain. 
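The verification attributes documented above can be exported for use outside Terraform, for example when DNS is managed in another system. A minimal sketch, assuming the `example` domain association from the usage section:

```terraform
# DNS record that ACM uses to verify ownership of the domain.
output "certificate_verification_record" {
  value = aws_amplify_domain_association.example.certificate_verification_dns_record
}

# DNS records for each configured subdomain.
output "subdomain_dns_records" {
  value = [for s in aws_amplify_domain_association.example.sub_domain : s.dns_record]
}
```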
+ +## Import + +Amplify domain association can be imported using `app_id` and `domain_name`, e.g. + +``` +$ terraform import aws_amplify_domain_association.app d2ypk4k47z8u6/example.com +``` diff --git a/website/docs/r/amplify_webhook.html.markdown b/website/docs/r/amplify_webhook.html.markdown new file mode 100644 index 00000000000..2a0ea85bb96 --- /dev/null +++ b/website/docs/r/amplify_webhook.html.markdown @@ -0,0 +1,53 @@ +--- +subcategory: "Amplify Console" +layout: "aws" +page_title: "AWS: aws_amplify_webhook" +description: |- + Provides an Amplify Webhook resource. +--- + +# Resource: aws_amplify_webhook + +Provides an Amplify Webhook resource. + +## Example Usage + +```terraform +resource "aws_amplify_app" "example" { + name = "app" +} + +resource "aws_amplify_branch" "master" { + app_id = aws_amplify_app.example.id + branch_name = "master" +} + +resource "aws_amplify_webhook" "master" { + app_id = aws_amplify_app.example.id + branch_name = aws_amplify_branch.master.branch_name + description = "triggermaster" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `app_id` - (Required) The unique ID for an Amplify app. +* `branch_name` - (Required) The name for a branch that is part of the Amplify app. +* `description` - (Optional) The description for a webhook. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The Amazon Resource Name (ARN) for the webhook. +* `url` - The URL of the webhook. + +## Import + +Amplify webhook can be imported using a webhook ID, e.g. 
+ +``` +$ terraform import aws_amplify_webhook.master a26b22a0-748b-4b57-b9a0-ae7e601fe4b1 +``` diff --git a/website/docs/r/cloudfront_distribution.html.markdown b/website/docs/r/cloudfront_distribution.html.markdown index aee9eb13100..a56ebf54041 100644 --- a/website/docs/r/cloudfront_distribution.html.markdown +++ b/website/docs/r/cloudfront_distribution.html.markdown @@ -452,6 +452,10 @@ argument should not be specified. #### Origin Arguments +* `connection_attempts` (Optional) - The number of times that CloudFront attempts to connect to the origin. Must be between 1-3. Defaults to 3. + +* `connection_timeout` (Optional) - The number of seconds that CloudFront waits when trying to establish a connection to the origin. Must be between 1-10. Defaults to 10. + * `custom_origin_config` - The [CloudFront custom origin](#custom-origin-config-arguments) configuration information. If an S3 origin is required, use `s3_origin_config` instead. @@ -469,6 +473,9 @@ argument should not be specified. request your content from a directory in your Amazon S3 bucket or your custom origin. +* `origin_shield` - The [CloudFront Origin Shield](#origin-shield-arguments) + configuration information. Using Origin Shield can help reduce the load on your origin. For more information, see [Using Origin Shield](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/origin-shield.html) in the Amazon CloudFront Developer Guide. + * `s3_origin_config` - The [CloudFront S3 origin](#s3-origin-config-arguments) configuration information. If a custom origin is required, use `custom_origin_config` instead. @@ -490,6 +497,12 @@ argument should not be specified. * `origin_read_timeout` - (Optional) The Custom Read timeout, in seconds. By default, AWS enforces a limit of `60`. But you can request an [increase](http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html#request-custom-request-timeout). 
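Taken together, the new origin arguments can be sketched as follows; the domain name and origin ID are placeholders, and the remaining required distribution arguments are omitted for brevity:

```terraform
resource "aws_cloudfront_distribution" "example" {
  origin {
    domain_name = "example-bucket.s3.amazonaws.com" # placeholder
    origin_id   = "exampleOrigin"                   # placeholder

    connection_attempts = 2 # between 1 and 3; defaults to 3
    connection_timeout  = 5 # between 1 and 10 seconds; defaults to 10

    origin_shield {
      enabled              = true
      origin_shield_region = "us-east-2" # region code, not region name
    }
  }

  # ... enabled, default_cache_behavior, restrictions, viewer_certificate,
  # and other required distribution arguments omitted ...
}
```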
+##### Origin Shield Arguments + +* `enabled` (Required) - A flag that specifies whether Origin Shield is enabled. + +* `origin_shield_region` (Required) - The AWS Region for Origin Shield. To specify a region, use the region code, not the region name. For example, specify the US East (Ohio) region as us-east-2. + ##### S3 Origin Config Arguments * `origin_access_identity` (Optional) - The [CloudFront origin access diff --git a/website/docs/r/cloudtrail.html.markdown b/website/docs/r/cloudtrail.html.markdown index a1509ea44cf..bb54b4e643b 100644 --- a/website/docs/r/cloudtrail.html.markdown +++ b/website/docs/r/cloudtrail.html.markdown @@ -10,9 +10,9 @@ description: |- Provides a CloudTrail resource. -~> *NOTE:* For a multi-region trail, this resource must be in the home region of the trail. +-> **Tip:** For a multi-region trail, this resource must be in the home region of the trail. -~> *NOTE:* For an organization trail, this resource must be in the master account of the organization. +-> **Tip:** For an organization trail, this resource must be in the master account of the organization. ## Example Usage @@ -149,59 +149,57 @@ resource "aws_cloudtrail" "example" { ## Argument Reference -The following arguments are supported: - -* `name` - (Required) Specifies the name of the trail. -* `s3_bucket_name` - (Required) Specifies the name of the S3 bucket designated for publishing log files. -* `s3_key_prefix` - (Optional) Specifies the S3 key prefix that follows - the name of the bucket you have designated for log file delivery. -* `cloud_watch_logs_role_arn` - (Optional) Specifies the role for the CloudWatch Logs - endpoint to assume to write to a user’s log group. -* `cloud_watch_logs_group_arn` - (Optional) Specifies a log group name using an Amazon Resource Name (ARN), - that represents the log group to which CloudTrail logs will be delivered. Note that CloudTrail requires the Log Stream wildcard. -* `enable_logging` - (Optional) Enables logging for the trail. 
Defaults to `true`. - Setting this to `false` will pause logging. -* `include_global_service_events` - (Optional) Specifies whether the trail is publishing events - from global services such as IAM to the log files. Defaults to `true`. -* `is_multi_region_trail` - (Optional) Specifies whether the trail is created in the current - region or in all regions. Defaults to `false`. -* `is_organization_trail` - (Optional) Specifies whether the trail is an AWS Organizations trail. Organization trails log events for the master account and all member accounts. Can only be created in the organization master account. Defaults to `false`. -* `sns_topic_name` - (Optional) Specifies the name of the Amazon SNS topic - defined for notification of log file delivery. -* `enable_log_file_validation` - (Optional) Specifies whether log file integrity validation is enabled. - Defaults to `false`. -* `kms_key_id` - (Optional) Specifies the KMS key ARN to use to encrypt the logs delivered by CloudTrail. -* `event_selector` - (Optional) Specifies an event selector for enabling data event logging. Fields documented below. Please note the [CloudTrail limits](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/WhatIsCloudTrail-Limits.html) when configuring these. -* `insight_selector` - (Optional) Specifies an insight selector for identifying unusual operational activity. Fields documented below. -* `tags` - (Optional) A map of tags to assign to the trail. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. - -### Event Selector Arguments -For **event_selector** the following attributes are supported. - -* `read_write_type` (Optional) - Specify if you want your trail to log read-only events, write-only events, or all. By default, the value is All. You can specify only the following value: "ReadOnly", "WriteOnly", "All". 
Defaults to `All`. -* `include_management_events` (Optional) - Specify if you want your event selector to include management events for your trail. -* `data_resource` (Optional) - Specifies logging data events. Fields documented below. - -#### Data Resource Arguments -For **data_resource** the following attributes are supported. - -* `type` (Required) - The resource type in which you want to log data events. You can specify only the following value: "AWS::S3::Object", "AWS::Lambda::Function" -* `values` (Required) - A list of ARN for the specified S3 buckets and object prefixes.. - -### Insight Selector Arguments - -For **insight_selector** the following attributes are supported. - -* `insight_type` (Optional) - The type of insights to log on a trail. In this release, only `ApiCallRateInsight` is supported as an insight type. +The following arguments are required: + +* `name` - (Required) Name of the trail. +* `s3_bucket_name` - (Required) Name of the S3 bucket designated for publishing log files. + +The following arguments are optional: + +* `cloud_watch_logs_group_arn` - (Optional) Log group name using an ARN that represents the log group to which CloudTrail logs will be delivered. Note that CloudTrail requires the Log Stream wildcard. +* `cloud_watch_logs_role_arn` - (Optional) Role for the CloudWatch Logs endpoint to assume to write to a user’s log group. +* `enable_log_file_validation` - (Optional) Whether log file integrity validation is enabled. Defaults to `false`. +* `enable_logging` - (Optional) Enables logging for the trail. Defaults to `true`. Setting this to `false` will pause logging. +* `event_selector` - (Optional) Configuration block of an event selector for enabling data event logging. See details below. Please note the [CloudTrail limits](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/WhatIsCloudTrail-Limits.html) when configuring these. 
+* `include_global_service_events` - (Optional) Whether the trail is publishing events from global services such as IAM to the log files. Defaults to `true`. +* `insight_selector` - (Optional) Configuration block for identifying unusual operational activity. See details below. +* `is_multi_region_trail` - (Optional) Whether the trail is created in the current region or in all regions. Defaults to `false`. +* `is_organization_trail` - (Optional) Whether the trail is an AWS Organizations trail. Organization trails log events for the master account and all member accounts. Can only be created in the organization master account. Defaults to `false`. +* `kms_key_id` - (Optional) KMS key ARN to use to encrypt the logs delivered by CloudTrail. +* `s3_key_prefix` - (Optional) S3 key prefix that follows the name of the bucket you have designated for log file delivery. +* `sns_topic_name` - (Optional) Name of the Amazon SNS topic defined for notification of log file delivery. +* `tags` - (Optional) Map of tags to assign to the trail. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +### event_selector + +This configuration block supports the following attributes: + +* `data_resource` - (Optional) Configuration block for data events. See details below. +* `include_management_events` - (Optional) Whether to include management events for your trail. +* `read_write_type` - (Optional) Type of events to log. Valid values are `ReadOnly`, `WriteOnly`, `All`. Default value is `All`. + +#### data_resource + +This configuration block supports the following attributes: + +* `type` - (Required) Resource type in which you want to log data events. You can specify only the following values: "AWS::S3::Object", "AWS::Lambda::Function", and "AWS::DynamoDB::Table".
+* `values` - (Required) List of ARN strings or partial ARN strings to specify selectors for data audit events over data resources. ARN list is specific to single-valued `type`. For example, `arn:aws:s3:::<bucket name>/` for all objects in a bucket, `arn:aws:s3:::<bucket name>/key` for specific objects, `arn:aws:lambda` for all lambda events within an account, `arn:aws:lambda:<region>:<account number>:function:<function name>` for a specific Lambda function, `arn:aws:dynamodb` for all DDB events for all tables within an account, or `arn:aws:dynamodb:<region>:<account number>:table/<table name>` for a specific DynamoDB table. + + +### insight_selector + +This configuration block supports the following attributes: + +* `insight_type` - (Optional) Type of insights to log on a trail. The valid value is `ApiCallRateInsight`. ## Attributes Reference In addition to all arguments above, the following attributes are exported: -* `id` - The name of the trail. -* `home_region` - The region in which the trail was created. -* `arn` - The Amazon Resource Name of the trail. -* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). +* `arn` - ARN of the trail. +* `home_region` - Region in which the trail was created. +* `id` - Name of the trail. +* `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). ## Import diff --git a/website/docs/r/cloudwatch_event_api_destination.html.markdown b/website/docs/r/cloudwatch_event_api_destination.html.markdown new file mode 100644 index 00000000000..c2bf1eca05b --- /dev/null +++ b/website/docs/r/cloudwatch_event_api_destination.html.markdown @@ -0,0 +1,53 @@ +--- +subcategory: "EventBridge (CloudWatch Events)" +layout: "aws" +page_title: "AWS: aws_cloudwatch_event_api_destination" +description: |- + Provides an EventBridge event API Destination resource.
+--- + +# Resource: aws_cloudwatch_event_api_destination + +Provides an EventBridge event API Destination resource. + +~> **Note:** EventBridge was formerly known as CloudWatch Events. The functionality is identical. + + +## Example Usage + +```terraform +resource "aws_cloudwatch_event_api_destination" "test" { + name = "api-destination" + description = "An API Destination" + invocation_endpoint = "https://api.destination.com/endpoint" + http_method = "POST" + invocation_rate_limit_per_second = 20 + connection_arn = aws_cloudwatch_event_connection.test.arn +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the new API Destination. The name must be unique for your account. Maximum of 64 characters consisting of numbers, lower/upper case letters, .,-,_. +* `description` - (Optional) The description of the new API Destination. Maximum of 512 characters. +* `invocation_endpoint` - (Required) URL endpoint to invoke as a target. This could be a valid endpoint generated by a partner service. You can include "*" as path parameters wildcards to be set from the Target HttpParameters. +* `http_method` - (Required) Select the HTTP method used for the invocation endpoint, such as GET, POST, PUT, etc. +* `invocation_rate_limit_per_second` - (Optional) Enter the maximum number of invocations per second to allow for this destination. Enter a value greater than 0 (default 300). +* `connection_arn` - (Required) ARN of the EventBridge Connection to use for the API Destination. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The Amazon Resource Name (ARN) of the event API Destination. + + +## Import + +EventBridge API Destinations can be imported using the `name`, e.g. 
+ +```console $ terraform import aws_cloudwatch_event_api_destination.test api-destination ``` diff --git a/website/docs/r/cloudwatch_event_connection.html.markdown b/website/docs/r/cloudwatch_event_connection.html.markdown new file mode 100644 index 00000000000..972e6893e4b --- /dev/null +++ b/website/docs/r/cloudwatch_event_connection.html.markdown @@ -0,0 +1,201 @@ +--- +subcategory: "EventBridge (CloudWatch Events)" +layout: "aws" +page_title: "AWS: aws_cloudwatch_event_connection" +description: |- + Provides an EventBridge connection resource. +--- + +# Resource: aws_cloudwatch_event_connection + +Provides an EventBridge connection resource. + +~> **Note:** EventBridge was formerly known as CloudWatch Events. The functionality is identical. + + +## Example Usage + +```terraform +resource "aws_cloudwatch_event_connection" "test" { + name = "ngrok-connection" + description = "A connection description" + authorization_type = "API_KEY" + + auth_parameters { + api_key { + key = "x-signature" + value = "1234" + } + } +} +``` + +## Example Usage Basic Authorization + +```terraform +resource "aws_cloudwatch_event_connection" "test" { + name = "ngrok-connection" + description = "A connection description" + authorization_type = "BASIC" + + auth_parameters { + basic { + username = "user" + password = "Pass1234!" + } + } +} +``` + +## Example Usage OAuth Authorization + +```terraform +resource "aws_cloudwatch_event_connection" "test" { + name = "ngrok-connection" + description = "A connection description" + authorization_type = "OAUTH_CLIENT_CREDENTIALS" + + auth_parameters { + oauth { + authorization_endpoint = "https://auth.url.com/endpoint" + http_method = "GET" + + client_parameters { + client_id = "1234567890" + client_secret = "Pass1234!"
+ } + + oauth_http_parameters { + body { + key = "body-parameter-key" + value = "body-parameter-value" + is_value_secret = false + } + + header { + key = "header-parameter-key" + value = "header-parameter-value" + is_value_secret = false + } + + query_string { + key = "query-string-parameter-key" + value = "query-string-parameter-value" + is_value_secret = false + } + } + } + } +} +``` + +## Example Usage Invocation Http Parameters + +```terraform +resource "aws_cloudwatch_event_connection" "test" { + name = "ngrok-connection" + description = "A connection description" + authorization_type = "BASIC" + + auth_parameters { + basic { + username = "user" + password = "Pass1234!" + } + + invocation_http_parameters { + body { + key = "body-parameter-key" + value = "body-parameter-value" + is_value_secret = false + } + + body { + key = "body-parameter-key2" + value = "body-parameter-value2" + is_value_secret = true + } + + header { + key = "header-parameter-key" + value = "header-parameter-value" + is_value_secret = false + } + + query_string { + key = "query-string-parameter-key" + value = "query-string-parameter-value" + is_value_secret = false + } + } + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the new connection. Maximum of 64 characters consisting of numbers, lower/upper case letters, .,-,_. +* `description` - (Optional) A description for the connection. Maximum of 512 characters. +* `authorization_type` - (Required) The type of authorization to use for the connection. One of `API_KEY`, `BASIC`, `OAUTH_CLIENT_CREDENTIALS`. +* `auth_parameters` - (Required) Parameters used for authorization. A maximum of 1 is allowed. Documented below. +* `invocation_http_parameters` - (Optional) Invocation Http Parameters are additional credentials used to sign each Invocation of the ApiDestination created from this Connection.
If the ApiDestination Rule Target has additional HttpParameters, the values will be merged together, with the Connection Invocation Http Parameters taking precedence. Secret values are stored and managed by AWS Secrets Manager. A maximum of 1 is allowed. Documented below. + +`auth_parameters` supports the following: + +* `api_key` - (Optional) Parameters used for API_KEY authorization. An API key to include in the header for each authentication request. A maximum of 1 is allowed. Conflicts with `basic` and `oauth`. Documented below. +* `basic` - (Optional) Parameters used for BASIC authorization. A maximum of 1 is allowed. Conflicts with `api_key` and `oauth`. Documented below. +* `oauth` - (Optional) Parameters used for OAUTH_CLIENT_CREDENTIALS authorization. A maximum of 1 is allowed. Conflicts with `basic` and `api_key`. Documented below. + +`api_key` supports the following: + +* `key` - (Required) Header Name. +* `value` - (Required) Header Value. Created and stored in AWS Secrets Manager. + +`basic` supports the following: + +* `username` - (Required) A username for the authorization. +* `password` - (Required) A password for the authorization. Created and stored in AWS Secrets Manager. + +`oauth` supports the following: + +* `authorization_endpoint` - (Required) The URL to the authorization endpoint. +* `http_method` - (Required) The method to use for the authorization request. Valid values are `GET`, `POST`, and `PUT`. +* `client_parameters` - (Required) Contains the client parameters for OAuth authorization. Contains the following two parameters. + * `client_id` - (Required) The client ID for the credentials to use for authorization. Created and stored in AWS Secrets Manager. + * `client_secret` - (Required) The client secret for the credentials to use for authorization. Created and stored in AWS Secrets Manager.
+* `oauth_http_parameters` - (Required) OAuth Http Parameters are additional credentials used to sign the request to the authorization endpoint to exchange the OAuth Client information for an access token. Secret values are stored and managed by AWS Secrets Manager. A maximum of 1 is allowed. Documented below. + +`invocation_http_parameters` and `oauth_http_parameters` support the following: + +* `body` - (Optional) Contains additional body string parameters for the connection. You can include up to 100 additional body string parameters per request. Each additional parameter counts towards the event payload size, which cannot exceed 64 KB. Each parameter can contain the following: + * `key` - (Required) The key for the parameter. + * `value` - (Required) The value associated with the key. Created and stored in AWS Secrets Manager if the value is secret. + * `is_value_secret` - (Optional) Specifies whether the value is secret. + +* `header` - (Optional) Contains additional header parameters for the connection. You can include up to 100 additional header parameters per request. Each additional parameter counts towards the event payload size, which cannot exceed 64 KB. Each parameter can contain the following: + * `key` - (Required) The key for the parameter. + * `value` - (Required) The value associated with the key. Created and stored in AWS Secrets Manager if the value is secret. + +* `query_string` - (Optional) Contains additional query string parameters for the connection. You can include up to 100 additional query string parameters per request. Each additional parameter counts towards the event payload size, which cannot exceed 64 KB. Each parameter can contain the following: + * `key` - (Required) The key for the parameter. + * `value` - (Required) The value associated with the key. Created and stored in AWS Secrets Manager if the value is secret.
+ * `is_value_secret` - (Optional) Specifies whether the value is secret. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The Amazon Resource Name (ARN) of the connection. +* `secret_arn` - The Amazon Resource Name (ARN) of the secret created from the authorization parameters specified for the connection. + + +## Import + +EventBridge Connections can be imported using the `name`, e.g. + +```console +$ terraform import aws_cloudwatch_event_connection.test ngrok-connection +``` diff --git a/website/docs/r/cloudwatch_log_metric_filter.html.markdown b/website/docs/r/cloudwatch_log_metric_filter.html.markdown index 61e7c068b4e..0d9721a5d86 100644 --- a/website/docs/r/cloudwatch_log_metric_filter.html.markdown +++ b/website/docs/r/cloudwatch_log_metric_filter.html.markdown @@ -45,7 +45,8 @@ The `metric_transformation` block supports the following arguments: * `name` - (Required) The name of the CloudWatch metric to which the monitored log information should be published (e.g. `ErrorCount`) * `namespace` - (Required) The destination namespace of the CloudWatch metric. * `value` - (Required) What to publish to the metric. For example, if you're counting the occurrences of a particular term like "Error", the value will be "1" for each occurrence. If you're counting the bytes transferred, the published value will be the value in the log event. -* `default_value` - (Optional) The value to emit when a filter pattern does not match a log event. +* `default_value` - (Optional) The value to emit when a filter pattern does not match a log event. Conflicts with `dimensions`. +* `dimensions` - (Optional) Map of fields to use as dimensions for the metric. Up to 3 dimensions are allowed. Conflicts with `default_value`.
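A minimal sketch of how the new `dimensions` argument fits into a `metric_transformation` block (the resource names, log group reference, and JSON filter pattern here are illustrative, not from this page):

```terraform
resource "aws_cloudwatch_log_metric_filter" "example" {
  name           = "ErrorCountByService"
  pattern        = "{ $.level = \"ERROR\" }"
  log_group_name = aws_cloudwatch_log_group.example.name

  metric_transformation {
    name      = "ErrorCount"
    namespace = "MyApp"
    value     = "1"

    # Dimension values are extracted from fields of the matched log event;
    # `default_value` must be omitted when `dimensions` is set.
    dimensions = {
      Service = "$.service"
    }
  }
}
```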
## Attributes Reference diff --git a/website/docs/r/codebuild_project.html.markdown b/website/docs/r/codebuild_project.html.markdown index cdf681092f3..ddec94b02c7 100755 --- a/website/docs/r/codebuild_project.html.markdown +++ b/website/docs/r/codebuild_project.html.markdown @@ -353,12 +353,12 @@ This block is only valid when the `type` is `CODECOMMIT`, `GITHUB` or `GITHUB_EN * `fetch_submodules` - (Required) Whether to fetch Git submodules for the AWS CodeBuild build project. -`build_status_config` supports the following: +#### secondary_sources: build_status_config * `context` - (Optional) Specifies the context of the build status CodeBuild sends to the source provider. The usage of this parameter depends on the source provider. * `target_url` - (Optional) Specifies the target url of the build status CodeBuild sends to the source provider. The usage of this parameter depends on the source provider. -`vpc_config` supports the following: +### source * `auth` - (Optional, **Deprecated**) Configuration block with the authorization settings for AWS CodeBuild to access the source code to be built. This information is for the AWS CodeBuild console's use only. Use the [`aws_codebuild_source_credential` resource](codebuild_source_credential.html) instead. Auth blocks are documented below. * `buildspec` - (Optional) Build specification to use for this build project's related builds. This must be set when `type` is `NO_SOURCE`. diff --git a/website/docs/r/codebuild_source_credential.html.markdown b/website/docs/r/codebuild_source_credential.html.markdown index 60c14a93aa4..cb995824d6e 100644 --- a/website/docs/r/codebuild_source_credential.html.markdown +++ b/website/docs/r/codebuild_source_credential.html.markdown @@ -10,6 +10,9 @@ description: |- Provides a CodeBuild Source Credentials Resource. 
+~> **NOTE:** +[Codebuild only allows a single credential per given server type in a given region](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_codebuild.GitHubSourceCredentials.html). Therefore, when you define `aws_codebuild_source_credential`, any [`aws_codebuild_project` resource](/docs/providers/aws/r/codebuild_project.html) defined in the same module will use it. + ## Example Usage ```terraform diff --git a/website/docs/r/devicefarm_project.html.markdown b/website/docs/r/devicefarm_project.html.markdown index 725f073618c..7581da69504 100644 --- a/website/docs/r/devicefarm_project.html.markdown +++ b/website/docs/r/devicefarm_project.html.markdown @@ -27,12 +27,15 @@ resource "aws_devicefarm_project" "awesome_devices" { ## Argument Reference * `name` - (Required) The name of the project +* `default_job_timeout_minutes` - (Optional) Sets the execution timeout value (in minutes) for a project. All test runs in this project use the specified execution timeout value unless overridden when scheduling a run. +* `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. ## Attributes Reference In addition to all arguments above, the following attributes are exported: * `arn` - The Amazon Resource Name of this project +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block).
[aws-get-project]: http://docs.aws.amazon.com/devicefarm/latest/APIReference/API_GetProject.html diff --git a/website/docs/r/dms_certificate.html.markdown b/website/docs/r/dms_certificate.html.markdown index 6fac6857e5d..f2667ed516e 100644 --- a/website/docs/r/dms_certificate.html.markdown +++ b/website/docs/r/dms_certificate.html.markdown @@ -49,8 +49,8 @@ In addition to all arguments above, the following attributes are exported: ## Import -Certificates can be imported using the `certificate_arn`, e.g. +Certificates can be imported using the `certificate_id`, e.g. ``` -$ terraform import aws_dms_certificate.test arn:aws:dms:us-west-2:123456789:cert:xxxxxxxxxx +$ terraform import aws_dms_certificate.test test-dms-certificate-tf ``` diff --git a/website/docs/r/ec2_capacity_reservation.html.markdown b/website/docs/r/ec2_capacity_reservation.html.markdown index fa70669e024..3d61d70f24d 100644 --- a/website/docs/r/ec2_capacity_reservation.html.markdown +++ b/website/docs/r/ec2_capacity_reservation.html.markdown @@ -34,6 +34,7 @@ The following arguments are supported: * `instance_match_criteria` - (Optional) Indicates the type of instance launches that the Capacity Reservation accepts. Specify either `open` or `targeted`. * `instance_platform` - (Required) The type of operating system for which to reserve capacity. Valid options are `Linux/UNIX`, `Red Hat Enterprise Linux`, `SUSE Linux`, `Windows`, `Windows with SQL Server`, `Windows with SQL Server Enterprise`, `Windows with SQL Server Standard` or `Windows with SQL Server Web`. * `instance_type` - (Required) The instance type for which to reserve capacity. +* `outpost_arn` - (Optional) The Amazon Resource Name (ARN) of the Outpost on which to create the Capacity Reservation. * `tags` - (Optional) A map of tags to assign to the resource. 
If configured with a provider [`default_tags` configuration block](https://www.terraform.io/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. * `tenancy` - (Optional) Indicates the tenancy of the Capacity Reservation. Specify either `default` or `dedicated`. diff --git a/website/docs/r/ec2_traffic_mirror_target.html.markdown b/website/docs/r/ec2_traffic_mirror_target.html.markdown index b25f5b03312..af7c1619a58 100644 --- a/website/docs/r/ec2_traffic_mirror_target.html.markdown +++ b/website/docs/r/ec2_traffic_mirror_target.html.markdown @@ -8,7 +8,7 @@ description: |- # Resource: aws_ec2_traffic_mirror_target -Provides an Traffic mirror target. +Provides a Traffic mirror target. Read [limits and considerations](https://docs.aws.amazon.com/vpc/latest/mirroring/traffic-mirroring-considerations.html) for traffic mirroring ## Example Usage diff --git a/website/docs/r/ecs_service.html.markdown b/website/docs/r/ecs_service.html.markdown index e7c2e988a47..6d2d448391c 100644 --- a/website/docs/r/ecs_service.html.markdown +++ b/website/docs/r/ecs_service.html.markdown @@ -105,7 +105,7 @@ The following arguments are optional: * `force_new_deployment` - (Optional) Enable to force a new task deployment of the service. This can be used to update tasks to use a newer Docker image with same image/tag combination (e.g. `myimage:latest`), roll Fargate tasks onto a newer platform version, or immediately deploy `ordered_placement_strategy` and `placement_constraints` updates. * `health_check_grace_period_seconds` - (Optional) Seconds to ignore failing load balancer health checks on newly instantiated tasks to prevent premature shutdown, up to 2147483647. Only valid for services configured to use load balancers. * `iam_role` - (Optional) ARN of the IAM role that allows Amazon ECS to make calls to your load balancer on your behalf. 
This parameter is required if you are using a load balancer with your service, but only if your task definition does not use the `awsvpc` network mode. If using `awsvpc` network mode, do not specify this role. If your account has already created the Amazon ECS service-linked role, that role is used by default for your service unless you specify a role here. -* `launch_type` - (Optional) Launch type on which to run your service. The valid values are `EC2` and `FARGATE`. Defaults to `EC2`. +* `launch_type` - (Optional) Launch type on which to run your service. The valid values are `EC2`, `FARGATE`, and `EXTERNAL`. Defaults to `EC2`. * `load_balancer` - (Optional) Configuration block for load balancers. Detailed below. * `network_configuration` - (Optional) Network configuration for the service. This parameter is required for task definitions that use the `awsvpc` network mode to receive their own Elastic Network Interface, and it is not supported for other network modes. Detailed below. * `ordered_placement_strategy` - (Optional) Service level strategy rules that are taken into consideration during task placement. List from top to bottom in order of precedence. Updates to this configuration will take effect next task deployment unless `force_new_deployment` is enabled. The maximum number of `ordered_placement_strategy` blocks is `5`. Detailed below. diff --git a/website/docs/r/eks_node_group.html.markdown b/website/docs/r/eks_node_group.html.markdown index e021880b5c8..f7d4f6490d4 100644 --- a/website/docs/r/eks_node_group.html.markdown +++ b/website/docs/r/eks_node_group.html.markdown @@ -134,6 +134,7 @@ The following arguments are optional: * `release_version` – (Optional) AMI version of the EKS Node Group. Defaults to latest version for Kubernetes version. * `remote_access` - (Optional) Configuration block with remote access settings. Detailed below. * `tags` - (Optional) Key-value map of resource tags. 
If configured with a provider [`default_tags` configuration block](https://www.terraform.io/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +* `taint` - (Optional) The Kubernetes taints to be applied to the nodes in the node group. Maximum of 50 taints per node group. Detailed below. * `version` – (Optional) Kubernetes version. Defaults to EKS Cluster Kubernetes version. Terraform will only perform drift detection if a configuration value is provided. ### launch_template Configuration Block @@ -155,6 +156,12 @@ The following arguments are optional: * `max_size` - (Required) Maximum number of worker nodes. * `min_size` - (Required) Minimum number of worker nodes. +### taint Configuration Block + +* `key` - (Required) The key of the taint. Maximum length of 63. +* `value` - (Optional) The value of the taint. Maximum length of 63. +* `effect` - (Required) The effect of the taint. Valid values: `NO_SCHEDULE`, `NO_EXECUTE`, `PREFER_NO_SCHEDULE`. + ## Attributes Reference In addition to all arguments above, the following attributes are exported: diff --git a/website/docs/r/elasticache_parameter_group.html.markdown b/website/docs/r/elasticache_parameter_group.html.markdown index d3a2effb7f2..4b6273b51c2 100644 --- a/website/docs/r/elasticache_parameter_group.html.markdown +++ b/website/docs/r/elasticache_parameter_group.html.markdown @@ -39,6 +39,7 @@ The following arguments are supported: * `family` - (Required) The family of the ElastiCache parameter group. * `description` - (Optional) The description of the ElastiCache parameter group. Defaults to "Managed by Terraform". * `parameter` - (Optional) A list of ElastiCache parameters to apply. +* `tags` - (Optional) Key-value mapping of resource tags. 
If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. Parameter blocks support the following: @@ -50,6 +51,8 @@ Parameter blocks support the following: In addition to all arguments above, the following attributes are exported: * `id` - The ElastiCache parameter group name. +* `arn` - The AWS ARN associated with the parameter group. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). ## Import diff --git a/website/docs/r/fsx_lustre_file_system.html.markdown b/website/docs/r/fsx_lustre_file_system.html.markdown index 179417826f2..ae3e53478d1 100644 --- a/website/docs/r/fsx_lustre_file_system.html.markdown +++ b/website/docs/r/fsx_lustre_file_system.html.markdown @@ -24,7 +24,7 @@ resource "aws_fsx_lustre_file_system" "example" { The following arguments are supported: -* `storage_capacity` - (Required) The storage capacity (GiB) of the file system. Minimum of `1200`. Storage capacity is provisioned in increments of 3,600 GiB. +* `storage_capacity` - (Required) The storage capacity (GiB) of the file system. Minimum of `1200`. See more details at [Allowed values for Fsx storage capacity](https://docs.aws.amazon.com/fsx/latest/APIReference/API_CreateFileSystem.html#FSx-CreateFileSystem-request-StorageCapacity). Update is allowed only for `SCRATCH_2` and `PERSISTENT_1` deployment types, See more details at [Fsx Storage Capacity Update](https://docs.aws.amazon.com/fsx/latest/APIReference/API_UpdateFileSystem.html#FSx-UpdateFileSystem-request-StorageCapacity). * `subnet_ids` - (Required) A list of IDs for the subnets that the file system will be accessible from. File systems currently support only one subnet. 
The file server is also launched in that subnet's Availability Zone. * `export_path` - (Optional) S3 URI (with optional prefix) where the root of your Amazon FSx file system is exported. Can only be specified with `import_path` argument and the path must use the same Amazon S3 bucket as specified in `import_path`. Set equal to `import_path` to overwrite files on export. Defaults to `s3://{IMPORT BUCKET}/FSxLustre{CREATION TIMESTAMP}`. * `import_path` - (Optional) S3 URI (with optional prefix) that you're using as the data repository for your FSx for Lustre file system. For example, `s3://example-bucket/optional-prefix/`. diff --git a/website/docs/r/glue_connection.html.markdown b/website/docs/r/glue_connection.html.markdown index 51544bb7b7e..a3bfaea58f4 100644 --- a/website/docs/r/glue_connection.html.markdown +++ b/website/docs/r/glue_connection.html.markdown @@ -53,7 +53,7 @@ resource "aws_glue_connection" "example" { The following arguments are supported: * `catalog_id` – (Optional) The ID of the Data Catalog in which to create the connection. If none is supplied, the AWS account ID is used by default. -* `connection_properties` – (Required) A map of key-value pairs used as parameters for this connection. +* `connection_properties` – (Optional) A map of key-value pairs used as parameters for this connection. * `connection_type` – (Optional) The type of the connection. Supported types are: `JDBC`, `MONGODB`, `KAFKA`, and `NETWORK`. Defaults to `JDBC`. * `description` – (Optional) Description of the connection. * `match_criteria` – (Optional) A list of criteria that can be used in selecting this connection.
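With `connection_properties` now optional, a connection such as a `NETWORK`-type connection can be declared without it; a minimal sketch (the subnet and security group references are illustrative, not from this page):

```terraform
resource "aws_glue_connection" "example" {
  # No connection_properties block needed for a NETWORK connection
  connection_type = "NETWORK"
  name            = "example-network-connection"

  physical_connection_requirements {
    availability_zone      = aws_subnet.example.availability_zone
    security_group_id_list = [aws_security_group.example.id]
    subnet_id              = aws_subnet.example.id
  }
}
```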
diff --git a/website/docs/r/lambda_event_source_mapping.html.markdown b/website/docs/r/lambda_event_source_mapping.html.markdown index 9ffda81bef3..8c529aed398 100644 --- a/website/docs/r/lambda_event_source_mapping.html.markdown +++ b/website/docs/r/lambda_event_source_mapping.html.markdown @@ -46,6 +46,37 @@ resource "aws_lambda_event_source_mapping" "example" { } ``` +### Self Managed Apache Kafka + +```terraform +resource "aws_lambda_event_source_mapping" "example" { + function_name = aws_lambda_function.example.arn + topics = ["Example"] + starting_position = "TRIM_HORIZON" + + self_managed_event_source { + endpoints = { + KAFKA_BOOTSTRAP_SERVERS = "kafka1.example.com:9092,kafka2.example.com:9092" + } + } + + source_access_configuration { + type = "VPC_SUBNET" + uri = "subnet:subnet-example1" + } + + source_access_configuration { + type = "VPC_SUBNET" + uri = "subnet:subnet-example2" + } + + source_access_configuration { + type = "VPC_SECURITY_GROUP" + uri = "security_group:sg-example" + } +} +``` + ### SQS ```terraform @@ -57,19 +88,24 @@ resource "aws_lambda_event_source_mapping" "example" { ## Argument Reference -* `batch_size` - (Optional) The largest number of records that Lambda will retrieve from your event source at the time of invocation. Defaults to `100` for DynamoDB, Kinesis and MSK, `10` for SQS. -* `maximum_batching_window_in_seconds` - (Optional) The maximum amount of time to gather records before invoking the function, in seconds (between 0 and 300). Records will continue to buffer (or accumulate in the case of an SQS queue event source) until either `maximum_batching_window_in_seconds` expires or `batch_size` has been met. For streaming event sources, defaults to as soon as records are available in the stream. If the batch it reads from the stream/queue only has one record in it, Lambda only sends one record to the function. Only available for stream sources (DynamoDB and Kinesis) and SQS standard queues. 
-* `event_source_arn` - (Required) The event source ARN - can be a Kinesis stream, DynamoDB stream, SQS queue or MSK cluster. +* `batch_size` - (Optional) The largest number of records that Lambda will retrieve from your event source at the time of invocation. Defaults to `100` for DynamoDB, Kinesis, MQ and MSK, `10` for SQS. +* `bisect_batch_on_function_error`: - (Optional) If the function returns an error, split the batch in two and retry. Only available for stream sources (DynamoDB and Kinesis). Defaults to `false`. +* `destination_config`: - (Optional) An Amazon SQS queue or Amazon SNS topic destination for failed records. Only available for stream sources (DynamoDB and Kinesis). Detailed below. * `enabled` - (Optional) Determines if the mapping will be enabled on creation. Defaults to `true`. +* `event_source_arn` - (Optional) The event source ARN - this is required for Kinesis stream, DynamoDB stream, SQS queue, MQ broker or MSK cluster. It is incompatible with a Self Managed Kafka source. * `function_name` - (Required) The name or the ARN of the Lambda function that will be subscribing to events. +* `function_response_types` - (Optional) A list of current response type enums applied to the event source mapping for [AWS Lambda checkpointing](https://docs.aws.amazon.com/lambda/latest/dg/with-ddb.html#services-ddb-batchfailurereporting). Only available for stream sources (DynamoDB and Kinesis). Valid values: `ReportBatchItemFailures`. +* `maximum_batching_window_in_seconds` - (Optional) The maximum amount of time to gather records before invoking the function, in seconds (between 0 and 300). Records will continue to buffer (or accumulate in the case of an SQS queue event source) until either `maximum_batching_window_in_seconds` expires or `batch_size` has been met. For streaming event sources, defaults to as soon as records are available in the stream. 
If the batch it reads from the stream/queue only has one record in it, Lambda only sends one record to the function. Only available for stream sources (DynamoDB and Kinesis) and SQS standard queues. +* `maximum_record_age_in_seconds`: - (Optional) The maximum age of a record that Lambda sends to a function for processing. Only available for stream sources (DynamoDB and Kinesis). Must be either -1 (forever, and the default value) or between 60 and 604800 (inclusive). +* `maximum_retry_attempts`: - (Optional) The maximum number of times to retry when the function returns an error. Only available for stream sources (DynamoDB and Kinesis). Minimum and default of -1 (forever), maximum of 10000. +* `parallelization_factor`: - (Optional) The number of batches to process from each shard concurrently. Only available for stream sources (DynamoDB and Kinesis). Minimum and default of 1, maximum of 10. +* `queues` - (Optional) The name of the Amazon MQ broker destination queue to consume. Only available for MQ sources. A single queue name must be specified. +* `self_managed_event_source`: - (Optional) For Self Managed Kafka sources, the location of the self managed cluster. If set, configuration must also include `source_access_configuration`. Detailed below. +* `source_access_configuration`: - (Optional) For Self Managed Kafka sources, the access configuration for the source. If set, configuration must also include `self_managed_event_source`. Detailed below. * `starting_position` - (Optional) The position in the stream where AWS Lambda should start reading. Must be one of `AT_TIMESTAMP` (Kinesis only), `LATEST` or `TRIM_HORIZON` if getting events from Kinesis, DynamoDB or MSK. Must not be provided if getting events from SQS.
More information about these positions can be found in the [AWS DynamoDB Streams API Reference](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_streams_GetShardIterator.html) and [AWS Kinesis API Reference](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_GetShardIterator.html#Kinesis-GetShardIterator-request-ShardIteratorType). * `starting_position_timestamp` - (Optional) A timestamp in [RFC3339 format](https://tools.ietf.org/html/rfc3339#section-5.8) of the data record from which to start reading when `starting_position` is set to `AT_TIMESTAMP`. If a record with this exact timestamp does not exist, the next later record is chosen. If the timestamp is older than the current trim horizon, the oldest available record is chosen. -* `parallelization_factor`: - (Optional) The number of batches to process from each shard concurrently. Only available for stream sources (DynamoDB and Kinesis). Minimum and default of 1, maximum of 10. -* `maximum_retry_attempts`: - (Optional) The maximum number of times to retry when the function returns an error. Only available for stream sources (DynamoDB and Kinesis). Minimum and default of -1 (forever), maximum of 10000. -* `maximum_record_age_in_seconds`: - (Optional) The maximum age of a record that Lambda sends to a function for processing. Only available for stream sources (DynamoDB and Kinesis). Must be either -1 (forever, and the default value) or between 60 and 604800 (inclusive). -* `bisect_batch_on_function_error`: - (Optional) If the function returns an error, split the batch in two and retry. Only available for stream sources (DynamoDB and Kinesis). Defaults to `false`. * `topics` - (Optional) The name of the Kafka topics. Only available for MSK sources. A single topic name must be specified. -* `destination_config`: - (Optional) An Amazon SQS queue or Amazon SNS topic destination for failed records. Only available for stream sources (DynamoDB and Kinesis). Detailed below.
+* `tumbling_window_in_seconds` - (Optional) The duration in seconds of a processing window for [AWS Lambda streaming analytics](https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html#services-kinesis-windows). The range is from 1 to 900 seconds. Only available for stream sources (DynamoDB and Kinesis). ### destination_config Configuration Block @@ -79,6 +115,15 @@ resource "aws_lambda_event_source_mapping" "example" { * `destination_arn` - (Required) The Amazon Resource Name (ARN) of the destination resource. +### self_managed_event_source Configuration Block + +* `endpoints` - (Required) A map of endpoints for the self managed source. For Kafka self-managed sources, the key should be `KAFKA_BOOTSTRAP_SERVERS` and the value should be a string with a comma separated list of broker endpoints. + +### source_access_configuration Configuration Block + +* `type` - (Required) The type of this configuration. For Self Managed Kafka you will need to supply blocks for type `VPC_SUBNET` and `VPC_SECURITY_GROUP`. +* `uri` - (Required) The URI for this configuration. For type `VPC_SUBNET` the value should be `subnet:subnet_id` where `subnet_id` is the value you would find in an `aws_subnet` resource's `id` attribute. For type `VPC_SECURITY_GROUP` the value should be `security_group:security_group_id` where `security_group_id` is the value you would find in an `aws_security_group` resource's `id` attribute. + ## Attributes Reference In addition to all arguments above, the following attributes are exported: diff --git a/website/docs/r/launch_configuration.html.markdown b/website/docs/r/launch_configuration.html.markdown index f99ce052dba..ac2fed58bfd 100644 --- a/website/docs/r/launch_configuration.html.markdown +++ b/website/docs/r/launch_configuration.html.markdown @@ -176,12 +176,13 @@ to understand the implications of using these attributes. The `root_block_device` mapping supports the following: -* `volume_type` - (Optional) The type of volume.
Can be `"standard"`, `"gp2"`, +* `volume_type` - (Optional) The type of volume. Can be `"standard"`, `"gp2"`, `"gp3"`, `"st1"`, `"sc1"` or `"io1"`. (Default: `"standard"`). * `volume_size` - (Optional) The size of the volume in gigabytes. * `iops` - (Optional) The amount of provisioned [IOPS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html). This must be set with a `volume_type` of `"io1"`. +* `throughput` - (Optional) The throughput (MiBps) to provision for a `gp3` volume. * `delete_on_termination` - (Optional) Whether the volume should be destroyed on instance termination (Default: `true`). * `encrypted` - (Optional) Whether the volume should be encrypted or not. (Default: `false`). @@ -193,12 +194,13 @@ Each `ebs_block_device` supports the following: * `device_name` - (Required) The name of the device to mount. * `snapshot_id` - (Optional) The Snapshot ID to mount. -* `volume_type` - (Optional) The type of volume. Can be `"standard"`, `"gp2"`, +* `volume_type` - (Optional) The type of volume. Can be `"standard"`, `"gp2"`, `"gp3"`, `"st1"`, `"sc1"` or `"io1"`. (Default: `"standard"`). * `volume_size` - (Optional) The size of the volume in gigabytes. * `iops` - (Optional) The amount of provisioned [IOPS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html). This must be set with a `volume_type` of `"io1"`. +* `throughput` - (Optional) The throughput (MiBps) to provision for a `gp3` volume. * `delete_on_termination` - (Optional) Whether the volume should be destroyed on instance termination (Default: `true`). * `encrypted` - (Optional) Whether the volume should be encrypted or not. Do not use this option if you are using `snapshot_id` as the encrypted flag will be determined by the snapshot. (Default: `false`). 
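Since the new `throughput` argument only applies to `gp3` volumes, a sketch of a launch configuration pairing it with `volume_type = "gp3"` may help (the AMI ID and sizing values are assumed placeholders, not recommendations):

```terraform
resource "aws_launch_configuration" "gp3_example" {
  name_prefix   = "gp3-example-"
  image_id      = "ami-0123456789abcdef0" # assumed placeholder AMI
  instance_type = "t3.micro"

  root_block_device {
    volume_type = "gp3"
    volume_size = 50
    iops        = 3000 # assumed value
    throughput  = 125  # MiBps; only meaningful for gp3 volumes
  }
}
```

Setting `throughput` with any other `volume_type` has no effect, so keep it alongside the `gp3` declaration.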
diff --git a/website/docs/r/lb_target_group.html.markdown b/website/docs/r/lb_target_group.html.markdown index 37829a3ed78..32790955784 100644 --- a/website/docs/r/lb_target_group.html.markdown +++ b/website/docs/r/lb_target_group.html.markdown @@ -60,7 +60,7 @@ The following arguments are supported: * `deregistration_delay` - (Optional) Amount of time for Elastic Load Balancing to wait before changing the state of a deregistering target from draining to unused. The range is 0-3600 seconds. The default value is 300 seconds. * `health_check` - (Optional, Maximum of 1) Health Check configuration block. Detailed below. -* `lambda_multi_value_headers_enabled` - (Optional) Whether the request and response headers exchanged between the load balancer and the Lambda function include arrays of values or strings. Only applies when `target_type` is `lambda`. +* `lambda_multi_value_headers_enabled` - (Optional) Whether the request and response headers exchanged between the load balancer and the Lambda function include arrays of values or strings. Only applies when `target_type` is `lambda`. Default is `false`. * `load_balancing_algorithm_type` - (Optional) Determines how the load balancer selects targets when routing requests. Only applicable for Application Load Balancer Target Groups. The value is `round_robin` or `least_outstanding_requests`. The default is `round_robin`. * `name_prefix` - (Optional, Forces new resource) Creates a unique name beginning with the specified prefix. Conflicts with `name`. Cannot be longer than 6 characters. * `name` - (Optional, Forces new resource) Name of the target group. If omitted, Terraform will assign a random, unique name.
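A sketch of opting in to the `lambda_multi_value_headers_enabled` behavior documented above, since it defaults to off (the Lambda function reference is an assumed name, not part of this change):

```terraform
resource "aws_lb_target_group" "lambda_example" {
  name        = "lambda-tg"
  target_type = "lambda"

  # Defaults to false; enable to receive/return multi-value headers.
  lambda_multi_value_headers_enabled = true
}

resource "aws_lb_target_group_attachment" "lambda_example" {
  target_group_arn = aws_lb_target_group.lambda_example.arn
  target_id        = aws_lambda_function.example.arn # assumed to exist elsewhere
}
```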
diff --git a/website/docs/r/msk_cluster.html.markdown b/website/docs/r/msk_cluster.html.markdown index 378af634f0f..cb6a219ef02 100644 --- a/website/docs/r/msk_cluster.html.markdown +++ b/website/docs/r/msk_cluster.html.markdown @@ -191,6 +191,7 @@ The following arguments are supported: #### client_authentication sasl Argument Reference +* `iam` - (Optional) Enables IAM client authentication. Defaults to `false`. * `scram` - (Optional) Enables SCRAM client authentication via AWS Secrets Manager. Defaults to `false`. #### client_authentication tls Argument Reference @@ -254,9 +255,10 @@ The following arguments are supported: In addition to all arguments above, the following attributes are exported: * `arn` - Amazon Resource Name (ARN) of the MSK cluster. -* `bootstrap_brokers` - A comma separated list of one or more hostname:port pairs of kafka brokers suitable to boostrap connectivity to the kafka cluster. Only contains value if `client_broker` encryption in transit is set to `PLAINTEXT` or `TLS_PLAINTEXT`. The returned values are sorted alphbetically. The AWS API may not return all endpoints, so this value is not guaranteed to be stable across applies. -* `bootstrap_brokers_sasl_scram` - A comma separated list of one or more DNS names (or IPs) and TLS port pairs kafka brokers suitable to boostrap connectivity using SASL/SCRAM to the kafka cluster. Only contains value if `client_broker` encryption in transit is set to `TLS_PLAINTEXT` or `TLS` and `client_authentication` is set to `sasl`. The returned values are sorted alphbetically. The AWS API may not return all endpoints, so this value is not guaranteed to be stable across applies. -* `bootstrap_brokers_tls` - A comma separated list of one or more DNS names (or IPs) and TLS port pairs kafka brokers suitable to boostrap connectivity to the kafka cluster. Only contains value if `client_broker` encryption in transit is set to `TLS_PLAINTEXT` or `TLS`. The returned values are sorted alphbetically. 
The AWS API may not return all endpoints, so this value is not guaranteed to be stable across applies. +* `bootstrap_brokers` - Comma separated list of one or more hostname:port pairs of kafka brokers suitable to bootstrap connectivity to the kafka cluster. Contains a value if `encryption_info.0.encryption_in_transit.0.client_broker` is set to `PLAINTEXT` or `TLS_PLAINTEXT`. The resource sorts values alphabetically. AWS may not always return all endpoints so this value is not guaranteed to be stable across applies. +* `bootstrap_brokers_sasl_iam` - One or more DNS names (or IP addresses) and SASL IAM port pairs. For example, `b-1.exampleClusterName.abcde.c2.kafka.us-east-1.amazonaws.com:9098,b-2.exampleClusterName.abcde.c2.kafka.us-east-1.amazonaws.com:9098,b-3.exampleClusterName.abcde.c2.kafka.us-east-1.amazonaws.com:9098`. This attribute will have a value if `encryption_info.0.encryption_in_transit.0.client_broker` is set to `TLS_PLAINTEXT` or `TLS` and `client_authentication.0.sasl.0.iam` is set to `true`. The resource sorts the list alphabetically. AWS may not always return all endpoints so the values may not be stable across applies. +* `bootstrap_brokers_sasl_scram` - One or more DNS names (or IP addresses) and SASL SCRAM port pairs. For example, `b-1.exampleClusterName.abcde.c2.kafka.us-east-1.amazonaws.com:9096,b-2.exampleClusterName.abcde.c2.kafka.us-east-1.amazonaws.com:9096,b-3.exampleClusterName.abcde.c2.kafka.us-east-1.amazonaws.com:9096`. This attribute will have a value if `encryption_info.0.encryption_in_transit.0.client_broker` is set to `TLS_PLAINTEXT` or `TLS` and `client_authentication.0.sasl.0.scram` is set to `true`. The resource sorts the list alphabetically. AWS may not always return all endpoints so the values may not be stable across applies. +* `bootstrap_brokers_tls` - One or more DNS names (or IP addresses) and TLS port pairs. 
For example, `b-1.exampleClusterName.abcde.c2.kafka.us-east-1.amazonaws.com:9094,b-2.exampleClusterName.abcde.c2.kafka.us-east-1.amazonaws.com:9094,b-3.exampleClusterName.abcde.c2.kafka.us-east-1.amazonaws.com:9094`. This attribute will have a value if `encryption_info.0.encryption_in_transit.0.client_broker` is set to `TLS_PLAINTEXT` or `TLS`. The resource sorts the list alphabetically. AWS may not always return all endpoints so the values may not be stable across applies. * `current_version` - Current version of the MSK Cluster used for updates, e.g. `K13V1IB3VIYZZH` * `encryption_info.0.encryption_at_rest_kms_key_arn` - The ARN of the KMS key used for encryption at rest of the broker data volumes. * `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). diff --git a/website/docs/r/rds_global_cluster.html.markdown b/website/docs/r/rds_global_cluster.html.markdown index 8a47ab5876d..a418d42d4f5 100644 --- a/website/docs/r/rds_global_cluster.html.markdown +++ b/website/docs/r/rds_global_cluster.html.markdown @@ -14,7 +14,60 @@ More information about Aurora global databases can be found in the [Aurora User ## Example Usage -### New Global Cluster +### New MySQL Global Cluster + +```terraform +resource "aws_rds_global_cluster" "example" { + global_cluster_identifier = "global-test" + engine = "aurora" + engine_version = "5.6.mysql_aurora.1.22.2" + database_name = "example_db" +} + +resource "aws_rds_cluster" "primary" { + provider = aws.primary + engine = aws_rds_global_cluster.example.engine + engine_version = aws_rds_global_cluster.example.engine_version + cluster_identifier = "test-primary-cluster" + master_username = "username" + master_password = "somepass123" + database_name = "example_db" + global_cluster_identifier = aws_rds_global_cluster.example.id + db_subnet_group_name = "default" +} + +resource 
"aws_rds_cluster_instance" "primary" { + provider = aws.primary + identifier = "test-primary-cluster-instance" + cluster_identifier = aws_rds_cluster.primary.id + instance_class = "db.r4.large" + db_subnet_group_name = "default" +} + +resource "aws_rds_cluster" "secondary" { + provider = aws.secondary + engine = aws_rds_global_cluster.example.engine + engine_version = aws_rds_global_cluster.example.engine_version + cluster_identifier = "test-secondary-cluster" + global_cluster_identifier = aws_rds_global_cluster.example.id + db_subnet_group_name = "default" +} + +resource "aws_rds_cluster_instance" "secondary" { + provider = aws.secondary + identifier = "test-secondary-cluster-instance" + cluster_identifier = aws_rds_cluster.secondary.id + instance_class = "db.r4.large" + db_subnet_group_name = "default" + + depends_on = [ + aws_rds_cluster_instance.primary + ] +} +``` + +### New PostgreSQL Global Cluster + ```terraform provider "aws" { @@ -24,45 +77,64 @@ provider "aws" { provider "aws" { alias = "secondary" - region = "us-west-2" + region = "us-east-1" } resource "aws_rds_global_cluster" "example" { - provider = aws.primary - - global_cluster_identifier = "example" + global_cluster_identifier = "global-test" + engine = "aurora-postgresql" + engine_version = "11.9" + database_name = "example_db" } resource "aws_rds_cluster" "primary" { - provider = aws.primary - - # ... other configuration ... + provider = aws.primary + engine = aws_rds_global_cluster.example.engine + engine_version = aws_rds_global_cluster.example.engine_version + cluster_identifier = "test-primary-cluster" + master_username = "username" + master_password = "somepass123" + database_name = "example_db" global_cluster_identifier = aws_rds_global_cluster.example.id + db_subnet_group_name = "default" } resource "aws_rds_cluster_instance" "primary" { - provider = aws.primary - - # ... other configuration ... 
- cluster_identifier = aws_rds_cluster.primary.id + provider = aws.primary + engine = aws_rds_global_cluster.example.engine + engine_version = aws_rds_global_cluster.example.engine_version + identifier = "test-primary-cluster-instance" + cluster_identifier = aws_rds_cluster.primary.id + instance_class = "db.r4.large" + db_subnet_group_name = "default" } resource "aws_rds_cluster" "secondary" { - depends_on = [aws_rds_cluster_instance.primary] - provider = aws.secondary - - # ... other configuration ... + provider = aws.secondary + engine = aws_rds_global_cluster.example.engine + engine_version = aws_rds_global_cluster.example.engine_version + cluster_identifier = "test-secondary-cluster" global_cluster_identifier = aws_rds_global_cluster.example.id + skip_final_snapshot = true + db_subnet_group_name = "default" + + depends_on = [ + aws_rds_cluster_instance.primary + ] } resource "aws_rds_cluster_instance" "secondary" { - provider = aws.secondary - - # ... other configuration ... - cluster_identifier = aws_rds_cluster.secondary.id + provider = aws.secondary + engine = aws_rds_global_cluster.example.engine + engine_version = aws_rds_global_cluster.example.engine_version + identifier = "test-secondary-cluster-instance" + cluster_identifier = aws_rds_cluster.secondary.id + instance_class = "db.r4.large" + db_subnet_group_name = "default" } ``` + ### New Global Cluster From Existing DB Cluster ```terraform diff --git a/website/docs/r/schemas_discoverer.html.markdown b/website/docs/r/schemas_discoverer.html.markdown new file mode 100644 index 00000000000..2889b34ca60 --- /dev/null +++ b/website/docs/r/schemas_discoverer.html.markdown @@ -0,0 +1,51 @@ +--- +subcategory: "EventBridge Schemas" +layout: "aws" +page_title: "AWS: aws_schemas_discoverer" +description: |- + Provides an EventBridge Schema Discoverer resource. +--- + +# Resource: aws_schemas_discoverer + +Provides an EventBridge Schema Discoverer resource. 
+ +~> **Note:** EventBridge was formerly known as CloudWatch Events. The functionality is identical. + + +## Example Usage + +```terraform +resource "aws_cloudwatch_event_bus" "messenger" { + name = "chat-messages" +} + +resource "aws_schemas_discoverer" "test" { + source_arn = aws_cloudwatch_event_bus.messenger.arn + description = "Auto discover event schemas" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `source_arn` - (Required) The ARN of the event bus to discover event schemas on. +* `description` - (Optional) The description of the discoverer. Maximum of 256 characters. +* `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](https://www.terraform.io/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The Amazon Resource Name (ARN) of the discoverer. +* `id` - The ID of the discoverer. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://www.terraform.io/docs/providers/aws/index.html#default_tags-configuration-block). + +## Import + +EventBridge discoverers can be imported using the `id`, e.g. + +```console +$ terraform import aws_schemas_discoverer.test 123 +``` diff --git a/website/docs/r/schemas_registry.html.markdown b/website/docs/r/schemas_registry.html.markdown new file mode 100644 index 00000000000..d554f9b9b7c --- /dev/null +++ b/website/docs/r/schemas_registry.html.markdown @@ -0,0 +1,45 @@ +--- +subcategory: "EventBridge Schemas" +layout: "aws" +page_title: "AWS: aws_schemas_registry" +description: |- + Provides an EventBridge Custom Schema Registry resource. 
+--- + +# Resource: aws_schemas_registry + +Provides an EventBridge Custom Schema Registry resource. + +~> **Note:** EventBridge was formerly known as CloudWatch Events. The functionality is identical. + +## Example Usage + +```terraform +resource "aws_schemas_registry" "test" { + name = "my_own_registry" + description = "A custom schema registry" +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the custom event schema registry. Maximum of 64 characters consisting of lower case letters, upper case letters, 0-9, ., -, _. +* `description` - (Optional) The description of the registry. Maximum of 256 characters. +* `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](https://www.terraform.io/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The Amazon Resource Name (ARN) of the registry. +* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://www.terraform.io/docs/providers/aws/index.html#default_tags-configuration-block). + +## Import + +EventBridge schema registries can be imported using the `name`, e.g. + +```console +$ terraform import aws_schemas_registry.test my_own_registry +``` diff --git a/website/docs/r/schemas_schema.html.markdown b/website/docs/r/schemas_schema.html.markdown new file mode 100644 index 00000000000..96ed3eb9b8f --- /dev/null +++ b/website/docs/r/schemas_schema.html.markdown @@ -0,0 +1,78 @@ +--- +subcategory: "EventBridge Schemas" +layout: "aws" +page_title: "AWS: aws_schemas_schema" +description: |- + Provides an EventBridge Schema resource.
+--- + +# Resource: aws_schemas_schema + +Provides an EventBridge Schema resource. + +~> **Note:** EventBridge was formerly known as CloudWatch Events. The functionality is identical. + +## Example Usage + +```terraform +resource "aws_schemas_registry" "test" { + name = "my_own_registry" +} + +resource "aws_schemas_schema" "test" { + name = "my_schema" + registry_name = aws_schemas_registry.test.name + type = "OpenApi3" + description = "The schema definition for my event" + + content = jsonencode({ + "openapi" : "3.0.0", + "info" : { + "version" : "1.0.0", + "title" : "Event" + }, + "paths" : {}, + "components" : { + "schemas" : { + "Event" : { + "type" : "object", + "properties" : { + "name" : { + "type" : "string" + } + } + } + } + } + }) +} +``` + +## Argument Reference + +The following arguments are supported: + +* `name` - (Required) The name of the schema. Maximum of 385 characters consisting of lower case letters, upper case letters, ., -, _, @. +* `content` - (Required) The schema specification. Must be a valid Open API 3.0 spec. +* `registry_name` - (Required) The name of the registry to which this schema belongs. +* `type` - (Required) The type of the schema. Valid values: `OpenApi3`. +* `description` - (Optional) The description of the schema. Maximum of 256 characters. +* `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](https://www.terraform.io/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The Amazon Resource Name (ARN) of the schema. +* `last_modified` - The last modified date of the schema.
+* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://www.terraform.io/docs/providers/aws/index.html#default_tags-configuration-block). +* `version` - The version of the schema. +* `version_created_date` - The created date of the version of the schema. + +## Import + +EventBridge schemas can be imported using the `name` and `registry_name`, e.g. + +```console +$ terraform import aws_schemas_schema.test name/registry +``` diff --git a/website/docs/r/servicecatalog_budget_resource_association.html.markdown b/website/docs/r/servicecatalog_budget_resource_association.html.markdown new file mode 100644 index 00000000000..f1071ef69ef --- /dev/null +++ b/website/docs/r/servicecatalog_budget_resource_association.html.markdown @@ -0,0 +1,45 @@ +--- +subcategory: "Service Catalog" +layout: "aws" +page_title: "AWS: aws_servicecatalog_budget_resource_association" +description: |- + Manages a Service Catalog Budget Resource Association +--- + +# Resource: aws_servicecatalog_budget_resource_association + +Manages a Service Catalog Budget Resource Association. + +-> **Tip:** A "resource" is either a Service Catalog portfolio or product. + +## Example Usage + +### Basic Usage + +```terraform +resource "aws_servicecatalog_budget_resource_association" "example" { + budget_name = "budget-pjtvyakdlyo3m" + resource_id = "prod-dnigbtea24ste" +} +``` + +## Argument Reference + +The following arguments are required: + +* `budget_name` - (Required) Budget name. +* `resource_id` - (Required) Resource identifier. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Identifier of the association. + +## Import + +`aws_servicecatalog_budget_resource_association` can be imported using the budget name and resource ID, e.g.
+ +``` +$ terraform import aws_servicecatalog_budget_resource_association.example budget-pjtvyakdlyo3m:prod-dnigbtea24ste +``` diff --git a/website/docs/r/servicecatalog_principal_portfolio_association.html.markdown b/website/docs/r/servicecatalog_principal_portfolio_association.html.markdown new file mode 100644 index 00000000000..90c53712350 --- /dev/null +++ b/website/docs/r/servicecatalog_principal_portfolio_association.html.markdown @@ -0,0 +1,48 @@ +--- +subcategory: "Service Catalog" +layout: "aws" +page_title: "AWS: aws_servicecatalog_principal_portfolio_association" +description: |- + Manages a Service Catalog Principal Portfolio Association +--- + +# Resource: aws_servicecatalog_principal_portfolio_association + +Manages a Service Catalog Principal Portfolio Association. + +## Example Usage + +### Basic Usage + +```terraform +resource "aws_servicecatalog_principal_portfolio_association" "example" { + portfolio_id = "port-68656c6c6f" + principal_arn = "arn:aws:iam::123456789012:user/Eleanor" +} +``` + +## Argument Reference + +The following arguments are required: + +* `portfolio_id` - (Required) Portfolio identifier. +* `principal_arn` - (Required) Principal ARN. + +The following arguments are optional: + +* `accept_language` - (Optional) Language code. Valid values: `en` (English), `jp` (Japanese), `zh` (Chinese). Default value is `en`. +* `principal_type` - (Optional) Principal type. Setting this argument empty (e.g., `principal_type = ""`) will result in an error. Valid value is `IAM`. Default is `IAM`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Identifier of the association. + +## Import + +`aws_servicecatalog_principal_portfolio_association` can be imported using the accept language, principal ARN, and portfolio ID, separated by a comma, e.g. 
+ +``` +$ terraform import aws_servicecatalog_principal_portfolio_association.example en,arn:aws:iam::123456789012:user/Eleanor,port-68656c6c6f +``` diff --git a/website/docs/r/servicecatalog_provisioning_artifact.html.markdown b/website/docs/r/servicecatalog_provisioning_artifact.html.markdown new file mode 100644 index 00000000000..9e508da64e6 --- /dev/null +++ b/website/docs/r/servicecatalog_provisioning_artifact.html.markdown @@ -0,0 +1,64 @@ +--- +subcategory: "Service Catalog" +layout: "aws" +page_title: "AWS: aws_servicecatalog_provisioning_artifact" +description: |- + Manages a Service Catalog Provisioning Artifact +--- + +# Resource: aws_servicecatalog_provisioning_artifact + +Manages a Service Catalog Provisioning Artifact for a specified product. + +-> A "provisioning artifact" is also referred to as a "version." + +~> **NOTE:** You cannot create a provisioning artifact for a product that was shared with you. + +~> **NOTE:** The user or role that uses this resource must have the `cloudformation:GetTemplate` IAM policy permission. This policy permission is required when using the `template_physical_id` argument. + +## Example Usage + +### Basic Usage + +```terraform +resource "aws_servicecatalog_provisioning_artifact" "example" { + name = "example" + product_id = aws_servicecatalog_product.example.id + type = "CLOUD_FORMATION_TEMPLATE" + template_url = "https://${aws_s3_bucket.example.bucket_regional_domain_name}/${aws_s3_bucket_object.example.key}" +} +``` + +## Argument Reference + +The following arguments are required: + +* `product_id` - (Required) Identifier of the product. +* `template_physical_id` - (Required if `template_url` is not provided) Template source as the physical ID of the resource that contains the template. Currently only supports CloudFormation stack ARN. Specify the physical ID as `arn:[partition]:cloudformation:[region]:[account ID]:stack/[stack name]/[resource ID]`.
+* `template_url` - (Required if `template_physical_id` is not provided) Template source as URL of the CloudFormation template in Amazon S3. + +The following arguments are optional: + +* `accept_language` - (Optional) Language code. Valid values: `en` (English), `jp` (Japanese), `zh` (Chinese). The default value is `en`. +* `active` - (Optional) Whether the product version is active. Inactive provisioning artifacts are invisible to end users. End users cannot launch or update a provisioned product from an inactive provisioning artifact. Default is `true`. +* `description` - (Optional) Description of the provisioning artifact (i.e., version), including how it differs from the previous provisioning artifact. +* `disable_template_validation` - (Optional) Whether AWS Service Catalog stops validating the specified provisioning artifact template even if it is invalid. +* `guidance` - (Optional) Information set by the administrator to provide guidance to end users about which provisioning artifacts to use. Valid values are `DEFAULT` and `DEPRECATED`. The default is `DEFAULT`. Users are able to make updates to a provisioned product of a deprecated version but cannot launch new provisioned products using a deprecated version. +* `name` - (Optional) Name of the provisioning artifact (for example, `v1`, `v2beta`). No spaces are allowed. +* `type` - (Optional) Type of provisioning artifact. Valid values: `CLOUD_FORMATION_TEMPLATE`, `MARKETPLACE_AMI`, `MARKETPLACE_CAR` (Marketplace Clusters and AWS Resources). + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `created_time` - Time when the provisioning artifact was created. +* `id` - Provisioning Artifact identifier and product identifier separated by a colon. +* `status` - Status of the provisioning artifact. + +## Import + +`aws_servicecatalog_provisioning_artifact` can be imported using the provisioning artifact ID and product ID separated by a colon, e.g. 
+ +``` +$ terraform import aws_servicecatalog_provisioning_artifact.example pa-ij2b6lusy6dec:prod-el3an0rma3 +``` diff --git a/website/docs/r/servicecatalog_tag_option_resource_association.html.markdown b/website/docs/r/servicecatalog_tag_option_resource_association.html.markdown new file mode 100644 index 00000000000..bb9f001d426 --- /dev/null +++ b/website/docs/r/servicecatalog_tag_option_resource_association.html.markdown @@ -0,0 +1,49 @@ +--- +subcategory: "Service Catalog" +layout: "aws" +page_title: "AWS: aws_servicecatalog_tag_option_resource_association" +description: |- + Manages a Service Catalog Tag Option Resource Association +--- + +# Resource: aws_servicecatalog_tag_option_resource_association + +Manages a Service Catalog Tag Option Resource Association. + +-> **Tip:** A "resource" is either a Service Catalog portfolio or product. + +## Example Usage + +### Basic Usage + +```terraform +resource "aws_servicecatalog_tag_option_resource_association" "example" { + resource_id = "prod-dnigbtea24ste" + tag_option_id = "tag-pjtvyakdlyo3m" +} +``` + +## Argument Reference + +The following arguments are required: + +* `resource_id` - (Required) Resource identifier. +* `tag_option_id` - (Required) Tag Option identifier. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - Identifier of the association. +* `resource_arn` - ARN of the resource. +* `resource_created_time` - Creation time of the resource. +* `resource_description` - Description of the resource. +* `resource_name` - Name of the resource. + +## Import + +`aws_servicecatalog_tag_option_resource_association` can be imported using the tag option ID and resource ID separated by a colon, e.g. 
+ +``` +$ terraform import aws_servicecatalog_tag_option_resource_association.example tag-pjtvyakdlyo3m:prod-dnigbtea24ste +``` diff --git a/website/docs/r/sns_topic.html.markdown b/website/docs/r/sns_topic.html.markdown index 3273f1e684e..785ef0b5e15 100644 --- a/website/docs/r/sns_topic.html.markdown +++ b/website/docs/r/sns_topic.html.markdown @@ -92,6 +92,9 @@ The following arguments are supported: * `sqs_success_feedback_role_arn` - (Optional) The IAM role permitted to receive success feedback for this topic * `sqs_success_feedback_sample_rate` - (Optional) Percentage of success to sample * `sqs_failure_feedback_role_arn` - (Optional) IAM role for failure feedback +* `firehose_success_feedback_role_arn` - (Optional) The IAM role permitted to receive success feedback for this topic +* `firehose_success_feedback_sample_rate` - (Optional) Percentage of success to sample +* `firehose_failure_feedback_role_arn` - (Optional) IAM role for failure feedback * `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. ## Attributes Reference @@ -100,6 +103,7 @@ In addition to all arguments above, the following attributes are exported: * `id` - The ARN of the SNS topic * `arn` - The ARN of the SNS topic, as a more obvious property (clone of id) +* `owner` - The AWS Account ID of the SNS topic owner * `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block). 
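+The new `firehose_*` delivery status arguments follow the same pattern as the existing SQS, HTTP, and Lambda feedback arguments. A minimal sketch of configuring them together (the topic name, role reference, and sample rate are illustrative, not part of this change):
+
+```terraform
+resource "aws_sns_topic" "example" {
+  name = "example-topic"
+
+  # Log delivery status for notifications sent to Kinesis Data Firehose endpoints;
+  # the referenced IAM role must allow SNS to write to CloudWatch Logs.
+  firehose_success_feedback_role_arn    = aws_iam_role.sns_feedback.arn
+  firehose_success_feedback_sample_rate = 100
+  firehose_failure_feedback_role_arn    = aws_iam_role.sns_feedback.arn
+}
+```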
## Import diff --git a/website/docs/r/ssoadmin_managed_policy_attachment.html.markdown b/website/docs/r/ssoadmin_managed_policy_attachment.html.markdown index 29441ce6844..e162225aa5b 100644 --- a/website/docs/r/ssoadmin_managed_policy_attachment.html.markdown +++ b/website/docs/r/ssoadmin_managed_policy_attachment.html.markdown @@ -23,7 +23,7 @@ resource "aws_ssoadmin_permission_set" "example" { } resource "aws_ssoadmin_managed_policy_attachment" "example" { - instance_arn = aws_ssoadmin_permission_set.example.instance_arn + instance_arn = tolist(data.aws_ssoadmin_instances.example.arns)[0] managed_policy_arn = "arn:aws:iam::aws:policy/AlexaForBusinessDeviceSetup" permission_set_arn = aws_ssoadmin_permission_set.example.arn } diff --git a/website/docs/r/wafv2_rule_group.html.markdown b/website/docs/r/wafv2_rule_group.html.markdown index ebc521a8021..a8432218bf1 100644 --- a/website/docs/r/wafv2_rule_group.html.markdown +++ b/website/docs/r/wafv2_rule_group.html.markdown @@ -307,11 +307,49 @@ Each `rule` supports the following arguments: The `action` block supports the following arguments: -~> **NOTE:** One of `allow`, `block`, or `count`, expressed as an empty configuration block `{}`, is required when specifying an `action` +~> **NOTE:** One of `allow`, `block`, or `count` is required when specifying an `action`. -* `allow` - (Optional) Instructs AWS WAF to allow the web request. -* `block` - (Optional) Instructs AWS WAF to block the web request. -* `count` - (Optional) Instructs AWS WAF to count the web request and allow it. +* `allow` - (Optional) Instructs AWS WAF to allow the web request. See [Allow](#allow) below for details. +* `block` - (Optional) Instructs AWS WAF to block the web request. See [Block](#block) below for details. +* `count` - (Optional) Instructs AWS WAF to count the web request and allow it. See [Count](#count) below for details. 
+ +### Allow + +The `allow` block supports the following arguments: + +* `custom_request_handling` - (Optional) Defines custom handling for the web request. See [Custom Request Handling](#custom-request-handling) below for details. + +### Block + +The `block` block supports the following arguments: + +* `custom_response` - (Optional) Defines a custom response for the web request. See [Custom Response](#custom-response) below for details. + +### Count + +The `count` block supports the following arguments: + +* `custom_request_handling` - (Optional) Defines custom handling for the web request. See [Custom Request Handling](#custom-request-handling) below for details. + +### Custom Request Handling + +The `custom_request_handling` block supports the following arguments: + +* `insert_header` - (Required) The `insert_header` blocks used to define HTTP headers added to the request. See [Custom HTTP Header](#custom-http-header) below for details. + +### Custom Response + +The `custom_response` block supports the following arguments: + +* `response_code` - (Optional) The HTTP status code to return to the client. +* `response_header` - (Optional) The `response_header` blocks used to define the HTTP response headers added to the response. See [Custom HTTP Header](#custom-http-header) below for details. + +### Custom HTTP Header + +Each block supports the following arguments. Duplicate header names are not allowed: + +* `name` - The name of the custom header. For custom request header insertion, when AWS WAF inserts the header into the request, it prefixes this name with `x-amzn-waf-`, to avoid confusion with the headers that are already in the request. For example, for the header name `sample`, AWS WAF inserts the header `x-amzn-waf-sample`. +* `value` - The value of the custom header. 
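+As an illustration of how these blocks compose, a rule `action` that counts matching requests while tagging them with a custom header might look like the following (the header name and value are hypothetical):
+
+```terraform
+action {
+  count {
+    custom_request_handling {
+      insert_header {
+        # Sent upstream as `x-amzn-waf-rule-hit` once AWS WAF adds its prefix
+        name  = "rule-hit"
+        value = "true"
+      }
+    }
+  }
+}
+```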
### Statement diff --git a/website/docs/r/wafv2_web_acl.html.markdown b/website/docs/r/wafv2_web_acl.html.markdown index de0f98a1745..14640f4fc27 100644 --- a/website/docs/r/wafv2_web_acl.html.markdown +++ b/website/docs/r/wafv2_web_acl.html.markdown @@ -269,8 +269,8 @@ The `default_action` block supports the following arguments: ~> **NOTE:** One of `allow` or `block`, expressed as an empty configuration block `{}`, is required when specifying a `default_action` -* `allow` - (Optional) Specifies that AWS WAF should allow requests by default. -* `block` - (Optional) Specifies that AWS WAF should block requests by default. +* `allow` - (Optional) Specifies that AWS WAF should allow requests by default. See [Allow](#allow) below for details. +* `block` - (Optional) Specifies that AWS WAF should block requests by default. See [Block](#block) below for details. ### Rules @@ -289,11 +289,11 @@ Each `rule` supports the following arguments: The `action` block supports the following arguments: -~> **NOTE:** One of `allow`, `block`, or `count`, expressed as an empty configuration block `{}`, is required when specifying an `action` +~> **NOTE:** One of `allow`, `block`, or `count` is required when specifying an `action`. -* `allow` - (Optional) Instructs AWS WAF to allow the web request. Configure as an empty block `{}`. -* `block` - (Optional) Instructs AWS WAF to block the web request. Configure as an empty block `{}`. -* `count` - (Optional) Instructs AWS WAF to count the web request and allow it. Configure as an empty block `{}`. +* `allow` - (Optional) Instructs AWS WAF to allow the web request. See [Allow](#allow) below for details. +* `block` - (Optional) Instructs AWS WAF to block the web request. See [Block](#block) below for details. +* `count` - (Optional) Instructs AWS WAF to count the web request and allow it. See [Count](#count) below for details. 
### Override Action @@ -304,6 +304,44 @@ The `override_action` block supports the following arguments: * `count` - (Optional) Override the rule action setting to count (i.e. only count matches). Configured as an empty block `{}`. * `none` - (Optional) Don't override the rule action setting. Configured as an empty block `{}`. +### Allow + +The `allow` block supports the following arguments: + +* `custom_request_handling` - (Optional) Defines custom handling for the web request. See [Custom Request Handling](#custom-request-handling) below for details. + +### Block + +The `block` block supports the following arguments: + +* `custom_response` - (Optional) Defines a custom response for the web request. See [Custom Response](#custom-response) below for details. + +### Count + +The `count` block supports the following arguments: + +* `custom_request_handling` - (Optional) Defines custom handling for the web request. See [Custom Request Handling](#custom-request-handling) below for details. + +### Custom Request Handling + +The `custom_request_handling` block supports the following arguments: + +* `insert_header` - (Required) The `insert_header` blocks used to define HTTP headers added to the request. See [Custom HTTP Header](#custom-http-header) below for details. + +### Custom Response + +The `custom_response` block supports the following arguments: + +* `response_code` - (Optional) The HTTP status code to return to the client. +* `response_header` - (Optional) The `response_header` blocks used to define the HTTP response headers added to the response. See [Custom HTTP Header](#custom-http-header) below for details. + +### Custom HTTP Header + +Each block supports the following arguments. Duplicate header names are not allowed: + +* `name` - The name of the custom header. For custom request header insertion, when AWS WAF inserts the header into the request, it prefixes this name with `x-amzn-waf-`, to avoid confusion with the headers that are already in the request. 
For example, for the header name `sample`, AWS WAF inserts the header `x-amzn-waf-sample`. +* `value` - The value of the custom header. + ### Statement The processing guidance for a Rule, used by AWS WAF to determine whether a web request matches the rule. See the [documentation](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statements-list.html) for more information.